Evidence-Based Security
Christopher Frenz & Jonathan Reiber
REPORT
The Industry’s #1
Breach and Attack
Simulation Platform
Be confident in your security posture with
continuous, automated security control
validation—in production and at scale.
www.attackiq.com
ISBN: 978-1-098-14891-1
Introduction
unify the way they investigate, produce objective evidence, and make better-informed decisions about how to improve control effectiveness. This process encompasses individual, fine-grained controls; the overall performance of security architectures; and security programs as a whole. In a world where growing IT complexity is often labeled the enemy of security, an evidence-based methodology helps organizations increase visibility and control, decrease risk and uncertainty, and unify stakeholders around a common understanding of how well their organizations are protected against relevant threats.
Building evidence-based security programs requires a common language that all security professionals can access and work with, no matter the size of their organization or security budget. One great way to develop a threat-informed defense strategy is by using the MITRE ATT&CK®2 framework. This publicly available framework includes a vast knowledge base of adversarial tactics, techniques, and procedures (TTPs) that anyone can access on the web and use for their own threat modeling. This common language of numbered TTPs, along with the mitigation information available for each one, has given security professionals across the globe a way to work in a more standardized fashion and achieve better results in their threat intelligence efforts, detection engineering, incident triage, and implementation of adequate controls.
Leveraging the common language provided by the ATT&CK framework, security teams can work with its documented and continually updated adversary behaviors and mitigations, relying on it as the first stage of an evidence-based security framework. That framework is at the core of driving efficiency in your organization's security program and of measurably improving it over time. The six steps of the evidence-based security framework are:
2 MITRE ATT&CK is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations.
5. Remediating
6. Repeating
CHAPTER 1
Protecting Data and the Places Where Data Resides
advisory entities, and commercial solutions to protect data and to
counter the many attacks that can compromise its desired state.
Consequently, organizations around the world spent roughly USD $150 billion in 2021 on cybersecurity,2 and this spending is growing by 12.4% annually. Meanwhile, the global cybersecurity market is
projected to continue to grow from USD $155.83 billion in 2022 to
USD $376.32 billion by 2029, exhibiting a compound annual growth
rate (CAGR) of 13.4%.3
These growth statistics are quite impressive. But the paradox is that even though organizations and their security teams are spending more money each year on cybersecurity, successful data breaches and other cyberattacks (and the resulting costly fallout) continue to increase and worsen over time.
2 Bharath Aiyer et al., “New Survey Reveals $2 Trillion Market Opportunity for Cybersecurity Technology and Service Providers”, McKinsey & Company, October 27, 2022.
3 “Cyber Security Market Size, Share & COVID-19 Impact Analysis”, Fortune Business Insights, April 2023.
4 IBM Security, “Cost of a Data Breach: Report 2022”, 2022.
5 IBM Security, “Cost of a Data Breach: Report 2022”, 2022.
6 “IBM Security X-Force Threat Intelligence Index 2023”, IBM, accessed April 17, 2023.
7 Tara Siegel Bernard, Tiffany Hsu, Nicole Perlroth, and Ron Lieber, “Equifax Says Cyberattack May Have Affected 143 Million in the U.S.”, New York Times, September 7, 2017.
8 “Ending the Era of Security Control Failure”, AttackIQ, accessed April 17, 2023.
• Preventive controls
• Detective controls
• Corrective controls
Drilling down into each item on this list covers every type of mitigation measure typically employed to reduce the risk of a breach of digitized environments and of impact to data. Controls can be physical, technical, or administrative (see Table 1-1).
knowledge base, attacker TTPs are categorized by attacker objectives, making it easier to look for relevant TTPs. Each is also numbered so that defenders can refer to the same item in their proactive plans and response activities. There are 14 areas (columns) for tactics, and under each tactic are a variety of techniques and subtechniques defenders can explore. At the technique level, defenders can also find a detection and mitigation section that can help cut through to effective control measures specific to each attacker technique.
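ATT&CK is also published as machine-readable STIX data, which makes it straightforward to pull relevant TTPs into your own tooling. As a rough sketch (assuming network access, the third-party requests library, and the public enterprise-attack JSON bundle in MITRE's cti GitHub repository), the following Python snippet lists the techniques mapped to a given tactic:

    import requests

    # Public STIX bundle for ATT&CK Enterprise (assumed URL; see MITRE's cti repository).
    ATTACK_URL = ("https://raw.githubusercontent.com/mitre/cti/master/"
                  "enterprise-attack/enterprise-attack.json")

    def techniques_for_tactic(tactic: str) -> list:
        """Return (ATT&CK ID, name) pairs for techniques mapped to a tactic
        shortname such as 'privilege-escalation' (TA0004)."""
        bundle = requests.get(ATTACK_URL, timeout=60).json()
        found = []
        for obj in bundle["objects"]:
            # Technique objects are STIX attack-patterns; skip revoked entries.
            if obj.get("type") != "attack-pattern" or obj.get("revoked"):
                continue
            phases = [p["phase_name"] for p in obj.get("kill_chain_phases", [])]
            if tactic in phases:
                ext_id = next((r["external_id"] for r in obj.get("external_references", [])
                               if r.get("source_name") == "mitre-attack"), "?")
                found.append((ext_id, obj["name"]))
        return sorted(found)

    if __name__ == "__main__":
        for tech_id, name in techniques_for_tactic("privilege-escalation"):
            print(tech_id, name)

Working from the same data that backs the website keeps scripted threat modeling and the numbered TTPs in your plans aligned with the published knowledge base.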
3. Simulate Simulate the threat's effect on your infrastructure and networks. This allows you to collect the metrics you developed to get real-world insight into the damage a particular threat could do and to evaluate the efficacy of the applicable controls. To test complex attack scenarios, a breach and attack simulation (BAS) tool can be considered.
4. Analyze Analyze the data to identify any deficiencies and possible changes that could be implemented to improve security.
2 View the catalog of enterprise tactics. For information on tactic TA0004, privilege escalation, see https://oreil.ly/4QDRj.
Group Policy Objects (GPOs) help security teams govern security settings and manage the user population's permissions in a centralized and standardized way. As such, administrative control over group policies enforces them across the entire domain, which directly impacts organizational security levels. Using least privilege here is foundational: provide this type of access to as few people as possible. Admin accounts should be configured for security and isolation, then monitored and their activity logged (including enabling PowerShell auditing via the Registry or a GPO). All logs can be consumed by a security information and event management (SIEM) solution, and alerts can be created for suspicious events.
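As one illustration of the Registry route mentioned above, the Python sketch below enables PowerShell script block logging on a single Windows host. The ScriptBlockLogging policy key is the standard location for this setting, but treat the snippet as a local, assumption-laden example that requires administrative rights; rolling the setting out domain-wide is what the equivalent GPO is for.

    import winreg  # Windows-only standard library module

    # Policy key behind the "Turn on PowerShell Script Block Logging" GPO setting.
    KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging"

    def enable_script_block_logging() -> None:
        """Enable PowerShell script block logging via the Registry (requires admin)."""
        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                                winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "EnableScriptBlockLogging", 0, winreg.REG_DWORD, 1)

    if __name__ == "__main__":
        enable_script_block_logging()
        print("Script block logging enabled; events appear in the "
              "Microsoft-Windows-PowerShell/Operational log (event ID 4104).")

Once those events are forwarded to the SIEM, the suspicious-activity alerts described above can be built on top of them.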
The instructions presented for each identified technique are often specific enough to enable security professionals to improve control efficacy.
This initial step of mapping threats to known TTPs is the first, preparatory step in working with the evidence-based security framework. Next, we will go through the entire framework with another example.
6 Jai Vijayan, “Attackers Were on Network for 2 Years, News Corp Says”, Dark Reading,
February 27, 2023.
Step 3: Simulate
The third step entails simulating an attack to measure how the
selected controls detect the TTPs we are testing against. For this
example, one way to check whether egress filtering is adequate is to
test for ports that allow outbound traffic. An easy way to do that is
to use Nmap, a free network scanner. This quick test does not
replace fully auditing firewall rules, but it can be a fast way to know
if the filtering is not effective.
We would begin by initiating an Nmap port scan on all TCP ports
(65,535 in total) from an internal host to an internet host.7 One could
test UDP ports as well, but the TCP scan is where we begin. We can
run Nmap against allports.exposed, which is a free service that will
respond on all 65,000+ ports, and thus any port that generates a
response to the scan is one that remains unfiltered by the firewall.
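As a rough sketch of that check, the snippet below attempts outbound TCP connections to allports.exposed and reports which ports get a response, meaning the firewall did not filter them. It scans only a small sample of ports by default, since sweeping all 65,535 over plain sockets is slow (which is exactly why the text reaches for Nmap, e.g., nmap -Pn -p- allports.exposed); the port list and timeout are assumptions to adjust for your environment.

    import socket

    TARGET = "allports.exposed"  # free service that answers on essentially every TCP port
    TIMEOUT = 2.0                # seconds per connection attempt

    def unfiltered_ports(ports: list) -> list:
        """Return the ports on which an outbound TCP connection succeeded,
        i.e., ports the egress filtering did not block."""
        open_outbound = []
        for port in ports:
            try:
                with socket.create_connection((TARGET, port), timeout=TIMEOUT):
                    open_outbound.append(port)
            except OSError:
                pass  # filtered, closed, or timed out
        return open_outbound

    if __name__ == "__main__":
        # Sample of commonly abused egress ports; widen toward range(1, 65536) as needed.
        sample = [21, 22, 25, 53, 80, 443, 445, 3389, 8080, 8443]
        print("Unfiltered outbound ports:", unfiltered_ports(sample))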
Step 4: Analyze
Using the results of recurring scans, one can enumerate and identify overlooked open ports, then take action to close them, initiate stringent monitoring, and/or protect them with additional controls, filtering rules/policies, etc.
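A minimal way to turn those results into analysis is to compare them against the ports the organization actually intends to leave open for egress; anything outside that list is a finding to investigate. The allowlist below is purely illustrative:

    # Ports deliberately allowed outbound (illustrative values only).
    APPROVED_EGRESS = {53, 80, 443}

    def egress_findings(observed_open: list) -> list:
        """Return observed-open ports that are not on the approved egress list."""
        return sorted(set(observed_open) - APPROVED_EGRESS)

    # Example: feed in the output of the scan from step 3.
    print(egress_findings([53, 80, 443, 445, 8080]))  # -> [445, 8080]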
Step 5: Remediate
This step is about remediation and reducing risk. Here we would take concrete actions to ensure that ports allowing egress traffic are monitored, and that those not intended to allow outbound traffic are closed or otherwise protected. If the firewall is blocking traffic to or from important ports, that issue can also be remediated after the scan.
For organizations that use next-gen firewalls, it would be easy to enforce which application types and protocols are allowed to communicate outbound.
7 Nmap is one way to scan for open ports. One can also opt to use PowerShell to do the same.
Step 6: Repeat
After remediation actions are taken, the attack simulation can be repeated to generate another results report and compare it to the previous one. In our example, testing and retesting will give a better indication of the efficacy of egress filtering than just having it in place without cyclical testing. Keep in mind that in most cases, companies do have perimeter controls (application-level controls) in place that can detect open ports and include that information in on-demand reports.
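To make the retest directly comparable with the previous run, it helps to persist each run's results and diff them. The short sketch below (the file name and JSON format are assumptions) reports which unfiltered ports were closed since the last run and which are new:

    import json
    from pathlib import Path

    BASELINE = Path("egress_scan_previous.json")  # assumed location of the prior run's results

    def compare_runs(current: list) -> None:
        """Diff the current unfiltered-port list against the saved previous run."""
        previous = set(json.loads(BASELINE.read_text())) if BASELINE.exists() else set()
        now = set(current)
        print("Closed since last run:", sorted(previous - now))
        print("Newly unfiltered:", sorted(now - previous))
        BASELINE.write_text(json.dumps(sorted(now)))  # this run becomes the new baseline

    compare_runs([53, 80, 443])  # e.g., 445 and 8080 were closed after remediation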
Big picture: using the evidence-based security framework to increase control effectiveness can produce more certainty about the actual security posture of the organization. When ports are regularly scanned and properly monitored, with metrics covering violation events and patterns, risks are either known or mitigated, and the overall risk to the organization is lower.
Working with this approach in mind can incrementally lower risk over time, but examining only a small number of TTPs at a time is quite granular work. What happens when one must test the elaborate attack flows that adversaries often deploy? The answer is automation by breach and attack simulation (BAS) platforms and tools, a term coined by Gartner.8
8 “Breach and Attack Simulation (BAS) Tools Reviews and Ratings”, Gartner, accessed
April 17, 2023.
other parts of the organization to accelerate boosting defenses and
reducing risk.
Continuous Validation
One of the biggest advantages of using BAS tools is the ability to run
tests repeatedly for continuous validation. This capability allows
security teams to harden their environments and extend their ability
to manage security across complex infrastructures continually and
efficiently. Continuous validation also means access to data-backed
insight into the true state of the organization’s security program and
measurable performance of security controls. Looking at the bigger
picture of the security program from the viewpoint of the executive
suite, on-demand validation ties back to the circular nature of
evidence-based security. By providing data on the state of controls
and their ability to detect and prevent attacks, security leaders can
prove the efficacy of their risk mitigation strategy and justify present
and future security investments.
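BAS platforms schedule and report on this kind of validation for you. As a bare-bones illustration of the idea (interval, test function, and log file are all assumptions), the loop below simply reruns a validation test, such as the egress check shown earlier, on a fixed schedule and appends timestamped results so performance can be trended over time:

    import json
    import time
    from datetime import datetime, timezone

    INTERVAL_SECONDS = 24 * 60 * 60           # assumed cadence: once a day
    RESULTS_LOG = "validation_results.jsonl"  # assumed output file

    def run_validation() -> dict:
        """Placeholder for whatever test is being automated."""
        return {"unfiltered_ports": [53, 80, 443]}  # illustrative result

    while True:
        record = {"timestamp": datetime.now(timezone.utc).isoformat(), **run_validation()}
        with open(RESULTS_LOG, "a") as log:
            log.write(json.dumps(record) + "\n")
        time.sleep(INTERVAL_SECONDS)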
traffic?) and why a visitor would visit today but never return to that website. In the security department, the same goes for metrics like number of alerts logged or number of spam emails caught in the filter. These metrics are easier to generate with security tools, but they don't provide contextual value to act upon, and the lack of value can translate to a security blind spot.
Let's look at one way common vanity metrics can be damaging to organizational security, with an example of alerts generated by a security information and event management (SIEM) solution. Vanity metrics from the SIEM would involve looking at the total number of alerts, then creating a funnel view to show that most were linked to one type of violation. A third metric would show how many issues were left open at the end of each day or week, which could make it appear that most problems were solved. While these metrics show that the team is working on alerts, they do not give risk context. How many alerts ended up being tied to an incident? What was the severity of the issues the team found? How quickly and how well did the team remediate them, and how do the results impact overall risk levels? Vanity metrics may provide some limited value but lack the granularity that would make them actionable. Choosing better metrics helps translate them into actions and, in turn, better results for the security program.
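To make the contrast concrete, here is a small sketch that computes two of the context-rich metrics suggested above, the share of alerts tied to confirmed incidents and the mean time to remediate by severity, from a hypothetical list of alert records (the field names are assumptions about how a SIEM might export its data):

    from statistics import mean

    # Hypothetical SIEM export: one record per alert.
    alerts = [
        {"severity": "high", "incident": True,  "hours_to_remediate": 6},
        {"severity": "high", "incident": False, "hours_to_remediate": None},
        {"severity": "low",  "incident": True,  "hours_to_remediate": 30},
        {"severity": "low",  "incident": False, "hours_to_remediate": None},
    ]

    incident_rate = sum(a["incident"] for a in alerts) / len(alerts)
    print(f"Alerts tied to incidents: {incident_rate:.0%}")

    for sev in ("high", "low"):
        times = [a["hours_to_remediate"] for a in alerts
                 if a["severity"] == sev and a["hours_to_remediate"] is not None]
        if times:
            print(f"Mean time to remediate ({sev}): {mean(times)} hours")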
• Because we don't detect those early TTPs, we don't find out about the intrusion until much later in the attack, which is ineffective.
• We are not improving on the MTTD because we don't have the right controls in place.
• If we enhanced our identity access controls, added segmentation, and added three new PowerShell rules to our endpoint security tool, we would be able to detect those initial foothold attempts earlier in the attack flow.
1 IBM Institute for Business Value, Beyond the Boom: Improving Decision Making in a Security Crisis, January 2018.
What executives are going to look for is how much it cost us to operate the security controls related to this effort, what it cost us to diagnose and contain this incident, and what costs we avoided. More often than not, that equation must come out in favor of the security team, showing that the cost of preventing attacks continues to justify itself.
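As a toy illustration of that equation (all figures invented), the avoided loss has to outweigh what was spent on the controls plus what the response itself cost:

    control_cost = 400_000    # annual spend on the relevant controls (invented)
    response_cost = 150_000   # cost to diagnose and contain the incident (invented)
    avoided_loss = 4_000_000  # estimated breach cost that was prevented (invented)

    net_benefit = avoided_loss - (control_cost + response_cost)
    roi = net_benefit / (control_cost + response_cost)
    print(f"Net benefit: ${net_benefit:,}  ROI: {roi:.1f}x")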
organizations undertake an information security continuous monitoring (ISCM) program1 to collect information readily available from security controls and measure it against pre-established metrics. But while this is one step in the right direction, it is not sufficient. Owning a security control and seeing it operate within a pre-established range does not necessarily mean it will remain effective during an attack scenario. Security cannot be assumed. To know how controls hold up against adversarial tactics, they have to be tested in relevant scenarios, and the results then become the data that's most meaningful for validating the security program's effectiveness in reducing risk.
Addressing information security risk is addressing business risk, and the effectiveness of the security program in keeping risk within the desired bounds should be expressed in metrics that serve the entire organization. While some metrics are only applicable to the SOC or the wider security team, additional metrics must be developed for other stakeholders and upper management. Those will serve to define the success of the security program in its role of mitigating the risks that matter most to business goals.
At the end of the day, businesses exist to thrive in the face of change,
to grow and innovate. Information security must remain an enabler
of those goals and continue to showcase its worth and ROI through
measurably better security.