Evidence-Based Security: Christopher Frenz & Jonathan Reiber


Compliments of
Evidence-Based
Security
Christopher Frenz
& Jonathan Reiber

REPORT
The Industry’s #1
Breach and Attack
Simulation Platform
Be confident in your security posture with
continuous, automated security control
validation—in production and at scale.

www.attackiq.com
Evidence-Based Security

Christopher Frenz and Jonathan Reiber

Beijing Boston Farnham Sebastopol Tokyo


Evidence-Based Security
by Christopher Frenz and Jonathan Reiber
Copyright © 2023 O’Reilly Media Inc. All rights reserved.
Printed in the United States of America.
Published by O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA
95472.
O’Reilly books may be purchased for educational, business, or sales promotional use. Online editions are also available for most titles (http://oreilly.com). For more information, contact our corporate/institutional sales department: 800-998-9938 or [email protected].

Acquisitions Editor: Simina Calin
Development Editor: Charlotte Ames
Production Editor: Clare Laylock
Copyeditor: nSight, Inc.
Proofreader: Clare Laylock
Interior Designer: David Futato
Cover Designer: Randy Comer
Illustrator: Kate Dullea

May 2023: First Edition

Revision History for the First Edition


2023-05-12: First Release

The O’Reilly logo is a registered trademark of O’Reilly Media, Inc. Evidence-Based Security, the cover image, and related trade dress are trademarks of O’Reilly Media, Inc.
The views expressed in this work are those of the authors and do not represent the
publisher’s views. While the publisher and the authors have used good faith efforts
to ensure that the information and instructions contained in this work are accurate,
the publisher and the authors disclaim all responsibility for errors or omissions,
including without limitation responsibility for damages resulting from the use of or
reliance on this work. Use of the information and instructions contained in this
work is at your own risk. If any code samples or other technology this work contains
or describes is subject to open source licenses or the intellectual property rights of
others, it is your responsibility to ensure that your use thereof complies with such
licenses and/or rights.
This work is part of a collaboration between O’Reilly and AttackIQ. See our statement of editorial independence.

978-1-098-14891-1
[LSI]
Table of Contents

Introduction

1. Protecting Data and the Places Where Data Resides
   More Security Is Not Always More Protection
   We Are Compliant, So Why Are We Not Secure?
   The Need to Justify Security Spend

2. The Evidence-Based Security Framework
   Speaking a Common Language
   The Six Steps of the Evidence-Based Security Framework
   Using the Evidence-Based Security Framework to Measure Control Efficiency

3. Breach and Attack Simulation
   The Advantages of BAS
   The Three Stages of BAS
   Cutting Back on Risk with Zero Trust: A Case Study
   Testing for Risk: An Example

4. Evidence-Based Security and Metrics That Matter
   Vanity Metrics Versus Actionable Metrics
   KPI and KRI Metrics
   Focusing on Actionable Metrics with the Evidence-Based Security Framework
   The Metrics View from the C-Suite

Conclusion: Moving Toward Measurably Better Security
Introduction

When it comes to cybersecurity, it has become more than evident in the past few decades that we, as a society, are in a perpetual arms race, one that often makes it hard to discern whether we could ever “win.” All too often, organizations deploy costly security controls to defend their environments from cybersecurity threats without knowing whether those controls are in fact effective, and without measuring their ability to provide defense. Looking at annual reports from analyst firms and market researchers, we see that security spending has been growing exponentially, but organizations don’t have enough to show for it.
Security leaders called on to justify their spending and demonstrate their success in mitigating risk often struggle to do so, leaving their jobs within four years on average,1 many times due to burnout or in the aftermath of data breaches and other attacks. These factors together point to a need to manage security in a more structured and measurable way. Companies should be able to justify costs and expectations and, through this effort, truly bolster security from the inside out.

1 Gary Hayslip, “The CISO Job and Its Short Tenure”, Forbes, February 10, 2020.
This report gives an overview of evidence-based security. It covers a hands-on framework that allows security professionals to make data-informed decisions about the people, technology, and processes that underpin the effectiveness of their organizational security programs.
Applying an evidence-based approach to security means viewing security through a scientific lens. It enables security professionals to unify the way they investigate, produce objective evidence, and make better-informed decisions about how to improve control effectiveness. This process encompasses individual controls at a fine-grained level, the overall performance of security architectures, and security programs as a whole. In a world where growing IT complexity is often labeled the enemy of security, an evidence-based methodology helps organizations increase visibility and control, decrease risk and uncertainty, and unify stakeholders with a common understanding of how well their organizations are protected against relevant threats.
Building evidence-based security programs requires a common language that all security professionals can access and work with, no matter the size of their organization or security budget. One great way to develop a threat-informed defense strategy is by using the MITRE ATT&CK®2 framework. This publicly available framework includes a vast knowledge base of adversarial tactics, techniques, and procedures (TTPs) that anyone can access on the web and use for their own threat modeling. With this common language, the numbered TTPs and the mitigation information available for each one give security professionals across the globe a way to work in a more standardized fashion and achieve better results in their threat intelligence efforts, detection engineering, incident triage, and implementation of adequate controls.
Leveraging the common language provided by the ATT&CK framework, security teams can work with its documented, and continually updated, adversary behaviors and mitigations, relying on it at the first stage of an evidence-based security framework. That framework is at the core of driving efficiency in your organization’s security program and of measurably improving it over time. The six steps of the evidence-based security framework are:

1. Mapping threats to TTPs (with ATT&CK as a common language)
2. Developing metrics
3. Simulating attacks
4. Analyzing results
5. Remediating
6. Repeating

2 MITRE ATT&CK is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations.

Read on to learn more about the evidence-based security framework, including how to use it to make the approach to security a scientific one and what measurable results you can show for it.

CHAPTER 1
Protecting Data and the Places
Where Data Resides

The World Economic Forum estimated that by 2025, individuals and companies around the world will produce 463 exabytes of data each day.1 This ever-growing number includes every sort of data, much of which we use every day in emails, ecommerce, business activity, and the daily operations of the world we live in. Our economy and society have evolved to depend on data and to use it in every aspect of our daily lives, creating both tremendous opportunities and tremendous peril.
This reliance on data, and on the sciences that analyze and utilize it, has become integral to everything we do, making data one of the most valuable assets businesses must safeguard. Since data collected through business, industry, and consumer activity has become as valuable as it is plentiful, protecting it in motion, at rest, and in use at scale is one of the world’s most significant technological challenges. To secure digitized environments and data, the domains of information security and cybersecurity have been rapidly evolving in lockstep, running a perpetual race against a growing list of criminal and nation-state-sponsored adversaries that aim to use the same data for their own nefarious purposes.
In response to this constantly evolving state of affairs, governments and organizations tirelessly develop new frameworks, regulations, advisory entities, and commercial solutions to protect data and to counter the many attacks that can compromise its desired state. Consequently, organizations around the world spent around USD $150 billion on cybersecurity in 2021,2 and this expense is growing by 12.4% annually. Meanwhile, the global cybersecurity market is projected to continue to grow from USD $155.83 billion in 2022 to USD $376.32 billion by 2029, exhibiting a compound annual growth rate (CAGR) of 13.4%.3

1 “How Responsible Data Can Help Us Navigate the Economic Crisis”, World Economic Forum, January 17, 2023.
These growth statistics are quite impressive. But the paradox is that
even though organizations and their security teams are spending
more money each year on cybersecurity, successful data breaches
and other cyberattacks (and the resulting costly fallouts) also con‐
tinue to increase and worsen over time.

More Security Is Not Always More Protection


The cost of cyberattacks to victim organizations has been rising over the years,4 largely due to organizations’ inability to detect attacks quickly and contain them before they progress. On average, in 2022, organizations across the globe needed 277 days to identify and contain a data breach,5 highlighting a marked gap between detection capabilities and the speed with which attackers move toward their objectives. One industry report released in 2023 found that the time it takes ransomware attackers to achieve their objectives dropped by 94% between 2019 and 2022.6 What used to take months now takes mere days, leaving security teams very little time to find out about attacks before it’s too late.
Simply spending more on security over the years does not appear to be the answer. The world’s largest and most protected organizations, including those with hefty cybersecurity budgets and every cutting-edge technology, suffer major data breaches, often exposing the personal information of millions of individuals in one fell swoop.7

2 Bharath Aiyer et al., “New Survey Reveals $2 Trillion Market Opportunity for Cybersecurity Technology and Service Providers”, McKinsey & Company, October 27, 2022.
3 “Cyber Security Market Size, Share & COVID-19 Impact Analysis”, Fortune Business Insights, April 2023.
4 IBM Security, “Cost of a Data Breach Report 2022”, 2022.
5 IBM Security, “Cost of a Data Breach Report 2022”, 2022.
6 “IBM Security X-Force Threat Intelligence Index 2023”, IBM, accessed April 17, 2023.
Looking at the numbers and statistics raises a simple yet loaded question: if we are not seeing marked improvement over time, are we, as a cybersecurity industry, doing something wrong? And if we are, how can we course-correct and start moving in the right direction?

Security: More Science, Less Art


Answering the question “Are we doing something wrong?” is not as straightforward as one would wish. One of the main reasons is that the global security industry still lacks standardization: a shared, common framework for evidence-based planning and for the implementation of security controls. The industry is therefore challenged when it is called on to evaluate controls, produce a common set of results, and rework or fine-tune controls in a way that’s proven across a large enough data set.
Security must be much more of a science than it is an art. If security teams do not work to standardize the way they operate and to measure what does and does not work, we will keep using the same mindset and actions to produce the same results. One way this impacts the overall security of any organization is that many teams are so unaware of control effectiveness that they do not realize when controls have failed and are creating exposures the business is not mitigating.

Silent Control Failures at the Root of Cyberattacks


The reality is that every type of security control employed on organizational infrastructure can, and does, fail either partially or completely. In a study that tested endpoint detection and response (EDR) controls against seven real-world, impactful attacker techniques, researchers discovered that the controls detected the malicious actions only 39% of the time.8 The remaining 61% left a vast playing field for attackers to continue driving toward their objectives across organizational networks. Unfortunately, organizations are often unaware of their control gaps for lengthy periods and might find out about them only in the throes of an actual cyberattack. In other cases, they have not implemented the needed controls at all, failing to address the realistic risk profile of their business activity.

7 Tara Siegel Bernard, Tiffany Hsu, Nicole Perlroth, and Ron Lieber, “Equifax Says Cyberattack May Have Affected 143 Million in the U.S.”, New York Times, September 7, 2017.
8 “Ending the Era of Security Control Failure”, AttackIQ, accessed April 17, 2023.
Let’s examine one example that shows how adding a security control does not automatically translate into adding actual protection. If a control is present but its effectiveness is not proven or measured, it can generate a false sense of security while failing to protect the network. Failed controls eventually lead to a detrimental compromise.

Consider three primary types of security controls that any organization or security professional has to rely on:

• Preventive controls
• Detective controls
• Corrective controls

Each item on this list covers a class of mitigation measures we typically employ to help reduce the risk of a breach of digitized environments and data. Controls can be physical, technical, or administrative (see Table 1-1).

Table 1-1. Primary security control types

Physical controls
  Preventive: door locks, fences, barbwire, access cards
  Detective: surveillance cameras, alarms, motion detectors, biometric scanners
  Corrective: revoke access cards, revoke access, use better locks, revoke biometric IDs

Technical controls
  Preventive: firewalls, intrusion prevention systems, DNS-based enforcement
  Detective: intrusion detection systems, endpoint detection tools
  Corrective: adjust firewall rules, adjust DNS rules, patch or vulnerability management

Administrative controls
  Preventive: password management policies, least privilege principles, termination policies for departing employees
  Detective: review access rights, audit privileged accounts, audit password lifecycles
  Corrective: revoke and limit admin accounts on the network, force new password creation with new requirements



Each control category provides an entry point to a plethora of solutions and tools. These are designed to help organizations strengthen their security posture and overall maturity in terms of their ability to protect networks, detect adverse activity, and recover from security incidents.
For example, patch management is a technical corrective control and part of fundamental cyber hygiene. It provides a protection layer in a world where the list of software vulnerabilities only keeps growing. To that effect, the vulnerability management market is projected to reach USD $21.38 billion by 2028,9 driven by a galloping rate of new vulnerability disclosures, whose exploitation often results in costly breaches and attacks. As a measure, the National Vulnerability Database (NVD) holds 8,051 vulnerabilities published in Q1 of 2022 alone, about a 25% increase over the same period in 2021.10

A failure of controls in patch management could mean putting off the patching or skipping it altogether due to an inadequate risk assessment of a vulnerability’s relevance to one’s infrastructure, or due to misclassification of the asset in terms of sensitivity or criticality. A more subtle failure can result from relying on a vendor’s or tool’s remediation prioritization paradigm. Different vulnerability management vendors and tools vary in how their automated scans and systems grade the priority to remediate. What can happen is that some of the more impactful vulnerabilities, those much more likely to be exploited within the organization’s specific infrastructure, are deprioritized in favor of those with higher general risk scores.11
This can happen when the Temporal and Environmental Common Vulnerability Scoring System (CVSS) metric groups are not calculated,12 and when the wider risk context that’s specific to one’s actual digital environment is missed. The better way to use this control is within the context of a corresponding risk assessment that takes into consideration the asset’s sensitivity/criticality, its exposures, the threat actors likely to target it, and the exploitability status of the applicable vulnerabilities.13 With new vulnerabilities and threat actor techniques evolving constantly, the process of vulnerability management is cyclical, so this risk assessment process must also be cyclical to keep on top of priority patching.

9 “Security and Vulnerability Management Market Projected to Reach USD 21.38 Billion by 2028—Exclusive Report by Brandessence Market Research”, Cision PR Newswire, July 7, 2022.
10 The NVD is the US government repository of standards-based vulnerability management data.
11 The Common Vulnerability Scoring System (CVSS) is a method used to supply a qualitative measure of severity. See the National Vulnerability Database.
12 See the NVD Common Vulnerability Scoring System Calculator.

We Are Compliant, So Why Are We Not Secure?


Efforts to standardize security across sectorial and organizational risk profiles have led to the rise and enforcement of mandatory minimal security requirements. In some cases, regulatory bodies have developed and required compliance in the shape of laws to abide by. One such example is the General Data Protection Regulation (GDPR), an EU law for data protection and privacy. In the US, the Health Insurance Portability and Accountability Act (HIPAA) for health-care data is a federal law. Other compliance schemes can be voluntary, like the Payment Card Industry Data Security Standard (PCI-DSS) for protecting payment card data and the Systems and Organization Controls 2 (SOC 2).14 The latter is a voluntary information security compliance standard providing guidelines on how to securely manage customer data regardless of industry or sector. While compliance schemes differ in many aspects, most notably in the legal obligation to remain compliant, noncompliance carries far-reaching business and legal consequences.
Companies that do not abide by regulatory requirements can be
audited, fined, investigated, and sued, and they can face dire long-
term repercussions to their business. Those who do not adopt and
prove compliance with voluntary standards stand to lose business
revenue as most commercial contracts mitigate risk by demanding
proof of compliance with relevant schemes according to the sector,
region, and types of data involved in the business activity.

13 See the OWASP Vulnerability Management Guide.


14 Information on the PCI Security Standards Council can be found at https://oreil.ly/iP191.



So does compliance guarantee security? Unfortunately, it does not. It checks off some minimal controls that cover people, processes, and technology. Though compliance and regulation go a long way in helping organizations establish a basic security layer, this is not the same as an actual security program. Recall that control presence does not mean the control is effective: compliance schemes often require checking off the presence of a control without further investigating its effectiveness or whether it is working at all. Let’s look at a very common example of how compliance does not translate into better security.

Compliant Passwords, Ineffective Preventive Control


From requiring a minimum password length to specific types of characters, security guidelines have been trying for decades to render passwords more effective against guessing and cracking. Alas, these guidelines have not had much success, as passwords remain an ineffective preventive control. The main reason for this is that humans continue to pick bad passwords,15 but another reason is a lack of testing of how passwords perform. For example, before 2020, the US National Institute of Standards and Technology (NIST) issued password guidelines that recommended changing passwords frequently while also ensuring passwords are longer to make them harder to guess or crack. But while the policy was adhered to, the control’s effectiveness was often left unexamined; meanwhile, its actual effectiveness was extremely low. As it turns out, to comply with the requirement, people opted to choose the minimum password length possible so that they could remember their frequently changed passwords. They also practiced riskier behaviors, such as writing passwords down in proximity to their workstations, and most admitted to reusing those passwords across different accounts. A Bitwarden report notes that it took hackers less than one second to guess notoriously weak passwords.16 After examining the inefficacy of the original guidelines, NIST changed its password security guidelines and no longer recommended prescheduled password changes. Instead, organizations were advised to screen passwords against a list of passwords that are known to be compromised and to change any that appear on it.17

15 Liam Tung, “We’re Still Making Terrible Choices with Passwords, Even Though We Know Better”, ZDNet, September 24, 2021.
16 Natasha Piñon, “Hackers Guessed the World’s Most Common Password in Under 1 Second—Make Sure Yours Isn’t on the List”, CNBC, November 23, 2022.
Can one now assume password security? Unfortunately, complying with this updated best practice is better, but it is still not a way to fully meet real-world password security needs. A more effective way to mitigate password risk is to view it within a wider access control context. For example, advise the use of password managers to create strong, unique passwords for each platform and website used. Roll out multifactor authentication to limit the usefulness of stolen passwords. Once that is done, password effectiveness can be retested, and the results can be documented to justify the cost of the controls and their ability to mitigate risk.
We can learn from the password example that mere compliance with official guidelines does not guarantee security. Adhering to standards and complying with regulations are extremely important steps in a global effort to standardize security, but these efforts set a minimum baseline. Such one-size-fits-all requirements are not, on their own, a way to build security programs for specific security architectures, use cases, and implementations. To reach a better security posture and to mature organizational capabilities in this regard, one must continually run a risk assessment and mitigation loop that includes measurable control test results.

The Need to Justify Security Spend


The last five years have seen double-digit year-over-year growth in
cybersecurity spending across the globe. Gartner notes that cyberse‐
curity spending is on pace to surpass USD $260 billion by 2026.18
But while budgets keep growing, organizations don’t have much to
show for it in terms of reducing the rate of successful attacks. It
stands to reason that executives and boards have started to question
costs allocated to security and to ask to see proof of the security
program’s effectiveness. CEOs and CFOs are asking chief information security officers (CISOs) and chief information officers (CIOs) to justify the rising budgets associated with buying and implementing security controls, managed security services, and security training, to name a few. The obvious result is that budgets are going to be cut back, and at that point, only the controls that show the best return on investment and proven performance will be prioritized. Another very problematic result is that CISOs and senior security management themselves are seen as delivering suboptimal results, leading to blame and premature attrition that further increases business risk.

17 See Have I Been Pwned.
18 Matt Kapko, “Cybersecurity Spending on Pace to Surpass $260B by 2026”, Cybersecurity Dive, October 18, 2022.
To start moving in the right direction, security must enable decision makers to prioritize investments by relying on repeatable testing, data insights, and measurable results. While an official framework has not yet been standardized for evidence-based security, the concept itself is not new. Metrics, testing, and fine-tuning are methods already used in other industries such as software engineering, as well as the automotive industry, the pharmaceutical sector, and even performance sports, to name a few.

As a case in point for cybersecurity, approaching standardization as much as possible would mean applying a threat-informed approach, relying on a common language of attacker TTPs, and creating metrics specific to security controls. Using common metrics can help test the controls, analyze results, and tune and measure improvement over time. As needed, security teams can recommend modifying controls, changing them, replacing them with architectural modifications, or removing them entirely if they continue to prove ineffective. This evolution, in turn, will directly affect the ability to justify spending on security controls, training, and new projects, and it will also demonstrate the success of security leadership in effectively reducing risks to the business.



CHAPTER 2
The Evidence-Based
Security Framework

Embarking on a journey to change the approach to measuring success, information security teams can begin using an evidence-based framework. To prove its benefit, one can start by testing some of the critical controls within the security architecture and measuring how effectively they work against the most relevant (high-risk) adversary TTPs.1

1 For information on CIS Critical Security Controls, see https://oreil.ly/yVhB6.

Speaking a Common Language


While the most relevant adversarial TTPs can be different for each
organization, as one chooses the attack types to focus on, a good way
to explore TTPs is by working with the ATT&CK knowledge base,
which is freely accessible and community supported. This knowl‐
edge base is also constantly updated by information security profes‐
sionals as they discover new threats, new TTPs, and emerging attack
vectors. What keeps the ATT&CK TTP catalog very current is that
the TTP discovery process is facilitated by MITRE’s Collaborative
Research into Threats (or CRITS), an open source tool that helps
researchers and security professionals collect and archive attack arti‐
facts and submit their findings for the use of others in the commu‐
nity. To help defenders find what they need in the ATT&CK knowledge base, attacker TTPs are categorized by attacker objectives, making it easier to look for relevant TTPs. Each is also numbered so that defenders can refer to the same item in their proactive plans and response activities. There are 14 areas (columns) for tactics, and under each tactic are a variety of techniques and subtechniques defenders can explore. At the technique level, defenders can also find a detect/mitigate section that can help cut through to effective control measures specific to each attacker technique.

The Six Steps of the Evidence-Based Security Framework
Once familiar with a common language to use across all testing,
security teams can rely on it as they work with an evidence-based
security framework to structure testing, analyze results, and repeat
tests to measure improvement and success. In the framework below
(Table 2-1), there are six primary steps to follow. Here, the threat
TTPs to be measured against are mapped, then metrics can be
developed, controls tested and retested, and results analyzed to pro‐
duce insights to help enhance security over time.

Table 2-1. The six steps of the evidence-based security framework

1. Map threats to TTPs: Identify which threats pose the biggest risks to your organization. Then use the ATT&CK knowledge base to map the TTPs associated with those threats, whether they be threat actors, threat vectors, attack types, etc.

2. Devise metrics: Develop metrics:
• To quantify the impact each threat could have on your organization
• To quantify the efficacy of your controls in combatting the threat
• To quantify the efficacy of your incident-response process with regard to the threat
• To create a roadmap for improvement over time

3. Simulate: Develop a way to simulate the threat so the metrics created above can be assessed and recorded before the organization is exposed to a real cyberattack. Simulate the threat’s effect on your infrastructure and networks. This allows you to collect the metrics you developed to get real-world insight into the damage a particular threat could do and to evaluate the efficacy of the applicable controls. To test complex attack scenarios, a breach and attack simulation (BAS) tool can be considered.

4. Analyze: Analyze the data to identify any deficiencies and possible changes that could be implemented to improve security.

5. Remediate: Remediate the deficiencies and put security control improvements into place.

6. Repeat: Repeat the testing periodically to demonstrate that the changes quantifiably improve control effectiveness and overall security posture.

While this is not an official framework yet, it is a straightforward way to create a feedback loop to test control effectiveness. In the following sections, we will use examples to go through each step in the framework and demonstrate a hands-on approach to using it.

How to Map Threats to TTPs


Before we jump into using the evidence-based security framework, let’s view an example of how to map threats to specific tactics, then drill down to different techniques, subtechniques, and procedures, as well as mitigations. We will do this as an initial step to focus on the adversarial tactics to test against. The example selected is one used by attackers in most incidents: privilege escalation. As a first step, we will locate this tactic (TA0004) in the ATT&CK knowledge base and drill down to various techniques associated with privilege escalation.2 In this example, we will assume the attacker chose Domain Policy modification to gain more control over the Active Directory environment and to carry out malicious activities by lowering the organization’s security settings.

Looking at the corresponding ATT&CK technique T1484 (Domain Policy modification) and subtechnique T1484.001 (Group Policy modification), one would have to consider that, to make modifications to Active Directory, attackers must have already obtained elevated privileges. With domain admin access, they can spread malware, for example, and disable or modify enterprise security settings. To test their controls against this type of malicious activity, defenders will have to look at the relevant controls they have in place or wish to enforce or obtain. They can use ATT&CK’s suggestions for detection and mitigation, as shown in Table 2-2.

2 View the catalog of enterprise tactics. For information on tactic TA0004, privilege escalation, see https://oreil.ly/4QDRj.



Table 2-2. MITRE ATT&CK, technique T1484 (source: https://oreil.ly/XAyoN)

Mitigation (example): M1026, Privileged account management. Use least privilege and protect administrative access to the domain controller and Active Directory.

Detection (example): DS0026, Active Directory object creation. Monitor for newly constructed Active Directory objects, such as Windows EID 5137.

Detection (example): DS0026, Active Directory object deletion. Monitor for unexpected deletion of an Active Directory object, such as Windows EID 5141.

Detection (example): DS0026, Active Directory object modification. Monitor for changes made to Active Directory settings for unexpected modifications to user accounts, such as deletions or potentially malicious changes to user attributes (credentials, status, etc.).

Group Policy Objects (GPOs) help security teams govern security set‐
tings and manage the user population’s permissions in a centralized
and standardized way. As such, administrative control over group
policies enforces them across the entire domain, which directly
impacts organizational security levels. Using least privilege here is
foundational—providing this type of access to as few people as possi‐
ble. Admin accounts should be configured for security and isolation,
then they should be monitored and their activity logged (including
enabling PowerShell auditing via Registry or GPO). All logs can be
consumed by a security information and event management (SIEM)
solution, and alerts can be created for suspicious events.
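As a rough illustration of the detections in Table 2-2, the Python sketch below triages exported Windows Security events for directory-object changes. The JSON-lines export format and field names are assumptions; map them to whatever your SIEM or collector actually emits.

import json

# Windows Security log event IDs for directory service object changes
# (5136 = modified, 5137 = created, 5141 = deleted).
AD_OBJECT_EVENTS = {5136: "modified", 5137: "created", 5141: "deleted"}

def triage(path: str) -> None:
    """Flag Active Directory object-change events in an exported log.
    Assumes one JSON object per line with EventID, SubjectUserName, and
    ObjectDN fields (hypothetical names; adapt them to your collector)."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            event = json.loads(line)
            action = AD_OBJECT_EVENTS.get(event.get("EventID"))
            if action:
                print(f"AD object {action}: {event.get('ObjectDN')} "
                      f"by {event.get('SubjectUserName')}")

triage("security_events.jsonl")  # path to your own export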
The instructions presented for each identified technique are often spe‐
cific enough to enable security professionals to improve control
efficacy.
This initial step of mapping threats to known TTPs is the first and pre‐
paratory step in working with the evidence-based security framework.
Next, we will go through the entire framework with another example.

Using the Evidence-Based Security Framework to Measure Control Efficiency
In this section we review the six steps of the evidence-based security
framework using the real-world example of bypassing egress filter‐
ing controls.



Almost every attacker will attempt to exfiltrate data from a compro‐
mised organization once they have breached it and can roam
through the network. While attackers can try different ways to move
data out of the compromised organization, one common method is
a command-and-control (C2) architecture. To do that, and achieve
additional control on compromised devices, attackers would be
looking for ways to ensure that the compromised network resource
is allowed to communicate with their malicious C2 server. One tech‐
nique threat actors use to that effect is bypassing egress filtering con‐
trols (egress is the traffic leaving the network, including emails, files,
uploads to clouds, etc.), which is the example we will work with
using the evidence-based security framework.
In this example, attackers aim to have their C2 traffic accepted as
legitimate by circumventing security controls like intrusion detec‐
tion or firewalls that feature egress filtering/deep packet inspection
capabilities. Going into the evidence-based security framework, we
will work with this example through all six steps. Our first step is to
map egress filtering bypass TTPs in terms of tactics, techniques, and
applicable procedures. The example is designed to be simplistic to
allow anyone to walk through the evidence-based process with read‐
ily available security tools.
One important note here is that this is an example of just one step in
the overall attack. We will get to more complex attacks later.

Step 1: Map threats to TTPs


We begin by mapping the egress filtering bypass objective to the
common language of the ATT&CK knowledge base. We will start by
picking the command-and-control tactic (TA0011) from the tactics
list,3 and then we drill down to the techniques, locating the dynamic
resolution technique (T1568).4 Next (Table 2-3), we will select sub‐
technique T1568.003—DNS calculation.5

3 For information on command-and-control ATT&CK tactic TA0011, see https://oreil.ly/p8yUD.
4 For information on the dynamic resolution ATT&CK technique, see https://oreil.ly/JNAr9.
5 For information on the DNS calculation ATT&CK subtechnique, see https://oreil.ly/9MN4X.



Table 2-3. MITRE ATT&CK, tactic TA0011

Command and control (tactic TA0011): The adversary is trying to communicate with compromised systems to control them.

Dynamic resolution (technique T1568): Adversaries may dynamically establish connections to command-and-control infrastructure to evade common detections and remediations.

DNS calculation (subtechnique T1568.003): Adversaries may perform calculations on addresses returned in DNS results to determine which port and IP address to use for command and control rather than relying on a predetermined port number or the actual returned IP address. An IP and/or port number calculation can be used to bypass egress filtering on a C2 channel.

The threat here is that attackers will succeed in communicating with devices on the compromised network and can exfiltrate data for extended periods of time before being discovered. This can be achieved by making an “educated guess” as to which port would be allowed to send outbound traffic on a network where egress filtering is enforced. If they get it right, attackers can instruct their malware to connect to the C2 through that permissive port without adding noise to network logs. If other ports are also open and allow outbound traffic, and the security team is unaware of those, they could be allowing malicious servers to easily connect with the network and data to leave it. It is common to read about breaches where the exfiltration went on for months, or even years,6 before it was detected.

The takeaway from step 1 is that we want to ensure the security team is aware of all open ports and that, where a port does allow outbound traffic, it is monitored for traffic patterns and protected by packet inspection controls at a minimum.

Step 2: Devise metrics

The second step in the framework is the addition of metrics that help measure progress against the goal. We want to limit the ability of attackers to connect via rogue C2 servers by bypassing egress filtering. Metrics one could consider in this case are the number of policy violations (for example, denied firewall traffic events), the number of protocol violations, or the number of NXDOMAIN (nonexistent domain) messages. Another simple metric can be the number of open ports discovered via automated scans performed by perimeter devices.

6 Jai Vijayan, “Attackers Were on Network for 2 Years, News Corp Says”, Dark Reading, February 27, 2023.
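A minimal Python sketch of how these metrics might be tallied from an exported perimeter log follows; it assumes one JSON object per line with hypothetical action, direction, and dns_rcode fields, so map the names to your actual firewall/DNS log schema.

import collections
import json

def egress_metrics(path: str) -> dict:
    """Tally simple egress-related metrics from an exported perimeter log."""
    counters = collections.Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            rec = json.loads(line)
            # Denied outbound traffic: a direct egress policy-violation count.
            if rec.get("direction") == "outbound" and rec.get("action") == "deny":
                counters["denied_egress_events"] += 1
            # NXDOMAIN spikes can indicate malware hunting for C2 domains.
            if rec.get("dns_rcode") == "NXDOMAIN":
                counters["nxdomain_responses"] += 1
    return dict(counters)

print(egress_metrics("perimeter_logs.jsonl"))  # path to your own export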

Step 3: Simulate
The third step entails simulating an attack to measure how the
selected controls detect the TTPs we are testing against. For this
example, one way to check whether egress filtering is adequate is to
test for ports that allow outbound traffic. An easy way to do that is
to use Nmap, a free network scanner. This quick test does not
replace fully auditing firewall rules, but it can be a fast way to know
if the filtering is not effective.
We would begin by initiating an Nmap port scan on all TCP ports
(65,535 in total) from an internal host to an internet host.7 One could
test UDP ports as well, but the TCP scan is where we begin. We can
run Nmap against allports.exposed, which is a free service that will
respond on all 65,000+ ports, and thus any port that generates a
response to the scan is one that remains unfiltered by the firewall.
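One plausible Nmap invocation for this test is nmap -p- -Pn allports.exposed, which scans all TCP ports and skips host discovery. The footnote notes that PowerShell can do the same; as a rough scripted alternative, a plain socket connect test in Python works too. This is a sketch for spot checks, not a full scanner.

import socket

def open_egress_ports(host: str, ports: range, timeout: float = 0.5) -> list[int]:
    """Attempt outbound TCP connections to a host that answers on every
    port; each successful connect marks a port the egress filter left
    open. Sequential scanning is slow, so sample ranges or use a thread
    pool (or Nmap itself) for full 65,535-port runs."""
    unfiltered = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                unfiltered.append(port)
        except OSError:
            pass  # filtered, refused, or timed out
    return unfiltered

# Spot-check the low port range before committing to a full scan.
print(open_egress_ports("allports.exposed", range(1, 1025)))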

Step 4: Analyze
Using the results of recurring scans, one can enumerate and identify
overlooked open ports, then take action to close them, initiate strin‐
gent monitoring, and/or protect them with additional controls, fil‐
tering rules/policies, etc.

Step 5: Remediate
This step is about remediation and reducing risk. Here we would take
concrete actions to ensure ports that allow egress traffic are moni‐
tored, and those not intended to allow outbound traffic are closed or
otherwise protected. If the firewall is blocking traffic from/to impor‐
tant ports, that issue can also be remediated after the scan.
For organizations that use next-gen firewalls, it would be easy to enforce the application types and protocols allowed to communicate on specific ports, ensuring, for example, that DNS traffic on port 53 features the right type of traffic (query-response only) and that communication flows on port 443 begin the handshake process with an exchange of a TLS certificate.

7 Nmap is one way to scan for open ports. One can also opt to use PowerShell to do the same.
If this were a more complex situation necessitating investment in new controls, addressing the risk could be backed by the results obtained from testing, and a roadmap would be drawn up and approved to close the identified gaps.

Step 6: Repeat
After remediation actions are taken, attack simulation can be
repeated to generate another results report and compare it to the
previous one. In our example, testing and retesting will give a better
indication as to the efficacy of egress filtering than just having it in
place without cyclical testing. Keep in mind that in most cases, com‐
panies do have perimeter controls (application-level controls) in
place that can detect open ports and include that information in on-
demand reports.
Big picture: using the evidence-based security framework to increase control effectiveness can produce more certainty as to the actual security posture of the organization. When ports are regularly scanned and properly monitored, with metrics covering violation events and patterns, risks are either known or mitigated, and the overall risk to the organization is lower.
Working with this approach can incrementally lower risk over time, but examining a small number of TTPs at a time is quite granular. What happens when one must test the elaborate attack flows that adversaries often deploy? The answer is automation by breach and attack simulation (BAS) platforms and tools, a term coined by Gartner.8

8 “Breach and Attack Simulation (BAS) Tools Reviews and Ratings”, Gartner, accessed
April 17, 2023.



CHAPTER 3
Breach and Attack Simulation

In the previous chapter, we covered examples where control effectiveness was tested against very specific threat actor TTPs, allowing defenders to diagnose poorly performing controls and to keep improving results over time. But attackers never use only one technique during an attack’s lifecycle. Rather, they strategize the breach, use certain techniques to gain the initial foothold, and then use others to escalate privileges and pivot, working with different tools and living-off-the-land techniques to achieve their objectives. Thankfully, defenders don’t have to test each attack variety and kill chain one tactic or one technique at a time. To simulate more complex attacks in step 3 of the evidence-based security framework, security teams can opt to use automated breach and attack simulation (BAS) platforms and tools. Using BAS helps test how specific controls react and can also test the environment’s resilience to a given threat vector.
With BAS, the output from each simulation is detailed in custom‐
ized reports that help quantify the extent to which controls detect a
wide range of TTPs. The results help focus and prioritize remedia‐
tion and ultimately serve to close security gaps. Security teams who
regularly test their ability to detect and remediate incidents can also
use BAS platforms to test attack flows uncovered by external or
internal threat intelligence. They can then automate testing and
reporting on what those attack patterns look like in their own enter‐
prise infrastructure by observing the response from security con‐
trols. This in turn makes outputs highly relevant and repeatable, and
it provides clarity as to risk mitigation of the most relevant threats
and improvement over time. Simulation results can be shared across other parts of the organization to accelerate boosting defenses and reducing risk.

The Advantages of BAS


BAS tools allow security professionals to enjoy a number of
advantages.

Continuous Validation
One of the biggest advantages of using BAS tools is the ability to run
tests repeatedly for continuous validation. This capability allows
security teams to harden their environments and extend their ability
to manage security across complex infrastructures continually and
efficiently. Continuous validation also means access to data-backed
insight into the true state of the organization’s security program and
measurable performance of security controls. Looking at the bigger
picture of the security program from the viewpoint of the executive
suite, on-demand validation ties back to the circular nature of
evidence-based security. By providing data on the state of controls
and their ability to detect and prevent attacks, security leaders can
prove the efficacy of their risk mitigation strategy and justify present
and future security investments.

Saving Costs with BAS


Organizations that work to mature their security posture often scope
penetration tests to look for gaps in their defenses before an attacker
finds them. Those with fair-sized security teams and budgets may
even employ Red Team members internally.1 Red teamers are offensive security specialists who deploy attack tactics to test how defenders and controls would react to malicious activity. But whether red teamers are internal or outsourced, the expense of conducting these tests can add up or even be entirely outside the budget for most organizations. This is where BAS platforms can help automate the activity, repeatedly and methodically, at a fraction of the cost.

1 For the NIST definition of Red Team, see https://oreil.ly/JrK0c.
Here is a summary of the main advantages to using an automated
system to conduct attack simulations:



• Like any security activity, automation saves time and money and
ultimately boosts security.
• It is safer, in general, to use an automated tool to simulate an
attack than it is to manually attack live systems and production
environments.
• Automated systems can create consistent reporting that’s easier
to quantify and analyze in repeated testing. Those reports can
be customized to the organization’s needs and allow security
staff to use a common template to work on outputs in unison.
• Experienced security professionals can pivot to activities that
cannot be automated and that require their expertise instead of
conducting repeated tests and documenting them. In the long
run, this leads to financial savings for the organization but also
better job satisfaction and less burnout for security employees.

The Three Stages of BAS


BAS mimics the behavior and pathways of attackers on the network
and attempts to get through controls in an automated flow of events.
The three stages of using BAS to conduct a simulation are (see
Figure 3-1):

• Choosing attacker behavior/attack flows to test, preferably using ATT&CK to base choices on a repeatable, up-to-date common language
• Executing the attack flow on the BAS platform
• Observing the response and analyzing the output

Figure 3-1. Breach and attack simulation (BAS)—three main steps



Within the overall scheme of the evidence-based security framework, BAS tools come in at step 3. They provide the automated and repeatable aspects of a scientific approach to testing controls. Remember, steps 4–6 entail analyzing test results to come away with a roadmap for remediation, retesting, and improvement. Covering the six steps of the evidence-based security framework in Chapter 2, we looked at a rather simplistic scenario of checking for inadvertently open ports. Let’s look at a more complex and realistic example from an actual test that took place on a hospital network. This test resulted in meaningful changes to the security architecture, moving to a zero-trust model, to address actual risk levels that exceeded the established risk appetite.

Cutting Back on Risk with Zero Trust: A Case Study
In this case study, a large hospital planned to conduct a test to mea‐
sure control efficacy in case of a widespread malware outbreak that
could paralyze its operations. The team chose a benign virus to use
across its environment and simulate how and where it would spread
and at what speed. The assumptions going into the simulation were
that the hospital had adequate controls in place and that its network
was highly segmented to prevent malware from spreading too far
into the overall infrastructure. Strengthening this assumption, the hospital already knew that its security program far exceeded regulatory compliance requirements.
in an evidence-based manner, which controls worked as expected,
which were not performing, and what could be done to remediate
potential issues the security team might discover.
The benign virus was installed and started to run, and it quickly became clear that segmentation was the most effective control in place to limit the malware’s spread. More meaningfully, the test revealed that the segmentation worked at the departmental level, but the risk of losing an entire department to a ransomware attack, for example, exceeded the hospital’s risk appetite. The evidence gained from the simulation enabled the security team to rethink its security architecture. The team came to the conclusion that the best way to reduce the risk to acceptable levels would be to implement a zero-trust architecture rather than the network segmentation it had in place. With zero trust, the concept of segmentation becomes a lot more granular (aka microsegmentation). The key element is always authenticating and authorizing access to data and assets based on contextual factors for each access request and enforcing least privilege so that access is limited to what’s needed at the time.
With the results of the test run at the hospital, its security leadership started on a zero-trust strategy by inventorying all assets, classifying them, mapping their communications and traffic flows, and then microsegmenting each asset according to a specific risk assessment. Data and resources were only accessible when expressly needed, and least privilege principles, like other security policies, became easier to apply. Zero trust complements the defense-in-depth strategy that should still be applied to layer controls across the security architecture.

Testing for Risk: An Example


Let’s consider another example, this time using testing as a way to assess risk and address it in an evidence-based, measurable way. A public-facing server carries an existing, unpatched vulnerability. This server further lacks proper segmentation in the demilitarized zone (DMZ) and isn’t protected by any other compensating controls. What would be the bigger risk in such a case? Would it be the missing patch, or would the bigger issue be the lack of segmentation or compensating controls? While patching is a best practice and should take place as soon as patches are available, the constant emergence of new zero days, and of variations on older ones, is impossible to patch against. In 2022, a record 26,448 software security flaws were reported, with the number of critical vulnerabilities up 59% over 2021.2 The more effective control is to plan for containment of a potential compromise by segmenting and applying additional/compensating controls. Some examples are egress filtering on the firewall, operating system hardening, privileged access management, effective monitoring and logging, limiting user access on that server, and using enhanced access controls such as multifactor authentication (MFA).

2 “We Analysed 90,000+ Software Vulnerabilities: Here’s What We Learned”, The Stack,
January 9, 2023.



The goal is to harden access around assets and test the security in a
measurable way rather than to rely on the sheer usage of tools to
assume protection. Unfortunately, in the aftermath of breaches, it
often is revealed that while organizations did run endpoint security
solutions, or other security controls on their networked devices,
those controls failed to prevent compromise. It is worth noting that
attackers do this kind of evidence-based testing when they target
networks and systems. They almost constantly test targets to figure
out how to bypass security tools and controls, and as a result they
more effectively know the limits of security tooling than defenders
do. Evidence-based approaches allow defenders to level the playing
field and to discover and compensate for limits they are probably
blind to today.
Following the principles of defense in depth, we already know that
no tool is infallible on its own, and we must compensate for the pos‐
sibility of partial or complete failure. It is by favoring evidence-based
approaches that we can diagnose the problem with certainty and
work toward effective security in the short and longer terms.



CHAPTER 4
Evidence-Based Security and
Metrics That Matter

Famed management consultant Peter Drucker is often quoted as saying, “You can’t manage what you can’t measure,” and he is not wrong. Using metrics helps organizations measure performance, define new goals, and continually improve. Metrics are everywhere around us, and they are just as essential to the information security program as they are to other parts of the organization. Within the security department, metrics are the evidence that allows teams to connect with overall business goals, assessing how well the team reduces security and business risk.

Vanity Metrics Versus Actionable Metrics


Most metrics can provide some value, and having a good selection
of metrics is very helpful. That said, security teams have a tendency
to focus on metrics that hold little value in terms of actionable con‐
text. Those are known as vanity metrics. Vanity metrics are the sort
of data or statistics that would sound interesting or valuable but in
reality do not provide any context or tangible information to help
understand, repeat, or improve performance. Consequently, vanity
metrics don’t translate into meaningful business results. A good
example of a vanity metric would be the number of people who accessed a website. While a large number of visitors can look impressive on its own, it does not provide context around the engagement of each visitor, including how they got to the site (was it junk traffic?) and why a visitor would visit today but never return. In the security department, the same goes for metrics like the number of alerts logged or the number of spam emails caught in the filter. These metrics are easy to generate with security tools, but they don’t provide contextual value to act upon, and that lack of value can translate into a security blind spot.
Let’s look at one way common vanity metrics can damage organizational
security, using the example of alerts generated by a security
information and event management (SIEM) solution. Vanity metrics from
the SIEM would involve looking at the total number of alerts, then
creating a funnel view to show that most were linked to one type of
violation. A third metric would show how many issues were left open at
the end of each day or week, which could make it appear that most
problems were solved. While these metrics show that the team is working
on alerts, they give no risk context. How many alerts ended up being
tied to an incident? What was the severity of the issues the team
found? How quickly and how well did the team remediate them, and how do
the results affect overall risk levels? Vanity metrics may provide some
limited value, but they lack the granularity that would make them
actionable. Choosing better metrics helps translate measurement into
action and, in turn, into better results for the security program.
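
To make the contrast concrete, here is a minimal sketch in Python. The
alert records and field names are hypothetical, invented for
illustration rather than drawn from any particular SIEM, but they show
how the same alert stream yields both a vanity metric and actionable
ones:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    severity: str           # "low", "medium", or "high"
    tied_to_incident: bool  # did triage confirm a real incident?
    minutes_to_resolve: int

# Hypothetical day of SIEM alerts.
alerts = [
    Alert("low", False, 5),
    Alert("high", True, 45),
    Alert("medium", False, 10),
    Alert("high", True, 30),
]

# Vanity metric: total alert count -- impressive-sounding, not actionable.
print(f"Alerts logged: {len(alerts)}")

# Actionable metrics: how many alerts mattered, and how well we handled them.
incidents = [a for a in alerts if a.tied_to_incident]
print(f"Alerts tied to confirmed incidents: {len(incidents)}")
print(f"High-severity share: {sum(a.severity == 'high' for a in alerts) / len(alerts):.0%}")
if incidents:
    mean_res = sum(a.minutes_to_resolve for a in incidents) / len(incidents)
    print(f"Mean minutes to resolve confirmed incidents: {mean_res:.0f}")
```

The vanity count alone tells a team nothing; the incident-tied counts
and resolution times point directly at what to investigate and improve.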

KPI and KRI Metrics


Key performance indicators (KPIs) and key risk indicators (KRIs) are
staples of the cybersecurity industry. KPIs are typically the driving
metrics that link to the overall strategy, helping security teams and
management measure their success in achieving strategic goals, while
KRIs help keep tabs on rising risk levels. But while both are commonly
used across risk assessments, plans, and operations, even very popular
KPIs don’t always provide context that can lead to specific action. For
example, many security teams and security operations centers (SOCs)
use the KPI mean time to detect (MTTD) to document how long it takes
them to find out about an incident or intrusion. But while MTTD can
tell the security team that it is very slow to detect issues and that
it is not improving over time, it cannot tell the team what it is doing
wrong or how to effectively course correct.
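
As a rough illustration of why that is, MTTD is just an average of
detection lag. The sketch below computes it from invented incident
timestamps; note that nothing in the output carries any diagnostic
detail:

```python
from datetime import datetime, timedelta

# Hypothetical incidents: (estimated intrusion start, time the SOC detected it).
incidents = [
    (datetime(2023, 1, 3, 9, 0),   datetime(2023, 1, 5, 14, 0)),
    (datetime(2023, 2, 10, 22, 0), datetime(2023, 2, 11, 6, 30)),
    (datetime(2023, 3, 7, 11, 0),  datetime(2023, 3, 9, 8, 0)),
]

lags = [detected - started for started, detected in incidents]
mttd = sum(lags, timedelta(0)) / len(lags)
print(f"MTTD: {mttd}")  # an average; it says nothing about *why* detection is slow
```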



Another example of a KPI in the SOC context is the number of successful
intrusions per quarter. While this number can give the CISO and C-suite
a general indication of the organization’s resiliency level, it
provides no granular context about what was breached and how that
happened, including connecting the outcome with specific TTPs. If the
team knew that it suffers repeated breaches via automated exploitation
of a web server because that server runs a vulnerable Windows Server
2019 build, it could shut down the way attackers are breaching the web
server with a more appropriate control. That said, a large variety of
KPIs, including those noted above, do have their place in the overall
scheme of the security program, as they allow CISOs to see the bigger
picture of their security posture and point out areas that don’t
perform well.
Alongside KPIs, KRIs are metrics that act as bridges from business
strategy to the organization’s evolving risk factors. KRIs help
organizations estimate risk based on specific business activities and
are most often employed by a security operations manager or the
organization’s senior management. When a KRI signals rising risk, it
can threaten the achievement of a security or business KPI, but it,
too, is not always actionable. For example, suppose that as part of the
business strategy, the organization aims to minimize reputational and
regulatory impact from data exposure. The risk identified is a data
breach. A possible KRI in this case could be the number of failed admin
accesses to a customer database. Yes, there is more risk here, but
there is a “why” that needs to be discovered in order to arrive at the
right “how.” One example is failed admin access due to password sharing
with another admin. This would have to be resolved immediately by
stopping the password sharing, changing passwords, and fortifying admin
logins with multifactor authentication. Just knowing that risk is
rising is not sufficient for specific follow-up action.
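
A minimal sketch of such a KRI follows. The log entries, account names,
and threshold are all illustrative assumptions, not any product’s
schema; the point is that the KRI flags rising risk, while the “why”
(password sharing? brute force?) still requires investigation:

```python
from collections import Counter

# Hypothetical auth-log entries of (account, outcome); format is illustrative.
auth_log = [
    ("db_admin_1", "failure"), ("db_admin_1", "failure"),
    ("db_admin_2", "success"), ("db_admin_1", "failure"),
    ("db_admin_2", "failure"),
]

failed = Counter(account for account, outcome in auth_log if outcome == "failure")
THRESHOLD = 3  # illustrative risk-appetite threshold for this KRI

for account, count in failed.items():
    if count >= THRESHOLD:
        print(f"{account}: {count} failed admin logins -- KRI breached, find the why")
    else:
        print(f"{account}: {count} failed admin logins -- within tolerance")
```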
Measuring the efficacy of security programs and their ability to reduce
risk is becoming more and more complicated over time due to the sheer
volume of security event data. It’s easy to collect a variety of
metrics from security tools but harder to make sense of the data in a
way that drives meaningful progress. Security leaders and their teams
need actionable metrics to change that: metrics built on data they can
use, but that also speak the language of business leaders, who are
looking for results measured in time and money.



Focusing on Actionable Metrics with the
Evidence-Based Security Framework
Evidence-based security favors actionable metrics that provide context
about the “what” and the “how” around them. It uses quantifiable
measurements to understand how well controls reduce the risk of attacks
and can translate that understanding into monetary terms for
nonsecurity business leaders. Using the evidence-based security
framework can help connect more of the dots by measuring control
efficacy against the attacker TTPs relevant to the organization’s risk
profile (see Figure 4-1).

Figure 4-1. Focusing on actionable metrics using the evidence-based
security framework

For instance, if the risk is ransomware attacks in the industrial
sector, and the top attack flows include planting malware on vulnerable
devices to stage the attack, then defenders can map the relevant TTPs
to the controls they have in place (or identify those they lack) and
measure their ability to thwart infection attempts. Let’s assume there
are 10 different TTPs relevant to the most common attack flow for
infecting vulnerable devices with ransomware. The organization can
thwart 5 of them, thereby reducing the attack surface by X%, as well as
the risk of that attack materializing. This is obviously simplistic, as
each TTP can carry a different weight. For example, a remote code
execution is far riskier than other issues, and without addressing it,
the risk won’t drop considerably. The cost of controls can thus be
justified by a risk, probability, and impact calculation, and if the
organization’s risk appetite is exceeded, it can opt to invest in
additional or better controls to treat the residual risk or make other
data-informed decisions for the business.
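
A back-of-the-envelope version of that weighting might look like the
following sketch. The TTP names and weights are illustrative
assumptions; real weights should come from your own probability-and-
impact analysis:

```python
# Hypothetical TTPs in one ransomware attack flow, weighted by how much
# each contributes to overall risk (weights are invented and sum to 1.0).
ttp_weights = {
    "remote code execution":   0.30,
    "phishing initial access": 0.15,
    "credential dumping":      0.12,
    "lateral movement":        0.10,
    "persistence via service": 0.08,
    "five remaining TTPs":     0.25,  # aggregated for brevity
}
mitigated = {"phishing initial access", "lateral movement", "persistence via service"}

covered = sum(w for ttp, w in ttp_weights.items() if ttp in mitigated)
residual = sum(w for ttp, w in ttp_weights.items() if ttp not in mitigated)
print(f"Weighted risk covered: {covered:.0%}")
print(f"Residual risk weight:  {residual:.0%}")
# Leaving remote code execution unaddressed keeps residual risk high --
# a fact an unweighted "TTPs thwarted" count would hide.
```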
If we revisit the MTTD example, using evidence-based security, we can
now know that:

• MTTD is too long because, out of the 10 most impactful TTPs, we fail
to detect the 5 that sit left of the “boom” event (for example,
ATT&CK technique T1059 and detections DS0009 and DS0017, as shown in
Table 4-1). Borrowed from military terminology, a “boom” event in the
context of cyberattacks refers to the detrimental critical events
that take place during the attack lifecycle.1

Table 4-1. MITRE ATT&CK, technique T1059

ATT&CK TTP: Technique T1059—command and scripting interpreter
Description: Adversaries may abuse command and script interpreters to
execute commands, scripts, or binaries. In this case, PowerShell abuse.

ATT&CK TTP: Detection DS0009—process creation
Description: Monitor command-line arguments for script execution and
subsequent behavior.

ATT&CK TTP: Detection DS0017—command execution
Description: Monitor log files for process execution through
command-line and scripting activities.

• Because we don’t detect those early TTPs, we don’t find out about
the intrusion until much later in the attack, which is ineffective.
• We are not improving on MTTD because we don’t have the right
controls in place.
• If we enhanced our identity access controls, added segmentation,
and added three new PowerShell rules to our endpoint security tool,
we would be able to detect those initial foothold attempts earlier
in the attack flow.

1 IBM Institute for Business Value, Beyond the Boom: Improving Decision Making in a Secu‐
rity Crisis, January 2018.



Fixing these detection gaps is more actionable and a better way to
improve security.
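To see those gaps at a glance, defenders can map the flow’s techniques
against the detections they actually have. In the sketch below, the
technique IDs are real MITRE ATT&CK identifiers, but which detections
exist is local data, and the entries here are invented for
illustration:

```python
# Attack flow in rough kill-chain order: external remote services,
# scripting, credential dumping, lateral movement, shadow-copy deletion.
attack_flow = ["T1133", "T1059", "T1003", "T1021", "T1490"]
our_detections = {
    "T1490": ["shadow-copy deletion rule"],  # currently our earliest detection
    # Adding the PowerShell rules discussed above would contribute
    # "T1059": ["DS0009", "DS0017"] and move detection left of boom.
}

for technique in attack_flow:
    sources = our_detections.get(technique)
    print(f"{technique}: {', '.join(sources) if sources else 'GAP -- no detection'}")

first = next((t for t in attack_flow if t in our_detections), None)
print(f"Earliest detection point in the flow: {first}")
```

Every “GAP” line is a concrete work item, which is exactly the
actionability that the raw MTTD number lacks.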
We can also go back to our SIEM metric of “number of alerts generated.”
More actionable metrics would provide the SOC with measurements it can
use both internally and with business leaders. For example: on Monday,
the security team reports that it received 50 SIEM alerts. Three were
high severity, so the team triaged those within the first 30 minutes:

• One alert from the SIEM showed deletion of shadow copies on a shared
server (TTP); it translated into a potential ransomware incident
(ATT&CK technique T1490).
• The team investigated, found that the attacker was using external
remote services (ATT&CK technique T1133), and blocked access to them
within one hour of triage (ATT&CK mitigation M1042).
• By doing that, the team blocked 4 of the 10 attacker TTPs related to
that attack flow and reduced the likelihood of that attack flow
materializing by 40%.
• If the team thereby thwarted a ransomware attack, the security team
potentially saved the company USD $4.54 million (the global average
cost of a ransomware attack in 2022).2
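
The arithmetic behind those bullets is simple enough to script. The
likelihood model here is deliberately naive (every TTP in the flow is
weighted equally), and the expected-value framing is one hedged
interpretation; the bullet above treats the full cost as avoided if the
attack is thwarted outright:

```python
flow_ttps = 10
blocked = 4
likelihood_reduction = blocked / flow_ttps
avg_ransomware_cost = 4_540_000  # USD, global average for 2022 per the report

print(f"Likelihood reduction for this attack flow: {likelihood_reduction:.0%}")
print(f"Expected cost avoided: ${likelihood_reduction * avg_ransomware_cost:,.0f}")
```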

What executives are going to look for is how much it cost us to run the
security controls related to this effort, what it cost us to diagnose
and contain this incident, and what cost we avoided. More often than
not, that equation must come out in favor of the security team, showing
that the cost of preventing attacks continues to justify itself.
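
One hedged way to frame that executive equation is sketched below; all
dollar figures are placeholders, not benchmarks:

```python
control_cost  = 120_000    # annual cost of the relevant controls (placeholder)
response_cost =  15_000    # cost to diagnose and contain this incident (placeholder)
cost_avoided  = 4_540_000  # value of the loss prevented (from the example above)

spent = control_cost + response_cost
print(f"Net benefit: ${cost_avoided - spent:,.0f}")
print(f"Return per dollar spent: {cost_avoided / spent:.1f}x")
```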

The Metrics View from the C-Suite


On average in 2022, enterprises spent 9.9% of their IT budgets on
cybersecurity.3 To continue to justify budgets and security
investments, security leaders rely on their peers and board members for
executive buy-in and project funding. Metrics from the security
organization delivered in the boardroom are essential for achieving
that goal, but also for steering a security-aware organizational
culture, garnering support for decisions that treat business risk, and
evaluating the performance of security leadership itself. Beyond senior
management, security metrics are required by regulators, auditors,
insurers, and business partners. All these stakeholders are interested
in seeing metrics that show that security programs translate into
reduced risk and better overall resilience of the business.

2 IBM Security, “Cost of a Data Breach: Report 2022”, 2022.
3 Louis Columbus, “Benchmarking Your Cybersecurity Budget in 2023”, VentureBeat,
February 16, 2023.
With that in mind, the security organization must provide the right
statistics and metrics to the executive team to create visibility and
shared accountability and to foster trust. To do that, metrics have to
be selected to reflect meaningful measurements that quickly allow
executives to understand the situation and act on it, including
strategic objective metrics, control cost/benefit metrics, and the
mitigation of business risk. These metrics should answer the
overarching question: are we taking the right actions to move us toward
our strategic business goals? To that effect, sharing vanity metrics
with the C-suite can work to the security team’s detriment. For
example, if metrics show that there are very few intrusions per month,
executives may conclude that the company is hardly a target rather than
that the security team is doing a great job of mitigating risk. A more
effective way to present metrics is to connect them to the actual
external and internal threats the company faces and to show how the
security team mitigates attack TTPs with measurable control efficiency.
Showing how the security team improves over time, and adding new
metrics as threats and controls evolve, drives the point home for
executives: it shows them that budgets are being spent in a way that
creates positive impact.
Finally, metrics are only one part of the story, and they have to be
connected on one end to the business’s strategy and on the other to the
way security evolves with the organization to continue delivering
value. The evidence-based security approach helps connect with
leadership in a way that any business function can relate to, and it
helps position the security function as a business and innovation
enabler rather than a mere cost of doing business.



Conclusion: Moving Toward
Measurably Better Security

Organizations nowadays understand that cyberattacks target
organizations of every type, size, and geography and that they need to
protect themselves effectively. Organizations that have gone through a
risk assessment program and drafted a business impact analysis likely
already know the potential impact of cyber-related disruptions to their
businesses; what they don’t always know is how well they are keeping
those risks at bay. What remains a challenge for CISOs and their
C-suite peers is measuring how effective security programs are at
reducing risk and what their return on investment is.
In many cases, organizations that want to show proof of reducing risk
take a checkbox approach to assessing their resiliency. They rely on
complying with regulatory requirements or compliance schemes that set
the baseline for being “secure enough” and do very little more if they
don’t “have to.” This approach can only work for so long, and it is
often only when those organizations are ultimately breached that
investments in a proper security program receive support and funding.
Compliance is not a practical guide to building a security strategy,
and as such, it can never be adequate. Moreover, threats vary between
industries and keep evolving over time, requiring organizations to
evolve the way they treat risk as well. The better way forward is to
move from compliance-driven risk management to data-driven risk
management.
Moving to data-driven risk management requires, well, data. And
obtaining this data has to fit the goal: assessing and validating the
security program’s effectiveness. NIST recommends that organizations
undertake an information security continuous monitoring (ISCM)
program1 to collect information readily available from security
controls and measure it against pre-established metrics. But while this
is one step in the right direction, it is not sufficient. Owning a
security control and seeing it operate within a pre-established range
does not necessarily mean it will remain effective during an attack
scenario. Security cannot be assumed. To know how controls hold up
against adversarial tactics, they have to be tested in relevant
scenarios; the results then become the data that is most meaningful for
validating the security program’s effectiveness in reducing risk.
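
As a final sketch, results from such scenario testing can be rolled up
into exactly this kind of validation data. The scenario names and
outcomes below are hypothetical:

```python
from collections import Counter

# Hypothetical outcomes from emulating relevant attack scenarios
# against production controls; entries are invented for illustration.
results = [
    ("ransomware: shadow-copy deletion", "prevented"),
    ("ransomware: PowerShell stager",    "detected"),
    ("ransomware: external remote svc",  "missed"),
    ("data theft: credential dumping",   "detected"),
]

tally = Counter(outcome for _, outcome in results)
total = len(results)
for outcome in ("prevented", "detected", "missed"):
    print(f"{outcome}: {tally[outcome]}/{total} ({tally[outcome] / total:.0%})")
# The "missed" scenarios are the evidence: they identify which controls
# to tune or replace first.
```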
Addressing information security risk is addressing business risk, and
the effectiveness of the security program at keeping risk within the
desired bounds should be expressed in metrics that serve the entire
organization. While some metrics are only applicable to the SOC or the
wider security team, additional metrics must be developed for other
stakeholders and upper management. Those will serve to define the
success of the security program in its role of mitigating the risks
that matter most to business goals.
At the end of the day, businesses exist to thrive in the face of change,
to grow and innovate. Information security must remain an enabler
of those goals and continue to showcase its worth and ROI through
measurably better security.

1 For NIST’s definition of ISCM, see https://oreil.ly/-x87r.



About the Authors
Christopher Frenz is the AVP of IT Security for Mount Sinai South
Nassau. Previously, he was the AVP of Information Security for
Interfaith Medical Center, where, under his leadership, it became one
of the first hospitals to achieve a zero-trust network architecture.
Frenz is a widely sought-after healthcare security expert, and his
expertise has been featured in numerous publications and conferences
around the world.
He serves as the Chair of the AEHIS Incident Response Committee and is
the leader of the OWASP Anti-Ransomware Guide and OWASP Secure Medical
Device Deployment Standard projects. Frenz has won numerous industry
awards for his security contributions, most recently being honored as a
Healthcare Hero by CHIME for his work helping hospitals withstand the
onslaught of cyberattacks that occurred during the COVID pandemic.
Jonathan Reiber is Vice President for Cybersecurity Strategy and Policy
at the cybersecurity start-up AttackIQ, where he leads research,
communications, and strategy initiatives to help organizations become
more secure. Previously, he held a senior fellowship at Berkeley’s
Center for Long-Term Cybersecurity and spent seven years in the Obama
administration as a speechwriter and strategist in the Defense
Department. He is a frequent public speaker on security, technology,
and extremism, and his commentary has appeared in TIME Magazine,
Foreign Policy, Lawfare, The Atlantic Monthly, Fortune Magazine, and
Literary Hub, among others. He published a monograph, “A Public,
Private War,” about public-private cooperation for conflict in
cyberspace, the findings of which were adopted into national law.
