I asked a question on what I need to do to make my application secure, when somebody told me:
That depends on your threat model.
What is a threat model? How do I make a threat model for my application?
FilipedosSantos' answer does a great job of explaining a formal threat modelling exercise under, for example, the Microsoft STRIDE methodology.
Another great resource is the threat modeling course outline on executionByFork's github.
When I use the term "threat model" on this site, I usually mean something less formal. I generally use it as a response to new users asking "Is this secure?" as if "secure" is a yes/no property. It's usually part of a paragraph like this:
That depends on your threat model. "Secure" isn't a thing; secure against what? Your kid sister snooping on your iPhone? A foreign government soldering chips onto your datacentre equipment? Or something in between?
I really like the Electronic Frontier Foundation's threat modelling framework, which focuses on asking these three questions:
- What are you protecting?
- Who are you protecting it from?
- How many resources can you invest in protecting it?
I really like the way the EFF has written this because these simple, easy-to-answer questions can guide someone with zero background in security towards figuring out "the right amount of security" for them.
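To make this concrete, here is a minimal, hypothetical sketch of what answering the EFF's three questions might look like as a structured record. The class and field names are invented for illustration; they are not part of any EFF framework or real API.

```python
from dataclasses import dataclass


@dataclass
class InformalThreatModel:
    """One informal threat model: the EFF's three questions, answered."""
    protecting: str           # What are you protecting?
    protecting_from: list     # Who are you protecting it from?
    resources: str            # How many resources can you invest?


# The kid-sister scenario: low-value target, casual adversary, tiny budget.
phone_model = InformalThreatModel(
    protecting="photos and messages on my iPhone",
    protecting_from=["kid sister", "opportunistic phone thief"],
    resources="a passcode and five minutes of setup",
)

# A business scenario: higher-value target, motivated adversaries, real budget.
laptop_model = InformalThreatModel(
    protecting="client financial records",
    protecting_from=["organised crime", "disgruntled insiders"],
    resources="a security budget and a part-time specialist",
)

# "Secure" means something different for each model because the answers differ.
for m in (phone_model, laptop_model):
    print(m.protecting, "<-", ", ".join(m.protecting_from))
```

The point of writing it down, even this informally, is that "is this secure?" becomes "does this defence match these answers?", which is a question you can actually evaluate.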
A great definition can be found in this excerpt from the OWASP page about Threat Modelling:
A threat model is essentially a structured representation of all the information that affects the security of an application. In essence, it is a view of the application and its environment through security glasses.
How you make the Threat Model will depend solely on the Threat Modelling methodology applied. One of the most common methodologies used in the industry is Microsoft's, which is based on the STRIDE model of threats.
Usually a Threat Modelling workshop/session is a round table with all the developers, the product owner, security experts and a moderator (it can also be done alone if you are not working on a team). Those involved execute the steps proposed by their methodology in order, and the result is the Threat Model document/artifact.
One of the Microsoft Threat Modelling methodologies defines 5 major steps:

- Defining security requirements
- Creating an application diagram
- Identifying threats
- Mitigating threats
- Validating that threats have been mitigated
The company that I work for uses a similar methodology, and it's required for all products that are under development. One difference that I find quite interesting is that we can either make a Threat Model for the entire product, or make Threat Models for each product use case.
In the end, a Threat Model is the result of many Threat Modelling sessions, where the development team, PO and security experts brainstorm to find possible vulnerabilities, and then use the defined methodology to create the Threat Model document.
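As a rough illustration of what those sessions produce, here is a hypothetical sketch of the artifact as a per-component STRIDE checklist. The component names and findings are invented; a real Threat Model document carries much more detail (diagrams, mitigations, owners, severities).

```python
# The six STRIDE threat categories.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]


def empty_checklist(components):
    """One empty finding list per (component, STRIDE category) pair."""
    return {c: {cat: [] for cat in STRIDE} for c in components}


# Components come from the application diagram drawn earlier in the process.
model = empty_checklist(["login form", "payments API", "audit log"])

# Findings from a brainstorming session are recorded under the matching category.
model["login form"]["Spoofing"].append(
    "credential stuffing against weak passwords"
)
model["audit log"]["Tampering"].append(
    "log entries rewritten by a compromised host"
)

# A simple count of open threats still awaiting mitigation and validation.
open_threats = sum(len(v) for cats in model.values() for v in cats.values())
print(open_threats)
```

The value of the structure is that empty cells are visible: a component with no recorded threats under a category is a prompt for the next session, not evidence of safety.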
A threat model answers the question: what are the reasonably expected threats for the concrete software (or "system")? Emphasis on concrete (== not academic/theoretical) and reasonably (== not overbearing, also known as paranoid).
A paranoid threat model can (quite literally) paralyze everything (not limited to software). An academic/theoretical threat model can increase the cost of defense/mitigation to infinity.
A threat model is about the life and death of what you want to protect and what you have to handle vs. what your customer or "larger system" is expected to handle. Who do you trust or not and why? That "why" part is very important and the answer can't be "because". You are defining the boundary of responsibility.
Defense and mitigation plans are not part of the threat model. Mitigation is for cases where something is not reasonably defensible, or where the perceived threat is by and large nonsense or a fad (there have been a few in the past few years - they make good headlines - the latest courtesy of the NSA).
Examples:
#1 Say you are writing a server for a military contractor to do FEM analysis for engines (or whole devices/vehicles). What is a reasonably expected threat? Denial of service and confidentiality. What is not? Spoofing, tampering, repudiation, elevation of privilege.
Why?
Authentication and authorization and (much stronger) crypto are handled by systems external to your software (you reasonably expect that to be handled by client's "environment" and normally it is). Breaking "integrity" is pointless (submitting broken mesh to analysis), repudiation you don't care about (someone submitted a "broken mesh" or a "mesh that's not really 'their engine'" and then denying it - between irrelevant and none of your business).
Denial of service can really hurt you (server not doing the work == no money) and is plausible (from the proverbial "Russians", to the competition from across the street, to a "general net attack from China" - it has happened, it will happen, and the damage is real). Confidentiality - you can't trust the cloud - not even .gov Azure, even if you are a US company (someone will sell your wireframes to Lockheed), not to mention if your client is Chinese or Russian or German or British... you get the picture.
#2 Say you are writing/porting an accounting or banking software to "as service". What is a reasonably expected threat? Spoofing, tampering, repudiation. What's not? Denial of service. What's maybe? Elevation of privilege (depends on the nature of your software). What's complicated? Confidentiality.
Why? You have to go to the cloud (which will handle DoS) and confidentiality is the legal category for that line of business, protected (or not) by the legal system (defending against a "mole" that's going to blow a whistle on his girlfriend's CEO is none of your business). Your responsibility gets complicated because you are answering to contradicting demands. You need a lawyer.
On the other hand, non-repudiation is more or less the bread and butter of your business and comes up frequently. You might be contractually or even legally required to enable extensive auditing. Tampering is related (if someone proves that tampering is possible, non-repudiation is dead), very lethal, and attractive to an attacker (money, money, money). One can do tampering without breaking the (usual) crypto - your "tampering" has many legs - now what?
Spoofing is not "authentication" - it's a 3rd party being able to record an interaction/transaction (moving money, sales records, everything) without anyone noticing. The 2nd part of spoofing is actually tampering (the ability to change data "on the fly" without anyone noticing) - the actual "man in the middle" attack. The "without anyone noticing" is the defining aspect. One doesn't have to break authentication at all, and it's better if he doesn't (the ultimate "not noticing").
Elevation of privilege may or may not be your problem, depending on what your system provides as a service "over the wire" vs. private/secured channels (which are always someone else's problem), who your client is, and whether you want/have to integrate into a larger system or write your own. You may have to do both, but the important aspect is to know what and why.
See how things can get very different quite easily? When someone asks you "do you have a threat model" he's asking "do you know what you have to defend in your, very particular, case".
Threat modeling is the use of models to consider security. That can be really simple, such as "we consider the random oracle threat model," or it can be a more structured and systematic analytic approach, such as using data flow diagrams to model an application and STRIDE to find threats against it.
I advocate for a four-question framework as central to threat modeling:

- What are we working on?
- What can go wrong?
- What are we going to do about it?
- Did we do a good job?
There are lots of ways to answer each of these - we can model a web app as a state machine. We can use kill chains to address what can go wrong. We can consider eliminate/mitigate/transfer/accept as strategies for dealing with it, and within mitigate there are lots of prioritization approaches and tactics, such as refactoring parsing code or adding TLS.
This framework works because it starts from something which is understood and accessible to engineers - the thing they're working on. It also works because there's explicit time for a retrospective, giving you time to make adjustments and helping you learn.
It also works because it encompasses and frames a lot of the other work - rather than saying "we use STRIDE to threat model," we can say "we use STRIDE to help us figure out what can go wrong," and that moves us from discussing what threat modeling is to discussing different ways to do it.
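The structured approach mentioned at the start - model the application as data-flow elements, then use STRIDE to find threats - can be sketched in a few lines. The applicability table below is a simplification of the usual STRIDE-per-element mapping, and the element names are invented for illustration.

```python
# Which STRIDE categories are typically worth investigating for each kind
# of data-flow-diagram element (a simplified STRIDE-per-element mapping).
APPLICABLE = {
    "external entity": {"Spoofing", "Repudiation"},
    "process": {"Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"},
    "data store": {"Tampering", "Repudiation",
                   "Information disclosure", "Denial of service"},
    "data flow": {"Tampering", "Information disclosure",
                  "Denial of service"},
}


def stride_per_element(elements):
    """Yield (element name, threat category) pairs to investigate."""
    for name, kind in elements:
        for category in sorted(APPLICABLE[kind]):
            yield name, category


# A toy data-flow diagram for a web application.
dfd = [
    ("browser", "external entity"),
    ("web app", "process"),
    ("user database", "data store"),
    ("browser -> web app", "data flow"),
]

threats = list(stride_per_element(dfd))
print(len(threats))  # candidate threats to triage, not confirmed findings
```

Note that the output is a list of questions to ask, not a list of vulnerabilities: each pair still needs the "what are we going to do about it?" and "did we do a good job?" steps.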
This is a software-centric approach, and there are also asset-centric and attacker-centric approaches. Asset-centric approaches tend to fail because asset inventory is difficult and time-consuming; a list often includes things which are diffuse, like reputation. Asset-centric approaches also stumble when a software project team takes them on, because most of the assets are far outside the scope of the project, or identifying assets in the unique control of the project is difficult. Attacker-persona approaches tend to fail because it's impossible to interview most of your attackers, and 'interview the participants' is a key step in making a persona. They are also problematic because making lists of attackers makes you path-dependent: if you fail to include kids, or trolls, or nation states, you miss important threats.