What Is Threat Modeling
Threat modeling is a structured approach to identifying, quantifying, and addressing security risks in an application before they become vulnerabilities in production. Rather than waiting for a penetration test or a breach, threat modeling proactively asks: what can go wrong, and what are we doing about it?
A threat model answers four key questions:
- What are we building? — System architecture, data flows, trust boundaries
- What can go wrong? — Threats classified by methodology (STRIDE, attack trees)
- What are we going to do about it? — Mitigations, controls, and acceptance decisions
- Did we do a good job? — Validation through testing and automated analysis
Threat modeling is most effective when performed during the design phase, before code is written. However, it remains valuable at any stage—existing applications benefit from threat models that uncover architectural risks that unit tests and code reviews miss. The output is a prioritized list of threats with concrete mitigations, giving development teams a security roadmap tied to business risk.
The STRIDE Methodology
STRIDE is a threat classification framework developed at Microsoft that categorizes threats by the security property they violate:
- Spoofing: Pretending to be another user or system. Violates authentication. Example: forging a JWT to impersonate an admin.
- Tampering: Modifying data in transit or at rest. Violates integrity. Example: altering a price parameter in an API request.
- Repudiation: Denying an action was performed. Violates non-repudiation. Example: deleting audit logs after unauthorized access.
- Information Disclosure: Exposing data to unauthorized parties. Violates confidentiality. Example: API endpoint returning full user objects including password hashes.
- Denial of Service: Making a system unavailable. Violates availability. Example: a ReDoS attack pinning a service's CPU with a crafted input string.
- Elevation of Privilege: Gaining unauthorized access levels. Violates authorization. Example: an IDOR flaw letting a regular user read another user's records by changing an object ID.
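The Tampering example above can be made concrete. The sketch below (hypothetical function and data names, not a real API) shows why a price parameter taken from the client is a tampering risk, and the standard mitigation: treat client input as untrusted and look the price up server-side.

```python
# Hypothetical sketch of the Tampering example: an API that trusts a
# client-supplied price can be abused by altering the request body.

CATALOG = {"sku-123": 49.99}  # server-side source of truth (hypothetical)

def checkout_vulnerable(request: dict) -> float:
    # Trusts the client: an attacker can simply send {"price": 0.01}.
    return request["price"] * request["qty"]

def checkout_mitigated(request: dict) -> float:
    # Ignores any client-supplied price; uses the server-side catalog.
    price = CATALOG[request["sku"]]
    return price * request["qty"]

# A tampered request: the attacker has rewritten the price field.
tampered = {"sku": "sku-123", "price": 0.01, "qty": 2}
print(checkout_vulnerable(tampered))  # attacker pays 0.02
print(checkout_mitigated(tampered))   # attacker pays 99.98
```

The mitigation is a design decision, not a patch: the threat model records that prices cross a trust boundary, so the server must never accept them from the client.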
Attack Trees
Attack trees complement STRIDE by mapping the steps an attacker would take to achieve a goal. The root node is the attacker's objective (e.g., "steal user credentials"), and child nodes represent the methods to achieve it. Each leaf node represents a concrete attack vector that can be tested and mitigated. Attack trees help prioritize threats by visualizing which attack paths are most feasible.
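One lightweight way to capture an attack tree is as plain data. The sketch below (hypothetical structure, not any standard tool's format) models the "steal user credentials" example with OR/AND nodes and enumerates the leaf nodes, which are the concrete vectors to test and mitigate.

```python
# Hypothetical attack tree as nested dicts: the root is the attacker's
# goal, internal nodes combine methods with OR/AND, leaves are concrete
# attack vectors.

tree = {
    "goal": "steal user credentials",
    "type": "OR",  # any child path achieves the goal
    "children": [
        {"goal": "phish the user", "type": "LEAF"},
        {
            "goal": "compromise the database",
            "type": "AND",  # all child steps are required
            "children": [
                {"goal": "find SQL injection", "type": "LEAF"},
                {"goal": "exfiltrate password hashes", "type": "LEAF"},
            ],
        },
    ],
}

def leaves(node):
    """Enumerate the concrete attack vectors (leaf nodes) of a tree."""
    if node["type"] == "LEAF":
        return [node["goal"]]
    return [leaf for child in node["children"] for leaf in leaves(child)]

print(leaves(tree))
```

Keeping the tree as data makes it easy to diff across design reviews and to check that every leaf maps to a documented mitigation or test.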
When and How to Threat Model
The most effective time to threat model is during design, but teams that skip it during initial development can still benefit by modeling existing systems. Key moments to threat model include:
- New feature design that introduces authentication, authorization, or data handling changes
- Architecture changes such as adding a new microservice, external API, or database
- After a security incident to identify systemic weaknesses
- Before compliance audits (SOC 2, PCI-DSS, HIPAA)
A practical threat modeling session involves drawing a data flow diagram, identifying trust boundaries (where data crosses from trusted to untrusted zones), applying STRIDE to each boundary crossing, and documenting mitigations. Keep sessions focused: 60–90 minutes with developers who built the system.
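The session output can itself be a simple worksheet: list the data flows, mark which ones cross a trust boundary, and generate one STRIDE question per crossing. The sketch below uses hypothetical flow names to show the mechanics.

```python
# Hypothetical session artifact: apply the full STRIDE checklist to
# every data flow that crosses a trust boundary.

STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information Disclosure", "Denial of Service",
          "Elevation of Privilege"]

flows = [
    {"flow": "browser -> API gateway", "crosses_boundary": True},
    {"flow": "API gateway -> auth service", "crosses_boundary": False},
    {"flow": "auth service -> third-party IdP", "crosses_boundary": True},
]

def stride_worksheet(flows):
    """One (flow, threat category) row per boundary crossing."""
    return [(f["flow"], threat)
            for f in flows if f["crosses_boundary"]
            for threat in STRIDE]

rows = stride_worksheet(flows)
print(f"{len(rows)} threat questions to answer")  # 2 crossings x 6 = 12
```

This keeps a 60–90 minute session bounded: the team answers a fixed, enumerable set of questions instead of brainstorming open-endedly.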
How SAST Validates Threat Models
A threat model is only as good as its implementation. Static Application Security Testing (SAST) validates that the mitigations documented in your threat model are actually present in the code.
- Spoofing mitigations: CodeSlick detects hardcoded credentials, weak JWT validation, and missing authentication checks
- Tampering mitigations: Detects missing input validation, prototype pollution, and mass assignment vulnerabilities
- Information Disclosure: Flags secrets in source code (38 patterns), verbose error messages, and sensitive data in logs
- Denial of Service: Identifies ReDoS patterns and missing rate limiting
- Elevation of Privilege: Catches IDOR patterns, missing authorization checks, and path traversal
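To make the connection concrete, the toy check below illustrates the general shape of a SAST secret-detection rule (this is an illustrative regex, not one of CodeSlick's actual patterns): scan source lines for a hardcoded credential and report where it appears.

```python
import re

# Toy illustration of a SAST secrets check (not CodeSlick's real rule):
# flag AWS-style access key IDs hardcoded in source.
AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")

def scan(source: str):
    """Return (line number, matched token) pairs for suspected keys."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        match = AWS_KEY.search(line)
        if match:
            hits.append((lineno, match.group()))
    return hits

# AKIAIOSFODNN7EXAMPLE is AWS's documented example key ID.
sample = 'client = boto3.client("s3", aws_access_key_id="AKIAIOSFODNN7EXAMPLE")'
print(scan(sample))
```

A finding like this closes the loop with the threat model: the documented mitigation "no credentials in source" becomes an automatically verifiable property of the codebase.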
CodeSlick covers 294 security checks across JavaScript, TypeScript, Python, Java, and Go, mapping every finding to CWE and OWASP classifications—the same frameworks used in threat modeling.