Engineering · April 2026

Security is a risk management discipline, not a hygiene checklist.

Most security tools give you a list of findings. They don't tell you which ones actually matter. That's not a feature gap — it's a philosophical problem with how security tooling has been built for the last decade.

Your security scanner just gave you 47 findings.

You fix 3 this sprint. The other 44 stay open — not enough hours, not enough urgency, or you've seen them before and nothing bad happened yet.

That's not a security failure. That's reality. The question is whether your tools are designed for that reality or pretending it doesn't exist.

Earlier this year, Snyk flagged a real transitive vulnerability in our own production repo. We were only parsing package.json — the same shortcut most lightweight scanners take. They were right. We fixed it the same day and shipped full lockfile parsing.

That incident forced us to examine something uncomfortable: we'd built a security tool that had the checklist problem. We were telling users to fix what we flagged without helping them understand which findings actually mattered for their architecture.

That's what this post is about.

The checklist problem

CVSS 9.8: Internal utility function. Called by a scheduled job. No external network access. Unreachable from untrusted input.

CVSS 7.0: Public search endpoint. Unauthenticated. SQL injection. Directly reachable from the internet.

Which one matters? Every checklist tool reports both. The 9.8 looks scarier. The 7.0 is the one that gets you breached.
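
To make the 7.0 concrete, here's roughly what that endpoint looks like. This is a hypothetical Express + node-postgres sketch, not code from any real system:

```typescript
import express from "express";
import { Pool } from "pg";

const app = express();
const db = new Pool(); // connection config omitted

// UNSAFE: req.query.q flows from the internet straight into the SQL string.
// A q like  x%' OR 1=1 --  rewrites the WHERE clause entirely.
app.get("/search", async (req, res) => {
  const result = await db.query(
    `SELECT * FROM products WHERE name LIKE '%${req.query.q}%'`
  );
  res.json(result.rows);
});

// The fix is a parameterized query, which keeps data out of the SQL text:
//   db.query("SELECT * FROM products WHERE name LIKE $1", [`%${q}%`]);
```

A pattern scanner flags the interpolation into the query string; the risk question is whether the route is reachable from untrusted input. Here it plainly is.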

Security as hygiene says: fix everything, zero findings. Security as risk management asks different questions:

The three questions that matter more than any CVSS score

1. Is this code path reachable from untrusted input?
2. Is it exposed publicly, or protected by authentication?
3. What's the blast radius if exploited?

The checklist mindset isn't just a tool problem — it's an industry habit. When every finding is treated with equal urgency, developers learn to treat all of them with equal disregard. Alert fatigue is the direct, predictable result of scoring findings in isolation, divorced from the actual attack surface of the system being scanned.

What CodeSlick actually does

CodeSlick does pattern-based SAST and dependency scanning. It's worth being precise about what that means, and what it doesn't.

What it catches

  • 306 security checks across JS, TS, Python, Java, Go — injection, deserialization, secrets, crypto misuse
  • Full lockfile parsing, not just package.json — entire transitive dep tree, 1,000+ packages per OSV query (see the sketch after this list)
  • 164 signals for AI-generated code: hallucination patterns, LLM fingerprints, framework misuse
  • Supply chain: malicious package patterns, behavioral heuristics for zero-day transitive deps
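
For a sense of what lockfile-based scanning involves, here's a minimal sketch. The parsing is simplified (npm lockfile v2/v3 only) and none of this is CodeSlick's actual implementation; the endpoint and request shape are OSV's public batch API:

```typescript
import { readFileSync } from "node:fs";

// npm lockfile v2/v3 keeps a flat "packages" map, so every transitive
// dependency is already enumerated; no tree walk needed.
const lock = JSON.parse(readFileSync("package-lock.json", "utf8"));

const queries = Object.entries(
  lock.packages as Record<string, { version?: string }>
)
  .filter(([path, meta]) => path.includes("node_modules/") && meta.version)
  .map(([path, meta]) => ({
    package: {
      // keep only the name after the last node_modules/ segment
      name: path.slice(path.lastIndexOf("node_modules/") + 13),
      ecosystem: "npm",
    },
    version: meta.version,
  }));

// One POST covers the whole dependency tree.
const res = await fetch("https://api.osv.dev/v1/querybatch", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ queries }),
});
const { results } = (await res.json()) as {
  results: { vulns?: { id: string }[] }[];
};

results.forEach((r, i) => {
  if (r.vulns?.length) {
    console.log(
      `${queries[i].package.name}@${queries[i].version}:`,
      r.vulns.map((v) => v.id).join(", ")
    );
  }
});
```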

What it doesn't

  • Cross-file taint analysis — if a vulnerability spans three modules, CodeSlick won't trace it today (see the example after this list)
  • Reachability — if a vulnerable dep function is never called in your code, it still gets flagged
  • Architectural design flaws — missing auth layers, insecure component boundaries, logic errors
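
A condensed two-file example of the first limitation, reusing the hypothetical Express and pg setup from the earlier sketch:

```typescript
// db.ts: the sink. In isolation, a pattern scanner can still flag the
// string interpolation here as a likely injection.
export function findUser(db: Pool, id: string) {
  return db.query(`SELECT * FROM users WHERE id = '${id}'`);
}

// routes.ts: the source. Untrusted input enters here and flows into
// findUser. Proving that this line reaches the sink above means tracing
// taint across the module boundary, which CodeSlick doesn't do today.
app.get("/user", async (req, res) => {
  const result = await findUser(db, String(req.query.id));
  res.json(result.rows);
});
```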

These aren't accidents — they're deliberate tradeoffs. Full inter-procedural taint analysis exists. It's what Veracode and Checkmarx sell at enterprise prices with scan times measured in minutes and false positive rates that train developers to start ignoring alerts. We made different choices: sub-5-second scans, findings that are actually findings, noise low enough that the alerts get acted on.

The honest question isn't "why don't you do taint analysis?" It's "what does taint analysis cost in speed and noise, and is that the right tradeoff for your team?" For most teams scanning in CI on every commit, it isn't.

The "economically unsafe" class

Last week, a senior fintech engineer described a pattern that stuck with us: AI-generated payment code that is "economically unsafe."

Looks clean. Passes every test. Works fine — until retries hit, or concurrency, or real transaction volume. Then it charges the card twice. Or deducts the balance twice. Or creates two accounts for the same user.

No CVE covers it. No dependency scanner catches it.

It lives in the logic of the application — in the assumptions the code makes about how the world behaves under pressure. A missing idempotency key on a payment handler. A race condition in an async balance update. A retry loop with no deduplication guard. CVSS doesn't score these. Your test suite stays green. Nothing fires.
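
Here's the idempotency case as a hypothetical sketch. The names are illustrative, and the in-memory map stands in for a database table with a unique key:

```typescript
// Stand-in for a payment-provider call.
async function chargeCard(userId: string, amountCents: number) {
  return { chargeId: `ch_${Date.now()}` };
}

// Economically unsafe: if the response is lost after the charge succeeds,
// the client retries and the card is charged twice. Every test stays
// green, because tests rarely model the retry.
async function payUnsafe(userId: string, amountCents: number) {
  return chargeCard(userId, amountCents);
}

// Safer: the client supplies an idempotency key, and a retried request
// with the same key returns the original result instead of charging again.
const seen = new Map<string, { chargeId: string }>();

async function pay(userId: string, amountCents: number, key: string) {
  const prior = seen.get(key);
  if (prior) return prior;
  const result = await chargeCard(userId, amountCents);
  seen.set(key, result);
  return result;
}

// Even this version has a check-then-set race under concurrency; a real
// implementation needs a unique constraint at the storage layer.
```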

CodeSlick's AI detection catches a subset of this — the structural patterns that LLMs consistently get wrong. But the broader point is directional: the risks that matter in 2026 aren't only in your dep tree. They're increasingly in the code your team wrote with AI assistance at 4pm on a Friday, confident it looked right.

Where this goes

We're building contextual risk scoring into CodeSlick — adjusting findings based on actual exposure. Public API route vs internal utility. Unauthenticated endpoint vs protected resource. Production config vs test fixture.

Same vulnerability signature. Different risk profile. Different priority.
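
A minimal sketch of the idea, with made-up weights and field names; the real scoring model is still being designed:

```typescript
type Exposure = {
  reachableFromUntrustedInput: boolean;
  authenticated: boolean;
  environment: "production" | "test";
};

type Finding = { id: string; baseSeverity: number }; // 0-10, CVSS-like

function contextualScore(f: Finding, e: Exposure): number {
  let s = f.baseSeverity;
  if (!e.reachableFromUntrustedInput) s *= 0.3; // internal-only code path
  if (e.authenticated) s *= 0.7;                // behind an auth layer
  if (e.environment === "test") s *= 0.1;       // test fixture, not prod
  return Math.round(s * 10) / 10;
}

// The two findings from the top of this post, re-ranked:
contextualScore(
  { id: "internal-utility", baseSeverity: 9.8 },
  { reachableFromUntrustedInput: false, authenticated: true, environment: "production" }
); // => 2.1

contextualScore(
  { id: "public-search-sqli", baseSeverity: 7.0 },
  { reachableFromUntrustedInput: true, authenticated: false, environment: "production" }
); // => 7.0
```

The 9.8 drops below the 7.0, which matches the call a security engineer would make by hand.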

It won't replace fast pattern scanning — it extends it. Foundation plus context is what gets you from a list to a decision.

The security industry has spent 20 years building better lists.

A hundred tools will give you more findings, faster, mapped to more frameworks. The unsolved problem is building better judgment — helping teams understand what actually matters in their architecture, under their threat model, with their actual resources.

That's the direction. Fast, honest, context-aware — and always explicit about what it can and can't see.

See what CodeSlick finds in your codebase

The GitHub App is free for one repo. First scan takes about 5 seconds.

Install on GitHub