Security Incident · April 2026

The “Non-Sensitive” Env Var Myth: What the Vercel Breach Actually Exposed

Vercel splits environment variables into “sensitive” and “non-sensitive.” In April 2026, attackers read the non-sensitive ones. The distinction that felt like a safety net turned out to be an attack surface.

April 20, 2026
9 min read
Security Research

TL;DR

  • Vercel's April 2026 breach exposed customer env vars that weren't explicitly flagged “sensitive.” A compromised third-party AI tool (Context.ai) gave attackers access to an employee's Google Workspace — and from there, to production environment data.
  • The “non-sensitive” label is a categorization problem masquerading as a security control. DB hostnames, internal API endpoints, feature flags, and OAuth redirect URIs don't look secret — but attackers chain them into full compromise.
  • The real problem starts before deployment: secrets and credentials that never should have been in env vars in the first place, or that were hardcoded in source during development and never caught.
  • CodeSlick detects 38 secret patterns before you push — in code, config files, and .env files — with zero false positives on placeholder values and test data.

What happened at Vercel

On April 19, 2026, Vercel published a security bulletin disclosing unauthorized access to internal systems. The attack chain is worth understanding precisely, because it follows a pattern that is going to become more common as AI tooling proliferates.

1. Initial vector: Context.ai — a third-party AI tool used by a Vercel employee — was compromised by an attacker.

2. Lateral movement: The attacker used that access to take over the employee's Google Workspace account via OAuth access or session hijacking.

3. Data access: From Google Workspace, the attacker pivoted into Vercel's internal systems and read environment variables for a subset of customer projects.

4. Partial protection: Env vars explicitly flagged "sensitive" in Vercel's UI were stored encrypted and were not accessed. Non-sensitive ones were.

Vercel engaged Mandiant and law enforcement. Services stayed operational. Affected customers were contacted and advised to rotate credentials, review activity logs, and enable sensitive variable protection. The right response — but it came after the exposure, not before.

Why “non-sensitive” is a broken security category

Every platform that handles secrets eventually creates a two-tier system: sensitive things and everything else. It feels like a reasonable UX decision. Marking every env var as sensitive is noise; marking nothing is negligent. A checkbox lets developers express intent.

The problem: the taxonomy is based on how a value looks, not what an attacker can do with it.

Developers mark STRIPE_SECRET_KEY as sensitive because the name contains "secret." They don't mark DATABASE_HOST because it's just a hostname. But for an attacker who already knows the database port and has tried common credential defaults, the hostname is the last piece they need.

Here are the “non-sensitive” variable categories that consistently appear in post-breach forensics:

  • Internal service URLs (DATABASE_HOST, REDIS_URL, INTERNAL_API_BASE_URL): network topology exposure — tells attackers what services exist and where.
  • OAuth redirect URIs (NEXTAUTH_URL, OAUTH_CALLBACK_URL): enables redirect-based phishing and open redirect attacks.
  • Feature flags & config (ENABLE_ADMIN_PANEL=true, DEBUG_MODE, RATE_LIMIT_OVERRIDE): reveals privileged attack surfaces that may have weaker validation.
  • Third-party app IDs (STRIPE_PUBLISHABLE_KEY, POSTHOG_KEY, GITHUB_APP_ID): app IDs alone enable targeted phishing and abuse of linked services.
  • Infrastructure identifiers (AWS_ACCOUNT_ID, GCP_PROJECT_ID, REGION): precise enumeration for cloud-targeted attacks — no guessing.
  • Partial credentials (DB_USERNAME, without the password): knowing the username leaves only the password to guess, sharply narrowing the brute-force search.

None of these contain a “secret” in the traditional sense. All of them contribute to a successful attack. The “sensitive” checkbox creates a mental model where unmarked vars are implicitly safe. They are not.
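The categories above can be expressed as simple name-pattern rules. The sketch below is purely illustrative (the rule set and the riskOf helper are invented for this post, not a CodeSlick API): it shows how an auditor might triage a project's "non-sensitive" variables by name.

```typescript
// Hypothetical triage rules, mirroring the categories described above.
// Order matters: more specific patterns (OAuth URLs) are checked first.
const RISK_RULES: Array<[RegExp, string]> = [
  [/(CALLBACK|REDIRECT|NEXTAUTH)_?URL/, "OAuth redirect abuse"],
  [/(HOST|_URL|ENDPOINT)$/, "network topology exposure"],
  [/(ENABLE_|DEBUG|_OVERRIDE)/, "privileged surface disclosure"],
  [/(ACCOUNT_ID|PROJECT_ID|REGION)$/, "cloud enumeration"],
  [/USERNAME$/, "partial credential"],
];

// Return the first matching risk category for an env var name, or null.
export function riskOf(varName: string): string | null {
  for (const [re, risk] of RISK_RULES) {
    if (re.test(varName)) return risk;
  }
  return null; // no known risk pattern — which still isn't proof it's safe
}
```

Run over a real project's variable list, a sketch like this tends to show that most "non-sensitive" vars fall into at least one category.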

The attack vector you need to think about: AI tooling as a supply chain threat

The initial compromise in Vercel's breach wasn't a phishing email or a leaked password. It was a third-party AI productivity tool — Context.ai — that had OAuth access to an employee's account.

This is the pattern to watch. AI tools in 2026 almost universally request broad OAuth scopes: “Read your email,” “Access your calendar,” “View your Google Drive.” When one of those tools is compromised — whether through a vulnerability in the tool itself, a supply chain attack on its dependencies, or a breach of the tool vendor — every account that granted it OAuth access becomes a pivot point.

This is exactly the threat model CodeSlick's MCP security checks were designed for.

MCP servers — the protocol powering AI agent tool use in Claude Desktop, Cursor, and others — run as local processes with your credentials. If an MCP server's tool handler is vulnerable to command injection or prompt injection, an attacker who can influence the LLM's inputs can pivot through the MCP layer into your local environment. The Vercel breach is that attack — with Context.ai as the pivot — at the infrastructure level.
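To make the MCP injection risk concrete, here is a minimal sketch of the pattern (a hypothetical tool handler; all names are invented for illustration, not taken from any real MCP server). The unsafe version would interpolate LLM-influenced input into a shell command; the hardened version validates the input shape and avoids the shell entirely.

```typescript
import { execFile } from "node:child_process";

// UNSAFE (illustrative): exec(`ping -c 1 ${host}`) would let an input like
// "x; cat ~/.aws/credentials" run a second command in the same shell.

// Hardened sketch: reject anything that doesn't look like a hostname,
// then use execFile so arguments are never interpreted by a shell.
const HOSTNAME_RE = /^[A-Za-z0-9]([A-Za-z0-9.-]{0,251}[A-Za-z0-9])?$/;

export function isSafeHostname(host: string): boolean {
  return HOSTNAME_RE.test(host);
}

export function pingTool(host: string): void {
  if (!isSafeHostname(host)) {
    throw new Error(`rejected suspicious tool input: ${host}`);
  }
  // execFile passes each argument directly to the binary: no shell, no injection
  execFile("ping", ["-c", "1", host], (err, stdout) => {
    if (!err) console.log(stdout);
  });
}
```

The validation step matters because in an MCP setting the "user input" may have been authored by whatever content the LLM last read.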

What a CodeSlick scan finds in a typical Next.js/Vercel project

We ran CodeSlick's secrets detector across a representative Next.js project scaffolded from create-next-app with a standard auth + database setup. These are the patterns that appear before deployment — in source code, config files, and committed environment examples — that feed directly into the attack surface the Vercel breach exploited.

CRITICAL · SEC-CREDS-001

Hardcoded credentials in source files

The most common finding across every stack we scan. A developer uses a real credential during local development, commits it to speed things up, and intends to replace it “before merge.” It ships. Sometimes it ships as an env var reference that was never wired up.

What CodeSlick flags
// lib/db.ts
const client = new PrismaClient({
  datasources: {
    db: {
      url: "postgresql://admin:S3cr3tPass@prod-db.internal:5432/myapp"
    }
  }
})

// lib/auth.ts
const secret = "jwt_secret_hardcoded_during_dev_replace_me"  // never replaced

// .env.example (committed to git — real values left in by mistake)
DATABASE_URL=postgresql://admin:S3cr3tPass@prod-db.internal:5432/myapp
STRIPE_SECRET_KEY=sk_live_abc123realkey
Why it matters post-Vercel breach: If an attacker reads your env vars from the platform and finds DATABASE_URL is a non-sensitive var containing a real password, the "non-sensitive" label was meaningless. The credential was the problem — not the label.
HIGH · SEC-CREDS-003

API keys in client-side code (exposed to the browser)

Next.js prefixes client-exposed env vars with NEXT_PUBLIC_. Developers sometimes use that prefix for convenience on vars that should never reach the client. The browser bundle exposes them to anyone who loads the page — no breach required.

What CodeSlick flags
// Inadvertently exposing a server-side key to the browser bundle
const stripe = new Stripe(process.env.NEXT_PUBLIC_STRIPE_SECRET_KEY!)
//                                   ^^^^^^^^^^^^ wrong prefix

// OpenRouter / AI provider key leaked client-side
const openai = new OpenAI({ apiKey: process.env.NEXT_PUBLIC_OPENROUTER_KEY })
HIGH · SEC-CREDS-007

Committed .env files with real values

.env is in .gitignore. .env.local usually is too. But .env.example and .env.staging often aren't — and they frequently contain real credentials that "just need to be updated before going live." CodeSlick detects credential patterns in all .env.* files, regardless of whether they're gitignored.
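A simplified sketch of this kind of detection (two illustrative regexes only; CodeSlick's actual 38 patterns are more involved): scan each line of an .env-style file and flag values that look live rather than placeholder.

```typescript
// Illustrative only: two of the many value shapes a scanner might flag.
const LIVE_VALUE_PATTERNS: RegExp[] = [
  /sk_live_[A-Za-z0-9]{8,}/,             // Stripe live-mode secret key shape
  /postgres(ql)?:\/\/[^:\s]+:[^@\s]+@/,  // DB URL with an inline password
];

// Return the lines of an .env-style file that match a live-credential shape.
export function flagEnvLines(contents: string): string[] {
  return contents
    .split("\n")
    .filter((line) => LIVE_VALUE_PATTERNS.some((re) => re.test(line)));
}
```

Note that the logic is name-agnostic: it fires on the value, so a real password leaks whether the file is called .env, .env.example, or .env.staging.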

MEDIUM · SEC-CREDS-012

Private keys and certificates in the repository

RSA private keys, PEM certificates, and JWT signing keys committed to version control. Appears in repos with GitHub App integrations, self-signed TLS for local dev, and service-to-service auth. Once it's in git history, rotation isn't enough — the history needs to be rewritten.

What CodeSlick flags
// Found in: lib/github-app.ts
const privateKey = `-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEA2a2rwplBQLF29amygykEMmYz0+Kcj3bKBp29S2rWPHO...
-----END RSA PRIVATE KEY-----`
CodeSlick Secrets Detector — 38 patterns across all supported languages
Pattern category               | Examples detected                                | Severity
API keys                       | OpenAI, Stripe, SendGrid, Twilio, AWS, GCP       | Critical
Database URLs with credentials | postgres://, mongodb://, mysql://                | Critical
Private keys / certificates    | RSA, EC, PEM, PKCS8                              | Critical
JWT secrets                    | NEXTAUTH_SECRET, JWT_SECRET, signing keys        | High
OAuth secrets                  | client_secret, app_secret, consumer_secret       | High
Webhook secrets                | STRIPE_WEBHOOK_SECRET, GITHUB_WEBHOOK_SECRET     | High
.env files with real values    | .env.example, .env.staging, .env.production      | High
Password patterns              | Hardcoded password= / passwd= / pwd= assignments | Medium

The defense: catch secrets before they ever reach a platform

Vercel's breach happened at the platform level — credentials that were already deployed. Platform-level protections (sensitive flags, access controls, audit logs) are necessary but insufficient. The problem starts in the developer's local environment, before a commit is ever made.

CodeSlick's CLI pre-commit hook blocks the commit if secrets are detected. No network call. No CI round trip. The secret never reaches your repository, your platform, or any vendor's storage — regardless of how their “sensitive” classification works.

Install in 30 seconds
# Install the CodeSlick CLI
npm install -g @codeslick/cli

# Initialize pre-commit hook in your repo
codeslick init

# The hook now runs on every git commit:
# ✓ 38 secret patterns (zero false positives on placeholders/test values)
# ✓ All 5 languages: JavaScript, TypeScript, Python, Java, Go
# ✓ Blocks commit if secrets found — with exact line numbers and remediation guidance
# ✓ Pass/fail thresholds configurable via .codeslick.yml
What a blocked commit looks like
$ git commit -m "add db connection"

CodeSlick pre-commit check...

  CRITICAL  lib/db.ts:14
  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  Hardcoded database credentials detected
  Rule: SEC-CREDS-001 · CVSS 9.1

  Found: postgresql://admin:S3cr3tPass@prod-db.internal:5432/myapp
  Fix:   Replace with process.env.DATABASE_URL

  HIGH      .env.example:3
  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  Real API key detected in committed .env file
  Rule: SEC-CREDS-007 · CVSS 8.2

  Found: STRIPE_SECRET_KEY=sk_live_...
  Fix:   Use a placeholder value: STRIPE_SECRET_KEY=sk_live_YOUR_KEY_HERE

2 secret(s) found. Commit blocked.
Run `codeslick fix` to apply suggested remediations.

Before git push

Secrets caught at commit time — before they reach any remote, any CI system, or any deployment platform.

Zero false positives on placeholders

Placeholder values like YOUR_KEY_HERE, sk_test_*, and example.com are excluded. You ship clean, not paralyzed.
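One way to get that behavior (a hypothetical heuristic sketch, not CodeSlick's real allowlist) is an explicit placeholder filter that runs before any finding is reported:

```typescript
// Hypothetical placeholder heuristics: values matching any of these are
// treated as test/example data and never reported as findings.
const PLACEHOLDER_PATTERNS: RegExp[] = [
  /YOUR_[A-Z_]*(KEY|SECRET|TOKEN)/,      // YOUR_KEY_HERE-style templates
  /^sk_test_/,                           // Stripe test-mode keys
  /example\.(com|org|net)/,              // documentation domains
  /(changeme|replace.?me|placeholder)/i, // common filler values
];

export function isPlaceholder(value: string): boolean {
  return PLACEHOLDER_PATTERNS.some((re) => re.test(value));
}
```

The design choice is suppression before reporting: a noisy scanner gets uninstalled, so filtering obvious filler is as important as catching real keys.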

GitHub App: PR-level blocking

If a secret gets past the pre-commit hook, CodeSlick's GitHub App catches it on the PR and fails the check — before merge to main.

The lesson: platform labels are not a security control

Vercel will improve. They'll add better defaults for sensitive flags, tighter OAuth scope review for third-party tools, and more granular access controls. Those are good changes. They won't solve the underlying problem.

The underlying problem is that credentials — real credentials — end up in places they shouldn't. They end up in source files during development. They end up in .env.example when someone copies a real value for reference. They end up as “non-sensitive” env vars because the developer didn't recognize the risk, not because the value was actually low-risk.

The control that actually prevents platform-level exposure is keeping secrets out of the platform in the first place — or ensuring that anything that reaches the platform is either rotatable in under five minutes or genuinely non-sensitive (think: public feature flags). That means catching them at the source: in your editor, in your pre-commit hook, in your PR check.

The three-layer defense

  1. Pre-commit hook — CodeSlick CLI blocks commits with secrets. No network. Instant. Zero false positives on test values.
  2. PR check — CodeSlick GitHub App fails the PR if a secret slips past the local hook. Stops it before merge.
  3. Platform hygiene — Mark every env var as sensitive on Vercel (or equivalent). Enable audit logs. Review third-party tool OAuth scopes quarterly.

Layers 1 and 2 are the ones that prevent the Vercel-style breach. Layer 3 is defense in depth for when layers 1 and 2 fail.

Scan your repo before the next breach

CodeSlick detects 38 secret patterns, 306 security vulnerabilities, and malicious dependencies — in your editor, your CI pipeline, and your pre-commit hook. Free for open-source. No account required for the WebTool.

The pre-commit hook is free. Unlimited local scans. Works offline.