
CodeSlick MCP Server: 6 Security Tools for Cursor and Claude Desktop

You can now ask your AI coding environment: “Does this code have SQL injection?” or “What AI providers does this application use?” — and get a CVSS-scored, OWASP-mapped answer with remediation guidance. No account. No cloud. Analysis runs on your machine.

Install in 60 seconds

# Prerequisites: npm install -g codeslick
# Then add to ~/.cursor/mcp.json or Claude Desktop config:
{
  "mcpServers": {
    "codeslick": {
      "command": "npx",
      "args": ["-y", "codeslick-mcp-server@latest"]
    }
  }
}

Restart your AI tool. Five security tools are available in your chat context today, with a sixth coming in v1.5.

The Problem: AI Tools Don't Know Your Code Is Vulnerable

When you write code in Cursor or iterate with Claude Desktop, the AI can suggest fixes, refactor logic, and explain patterns. What it cannot do — by default — is run a proper security scanner on what you just wrote. It can guess. It can pattern-match from training data. But it cannot run 306 deterministic security checks with CVSS scores, CWE mappings, and verified remediation examples.

The CodeSlick MCP Server closes that gap. It connects CodeSlick's static analysis engine directly to your AI environment. When you ask your AI to check code for vulnerabilities, it now has a real security tool to call — not just its own pattern memory.

The Six Tools

analyze_code

Runs CodeSlick's full static analysis on a code snippet. Returns a security score, vulnerability list with CVSS scores and OWASP/CWE mappings, and remediation guidance. Supports JavaScript, TypeScript, Python, Java, Go, Terraform, and Kubernetes YAML.

Example interaction:

“Check this Express route for security issues”

# CodeSlick Security Analysis

Security Score: 30/100
Total Vulnerabilities: 3

[CRITICAL] SQL Injection — CVSS 9.8 / OWASP A03:2025 / CWE-89
[HIGH]     Error Exposure — CVSS 7.5 / OWASP A10:2025 / CWE-209
[HIGH]     Unvalidated Input — CVSS 7.3 / CWE-20
306 checks · OWASP 2025 · CVSS 3.1 · 7 languages · Remediation examples
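For concreteness, a handler of this shape (a hypothetical snippet, not taken from CodeSlick's docs) would trigger all three findings: the query is built by string concatenation, the raw error is echoed to the client, and the `id` parameter is never validated.

```javascript
// Hypothetical vulnerable Express-style handler of the kind analyze_code flags.
// CWE-89: user input concatenated straight into SQL.
function buildUserQuery(id) {
  return "SELECT * FROM users WHERE id = '" + id + "'";
}

// CWE-20 / CWE-209: no input validation; raw error internals sent to the client.
function handleGetUser(req, res, db) {
  db.query(buildUserQuery(req.params.id), (err, rows) => {
    if (err) return res.status(500).send(err.stack); // leaks stack trace
    res.json(rows);
  });
}
```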

detect_secrets

Scans source code for 38 hardcoded secret patterns: AWS keys, GitHub tokens, Stripe API keys, database passwords, JWT secrets, private keys, OAuth tokens, and more. Returns line numbers, pattern type, severity, and remediation steps.

38 secret patterns · AWS/GitHub/Stripe/Heroku · DB passwords · JWT secrets · Private keys
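Detectors in this class are simple to sketch. The following is illustrative only — the regexes are well-known public pattern shapes (AWS access key IDs start with `AKIA`; GitHub personal access tokens with `ghp_`), not CodeSlick's actual rule set:

```javascript
// Illustrative secret detectors — public pattern shapes, not CodeSlick's rules.
const SECRET_PATTERNS = [
  { type: "AWS Access Key ID", regex: /\bAKIA[0-9A-Z]{16}\b/, severity: "CRITICAL" },
  { type: "GitHub Personal Access Token", regex: /\bghp_[A-Za-z0-9]{36}\b/, severity: "CRITICAL" },
];

// Return { line, type, severity } for every match, mirroring detect_secrets output.
function findSecrets(source) {
  const findings = [];
  source.split("\n").forEach((text, i) => {
    for (const p of SECRET_PATTERNS) {
      if (p.regex.test(text)) {
        findings.push({ line: i + 1, type: p.type, severity: p.severity });
      }
    }
  });
  return findings;
}
```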

scan_dependencies

Parses a dependency manifest (package.json, requirements.txt, pom.xml, build.gradle, go.mod) and cross-references against known CVEs and 66 flagged malicious packages. Returns CVE IDs, upgrade paths, and severity breakdown.

npm / pip / Maven / Gradle / Go · CVE database · 66 malicious packages · Upgrade recommendations
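The cross-reference step can be pictured as follows — a sketch (an assumed shape, not CodeSlick's internals) that turns package.json dependencies into the query payloads OSV.dev's /v1/query endpoint accepts:

```javascript
// Sketch: map package.json dependencies to OSV.dev /v1/query payloads.
// (Assumed shape — CodeSlick's actual lookup logic is not published.)
function toOsvQueries(manifest) {
  return Object.entries(manifest.dependencies || {}).map(([name, range]) => ({
    package: { name, ecosystem: "npm" },
    // Naive: strip a ^/~ range prefix so a single version can be queried.
    version: range.replace(/^[~^]/, ""),
  }));
}
```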

generate_sbom

Generates a Software Bill of Materials from a dependency manifest. Outputs SPDX 2.3, CycloneDX 1.4, or both. Each component includes name, version, PURL, and license. Satisfies NTIA minimum elements and EU Cyber Resilience Act (CRA) requirements.

SPDX 2.3 · CycloneDX 1.4 · PURL identifiers · NTIA compliant · EU CRA ready
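A minimal component entry carries exactly the fields listed above. This builder is illustrative — the field selection follows the CycloneDX 1.4 JSON schema and the Package URL spec (`pkg:npm/<name>@<version>`), not CodeSlick's output verbatim:

```javascript
// Build a minimal CycloneDX 1.4-style component for an npm package.
function toCycloneDxComponent(name, version, licenseId) {
  return {
    type: "library",
    name,
    version,
    purl: `pkg:npm/${name}@${version}`, // Package URL identifier
    licenses: licenseId ? [{ license: { id: licenseId } }] : [],
  };
}
```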

generate_ai_bom

New

Generates an AI Bill of Materials — a structured inventory of every AI component in a codebase. Detects which AI providers are called, which models are used, which MCP tools are registered, which agent frameworks are imported, and which API keys are referenced. Works on single files or multi-file projects.

This is the tool that answers the question compliance teams are starting to ask: “What AI is this software calling, and with what keys?”

Detected components from a real MCP server codebase:

Providers: OpenAI (gpt-4), Anthropic (claude-3-5-sonnet)
MCP tools: analyze-code, summarize
API keys: OPENAI_API_KEY (env), ANTHROPIC_API_KEY (env)
Frameworks: None detected
13 AI providers · MCP tool registry · 11 agent frameworks · API key audit · EU AI Act ready
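Detection of this kind can be approximated with signature matching. A simplified sketch — the two-entry signature table is illustrative, not the real 13-provider registry:

```javascript
// Simplified AI-BOM provider detection via SDK-import and env-var signatures.
// Signature table is illustrative, not CodeSlick's actual registry.
const PROVIDER_SIGNATURES = {
  OpenAI: { sdk: /["']openai["']/, envKey: /\bOPENAI_API_KEY\b/ },
  Anthropic: { sdk: /["']@anthropic-ai\/sdk["']/, envKey: /\bANTHROPIC_API_KEY\b/ },
};

function detectProviders(source) {
  return Object.entries(PROVIDER_SIGNATURES)
    .filter(([, sig]) => sig.sdk.test(source) || sig.envKey.test(source))
    .map(([name]) => name);
}
```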

check_security_policy

Coming in v1.5

Enforce a .codeslick.yml security policy file inside your AI environment. Define thresholds, blocked severity levels, and required checks — then ask your AI to validate code against your team's policy before merging.
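The .codeslick.yml schema is not yet published, but the enforcement logic it implies can be sketched. The field names below (`minScore`, `blockSeverities`) are invented for illustration:

```javascript
// Hypothetical policy gate — the .codeslick.yml schema is unpublished,
// so minScore and blockSeverities are invented field names for illustration.
function violatesPolicy(report, policy) {
  const blocked = new Set(policy.blockSeverities || []);
  return (
    report.score < (policy.minScore ?? 0) ||
    report.vulnerabilities.some((v) => blocked.has(v.severity))
  );
}
```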

Why AI-BOM Is Different from SBOM

Every compliance framework that mentions software inventory was written before LLMs were embedded in production applications. SBOM tells you which open-source libraries your code links against. It says nothing about which AI model is processing your users' data, which agent framework is making autonomous decisions, or whether your API keys are referenced in source code or pulled from a secrets manager.

The EU AI Act, the NIST AI RMF, and emerging enterprise security policies are all starting to require AI component inventories. “What AI systems does this application use?” is now a question that compliance teams, security auditors, and procurement departments ask — and most engineering teams cannot answer it systematically.

generate_ai_bom answers it. Point it at any codebase and get a structured inventory of every AI component: providers, models, tools, frameworks, and key references. It is the only tool in any MCP server that does this.

Local Analysis. No Account. No Telemetry.

Every analysis runs on your machine. The MCP server shells out to the locally installed CodeSlick CLI — the same binary used in pre-commit hooks and CI pipelines. Your source code never leaves your environment.

Tool               Network calls?   Account required?   Code leaves machine?
analyze_code       No               No                  No
detect_secrets     No               No                  No
generate_sbom      No               No                  No
generate_ai_bom    No               No                  No
scan_dependencies  OSV.dev only     No                  Package names only

Performance in Practice

We ran all five tools against test payloads during integration testing in both Cursor and Claude Desktop. Response times on a MacBook Air M1:

Tool               Test payload                      Response time
analyze_code       23-line JS file, 3 vulns          ~2s
detect_secrets     24-line Python file, 5 secrets    ~2s
scan_dependencies  package.json, 3 deps, 7 CVEs      <1s
generate_sbom      package.json, 3 components        <1s
generate_ai_bom    29-line TS file, 2 providers      <1s

The CLI-based tools (analyze_code, detect_secrets) write a temp file and shell out to the CodeSlick binary — the 1–2 second overhead is the CLI startup cost, not the analysis itself. The in-process tools (SBOM, dependencies, AI-BOM) parse manifests directly and return sub-second.

Installation

The MCP server requires the CodeSlick CLI installed globally. One-time setup:

terminal
npm install -g codeslick

Cursor — add to ~/.cursor/mcp.json:

~/.cursor/mcp.json
{
  "mcpServers": {
    "codeslick": {
      "command": "npx",
      "args": ["-y", "codeslick-mcp-server@latest"],
      "env": {}
    }
  }
}

Claude Desktop (macOS) — add to ~/Library/Application Support/Claude/claude_desktop_config.json:

claude_desktop_config.json
{
  "mcpServers": {
    "codeslick": {
      "command": "npx",
      "args": ["-y", "codeslick-mcp-server@latest"]
    }
  }
}

Claude Code (CLI):

terminal
claude mcp add codeslick -- npx -y codeslick-mcp-server@latest

Restart your AI tool after updating the config. The five tools will appear in the MCP tools panel (hammer icon in Claude Desktop, MCP panel in Cursor).

What's Next

The next release adds check_security_policy — a tool that reads a .codeslick.yml file and validates code against your team's defined security thresholds. The same policy file that blocks a CI pipeline will be enforceable from inside Cursor and Claude Desktop. Define it once, enforce it everywhere.

We are also expanding the MCP-specific check set. Our ecosystem scan of 32 community MCP servers identified recurring patterns — prompt injection vectors, missing tool parameter validation, unsafe data pass-through — that warrant dedicated detection rules beyond the 12 checks in v1.4.

Get Started

Install takes under two minutes. No account, no API key, no cloud dependency.