August 2026 — the EU AI Act's high-risk AI provisions take effect. If your product uses AI coding tools and touches a regulated domain, you need a documented, reproducible security audit trail. Most teams have none. Here is what compliance actually requires, and how to build it.
The EU AI Act entered into force on August 1, 2024. It rolled out in phases: prohibited AI practices first (February 2025), general-purpose AI model rules next (August 2025). The third phase — the one that directly affects software engineering teams — hits on August 2, 2026: full enforcement of the high-risk AI system requirements for the use cases listed in Annex III. (High-risk systems embedded in Annex I regulated products, such as medical devices, follow in August 2027.)
High-risk systems include software used in medical devices, vehicles, aviation, critical infrastructure, employment decisions, education, law enforcement, border management, and administration of justice. If your product touches any of these domains — and many B2B SaaS products do — the rules apply to you.
The engineering teams we've spoken to know the deadline exists. Most have not connected it to their day-to-day use of AI coding tools. That is the gap.
The EU AI Act does not ban AI coding tools. It does require that if your product is a high-risk system, you can demonstrate it was built with a documented risk management process. Three articles are directly relevant to how code is written and tested.
Article 9 requires a systematic risk identification, analysis, and mitigation process — documented and updated throughout the product lifecycle. For engineering teams: you need evidence that security risks in your code were identified and addressed, not just asserted. “We use Copilot and review PRs” is not a risk management system. A scan log with severity distribution, per-file findings, and remediation history is.
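As a sketch of what such a scan log could look like in practice (the field names and schema here are illustrative, not CodeSlick's actual format):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    rule_id: str   # e.g. a CWE identifier
    severity: str  # "critical" | "high" | "medium" | "low"
    status: str    # "open" | "remediated" | "accepted-residual"

@dataclass
class ScanRecord:
    commit: str
    findings: list

    def severity_distribution(self) -> dict:
        """The severity histogram an auditor asks for first."""
        dist = {}
        for f in self.findings:
            dist[f.severity] = dist.get(f.severity, 0) + 1
        return dist

# One entry in the per-commit scan log (example data).
record = ScanRecord(
    commit="a1b2c3d",
    findings=[
        Finding("src/auth/session.ts", "CWE-613", "high", "open"),
        Finding("src/api/upload.ts", "CWE-434", "critical", "remediated"),
    ],
)
```

The point is not the schema itself but that each record is per-commit, per-file, and carries a remediation status, so the history can be replayed for an auditor.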
Article 11 requires detailed documentation of the AI system's components, including a description of the software used. If AI coding tools are generating your production code, that software is part of your system. You need to be able to say which components contain AI-generated code, and prove those components were tested. An SBOM is the foundation. An AI inventory is the layer above it.
Article 12 requires automatic logging sufficient to enable traceability over the system's lifetime. For security: you need a record of what vulnerabilities existed at each release, what was remediated, and what was accepted as residual risk. A SARIF file uploaded to GitHub Security after every scan is a record. A verbal code review is not.
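A minimal sketch of that release-to-release traceability, assuming each finding is keyed by a stable rule-and-file identifier (the key format is illustrative):

```python
def release_delta(previous: set, current: set) -> dict:
    """Compare finding IDs between two releases: what was fixed,
    what is new, and what persists as residual risk."""
    return {
        "remediated": sorted(previous - current),
        "introduced": sorted(current - previous),
        "residual": sorted(previous & current),
    }

# Example: findings at release v1 vs. release v2.
v1 = {"CWE-89:src/db.ts", "CWE-79:src/render.ts"}
v2 = {"CWE-79:src/render.ts", "CWE-22:src/files.ts"}
delta = release_delta(v1, v2)
```

Stored per release, this delta is exactly the "remediated versus accepted residual risk" record Article 12 asks you to be able to produce.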
Copilot, Claude Code, Cursor, and every other AI coding tool on the market today generate code without creating a compliance record. They do not log which functions they wrote. They do not attach a security assessment to the code they generate. They do not tell your compliance team that the authentication handler in src/auth/session.ts was 80% AI-generated and has never been scanned.
This creates a specific compliance problem: you cannot produce the Article 11 documentation because you genuinely do not know the composition of your codebase. You know you used Copilot. You do not know which files it touched, how deeply, or whether those files have been through any security process.
The question regulators will ask
“Which components of this system were generated or modified by AI tools, and what evidence do you have that those components were security-tested?”
Compliance does not require eliminating AI coding tools. It requires building a layer around them that produces documentation. A compliant audit trail has five components:
You need a per-file, per-commit record of AI-generated code in your repository. CodeSlick's AI Code Detection engine runs 164 signals — hallucination patterns, LLM fingerprints (GPT-4, Claude, Copilot), and structural heuristics — to identify AI-generated code at the file level. The Shadow AI Footprint appears in every GitHub App PR comment: total AI signal count, affected file count, and fingerprint breakdown per commit. This is the Article 11 foundation.
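A rough sketch of how per-file detection signals could roll up into a footprint summary like the one shown in the PR comment (the payload shape and fingerprint labels are assumptions for illustration, not CodeSlick's internal format):

```python
from collections import Counter

def shadow_ai_footprint(per_file_signals: dict) -> dict:
    """Aggregate per-file AI-detection signals into the three numbers a
    PR-comment summary reports: total signals, affected files, and a
    per-fingerprint breakdown."""
    fingerprints = Counter()
    total = 0
    for signals in per_file_signals.values():
        for fingerprint, count in signals.items():
            fingerprints[fingerprint] += count
            total += count
    return {
        "total_signals": total,
        "affected_files": len(per_file_signals),
        "fingerprints": dict(fingerprints),
    }

# Example input: detection results keyed by file path.
summary = shadow_ai_footprint({
    "src/auth/session.ts": {"gpt-4": 5, "claude": 2},
    "src/api/upload.ts": {"claude": 3},
})
```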
Shadow AI Footprint: 47 signals across 12 files — GPT-4 (23), Claude (18), unknown (6)

A vulnerability record requires more than a one-time scan. It needs severity distribution (critical / high / medium / low), CWE and OWASP mapping, per-file attribution, and a history that persists across releases. CodeSlick runs 306 security checks across five languages, mapped to OWASP 2025 (95% coverage) and CWE identifiers. Every scan produces a structured result that can be stored, versioned, and presented to a compliance auditor.
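To make the attribution step concrete, a minimal sketch of attaching an OWASP category to each CWE-tagged finding (the mapping subset below uses OWASP Top 10 2021 labels for illustration, and the finding schema is hypothetical):

```python
# Illustrative subset of a CWE-to-OWASP mapping table (2021 edition labels).
CWE_TO_OWASP = {
    "CWE-89": "A03: Injection",
    "CWE-287": "A07: Identification and Authentication Failures",
}

def attribute_findings(findings: list) -> list:
    """Attach an OWASP category to each per-file finding so the audit
    record carries both identifier systems."""
    return [
        {**f, "owasp": CWE_TO_OWASP.get(f["cwe"], "unmapped")}
        for f in findings
    ]

attributed = attribute_findings([
    {"file": "src/db.ts", "cwe": "CWE-89", "severity": "critical"},
    {"file": "src/auth/login.ts", "cwe": "CWE-307", "severity": "high"},
])
```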
SARIF (Static Analysis Results Interchange Format) is the machine-readable format GitHub's Security tab accepts. Uploading a SARIF file after each scan creates a timestamped, per-commit security record directly in your repository — exactly the traceability Article 12 requires. CodeSlick generates SARIF natively. You can upload it to GitHub Security with one CLI flag or automatically via the GitHub App.
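For readers who have not seen the format, here is what a minimal SARIF 2.1.0 document looks like. This is a hand-built sketch, not CodeSlick's exporter output; it includes only the core fields GitHub code scanning needs:

```python
import json

def to_sarif(tool_name: str, findings: list) -> dict:
    """Wrap scan findings in a minimal SARIF 2.1.0 envelope. A production
    exporter would also emit rules metadata under the tool driver."""
    return {
        "version": "2.1.0",
        "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
        "runs": [{
            "tool": {"driver": {"name": tool_name}},
            "results": [{
                "ruleId": f["rule_id"],
                "level": f["level"],  # "error" | "warning" | "note"
                "message": {"text": f["message"]},
                "locations": [{
                    "physicalLocation": {
                        "artifactLocation": {"uri": f["file"]},
                        "region": {"startLine": f["line"]},
                    }
                }],
            } for f in findings],
        }],
    }

# Example: one finding serialized for upload.
sarif = to_sarif("example-scanner", [
    {"rule_id": "CWE-613", "level": "error",
     "message": "Session token never expires",
     "file": "src/auth/session.ts", "line": 42},
])
sarif_json = json.dumps(sarif)
```

Because each upload is tied to a commit SHA, the Security tab accumulates exactly the timestamped, per-commit trail described above.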
An SBOM (Software Bill of Materials) in SPDX 2.3 or CycloneDX 1.4 is the standard artifact regulators and procurement offices request for Article 11 documentation. CodeSlick generates both formats on demand — listing every dependency, its version, license, and known vulnerabilities. For AI-assisted products, the SBOM is the baseline; the AI Inventory sits on top of it.
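For reference, a minimal CycloneDX 1.4 document looks like the sketch below (the component data is invented for illustration; a real SBOM lists every dependency, and SPDX 2.3 carries equivalent fields under different names):

```python
import json

# Minimal CycloneDX 1.4 SBOM with a single example component.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "express",
            "version": "4.19.2",
            "purl": "pkg:npm/express@4.19.2",  # package URL identifier
            "licenses": [{"license": {"id": "MIT"}}],
        }
    ],
}
sbom_json = json.dumps(sbom, indent=2)
```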
Article 9 requires not just identifying risks but mitigating them. Pass/Fail Thresholds in CodeSlick enforce security policy at the PR level: a PR with a new critical vulnerability fails the GitHub status check. CLI exit codes propagate security failures to CI/CD pipelines. The enforcement creates a paper trail — a record of every time a security gate was applied and every time a threshold was met or broken.
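The gate logic itself is simple. A sketch of a threshold check that propagates failure through a process exit code (the threshold names and defaults are illustrative, not CodeSlick's configuration):

```python
def security_gate(findings: list, max_critical: int = 0, max_high: int = 3) -> int:
    """Return a process exit code: 0 if the findings are within the
    configured thresholds, 1 if the gate should fail the pipeline."""
    critical = sum(1 for f in findings if f["severity"] == "critical")
    high = sum(1 for f in findings if f["severity"] == "high")
    passed = critical <= max_critical and high <= max_high
    return 0 if passed else 1

# In CI, the exit code is what blocks the merge:
#   sys.exit(security_gate(scan_results))
code = security_gate([{"severity": "critical"}])  # one new critical -> gate fails
```

A nonzero exit code is the universal CI/CD failure signal, which is why propagating it is enough to turn any pipeline into an enforcement record.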
The newest CodeSlick capability closes the compliance loop at the generation layer. The get_repo_security_context MCP tool allows AI coding agents — Claude Code, Cursor, Copilot, any MCP-compatible tool — to query the repository's security history before generating code.
When an agent is about to generate an authentication function, it can retrieve the repo's risk profile: which vulnerability categories recur, which files carry unresolved critical issues, what the current pass/fail threshold status is. The agent generates code with that context — not from scratch.
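The shape of that security context is easiest to see in code. The payload fields below are assumptions for illustration only; the real get_repo_security_context schema may differ:

```python
def summarize_security_context(ctx: dict) -> list:
    """Turn a repo security-context payload (field names assumed for
    illustration) into hints an agent can apply before generating code."""
    hints = []
    for category, count in ctx.get("recurring_categories", {}).items():
        if count >= 3:
            hints.append(f"repo has {count} prior {category} findings; avoid that pattern")
    for path in ctx.get("files_with_open_criticals", []):
        hints.append(f"{path} carries unresolved critical issues; review before extending")
    if ctx.get("threshold_status") == "failing":
        hints.append("security gate currently failing; new findings will block the PR")
    return hints

# Example payload an agent might receive before writing an auth function.
hints = summarize_security_context({
    "recurring_categories": {"CWE-79": 4},
    "files_with_open_criticals": ["src/auth/session.ts"],
    "threshold_status": "failing",
})
```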
This is compliance-relevant because it creates a documented relationship between the security audit trail and the code generation step. The question “what security context was the AI working with when it wrote this?” becomes answerable.
Four months is enough time to build a defensible audit trail. It is not enough time to retrofit a compliance program from scratch after an enforcement notice arrives.
Most security tools treat the scan result as the end point. For EU AI Act compliance, the scan result is the beginning of the documentation chain. What matters is that the result is structured, persistent, version-linked, and retrievable — not just that it existed once.
CodeSlick is designed to produce that chain: SBOM, SARIF, Shadow AI Footprint, per-PR security gate, security history via MCP. Every output is a compliance artifact. The goal was always to build the infrastructure layer that sits between AI coding tools and production — and Articles 9, 11, and 12 describe exactly why that layer matters.
Baseline scan in under 3 seconds. No sign-up required for the WebTool. GitHub App installation takes 2 minutes and starts recording from the first PR.