Compliance · April 2026

EU AI Act: What an Audit Trail
for AI-Generated Code
Actually Looks Like

August 2026 — the EU AI Act's high-risk AI provisions take effect. If your product uses AI coding tools and touches a regulated domain, you need a documented, reproducible security audit trail. Most teams have none. Here is what compliance actually requires, and how to build it.

Aug 2, 2026: enforcement deadline · 8 high-risk AI domains · 164 AI code detection signals · 4 months remaining

The deadline most engineering teams are ignoring

The EU AI Act entered into force on August 1, 2024. It rolled out in phases: prohibited AI practices first (February 2025), general-purpose AI model rules next (August 2025). The third phase — the one that directly affects software engineering teams — hits on August 2, 2026: full enforcement of high-risk AI system requirements under Annex I and Annex III.

High-risk systems include software used in medical devices, vehicles, aviation, critical infrastructure, employment decisions, education, law enforcement, border management, and administration of justice. If your product touches any of these domains — and many B2B SaaS products do — the rules apply to you.

The engineering teams we've spoken to know the deadline exists. Most have not connected it to their day-to-day use of AI coding tools. That is the gap.

What the law actually requires from engineering teams

The EU AI Act does not ban AI coding tools. It does require that if your product is a high-risk system, you can demonstrate it was built with a documented risk management process. Three articles are directly relevant to how code is written and tested.

Article 9 — Risk Management

Requires a systematic risk identification, analysis, and mitigation process — documented and updated throughout the product lifecycle. For engineering teams: you need evidence that security risks in your code were identified and addressed, not just asserted. “We use Copilot and review PRs” is not a risk management system. A scan log with severity distribution, per-file findings, and remediation history is.
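To make "a scan log with severity distribution, per-file findings, and remediation history" concrete, here is a minimal sketch of what such a machine-readable record can look like. The `Finding` shape and its field names are illustrative assumptions, not CodeSlick's actual schema:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    severity: str   # "critical" | "high" | "medium" | "low"
    status: str     # "open" | "remediated" | "accepted"

# Hypothetical scan log for one release.
findings = [
    Finding("src/auth/session.ts", "critical", "remediated"),
    Finding("src/api/upload.ts", "high", "open"),
    Finding("src/util/parse.ts", "medium", "accepted"),
]

# The two aggregates an auditor asks for: what existed, and what happened to it.
severity_distribution = Counter(f.severity for f in findings)
remediation = Counter(f.status for f in findings)

print(dict(severity_distribution))  # {'critical': 1, 'high': 1, 'medium': 1}
print(dict(remediation))            # {'remediated': 1, 'open': 1, 'accepted': 1}
```

Stored per release, this is the difference between asserting a risk process and evidencing one.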

Article 11 — Technical Documentation

Requires detailed documentation of the AI system's components, including a description of the software used. If AI coding tools are generating your production code, that software is part of your system. You need to be able to say which components contain AI-generated code, and prove those components were tested. An SBOM is the foundation. An AI inventory is the layer above it.

Article 12 — Record-Keeping

Requires automatic logging sufficient to enable traceability over the system's lifetime. For security: you need a record of what vulnerabilities existed at each release, what was remediated, and what was accepted as residual risk. A SARIF file uploaded to GitHub Security after every scan is a record. A verbal code review is not.

The specific problem AI coding tools create

Copilot, Claude Code, Cursor, and every other AI coding tool on the market generate code without creating a compliance record. They do not log which functions they wrote. They do not attach a security assessment to the code they generate. They do not tell your compliance team that the authentication handler in src/auth/session.ts was 80% AI-generated and has never been scanned.

This creates a specific compliance problem: you cannot produce the Article 11 documentation because you genuinely do not know the composition of your codebase. You know you used Copilot. You do not know which files it touched, how deeply, or whether those files have been through any security process.

The question regulators will ask

“Which components of this system were generated or modified by AI tools, and what evidence do you have that those components were security-tested?”

What a compliant audit trail looks like

Compliance does not require eliminating AI coding tools. It requires building a layer around them that produces documentation. A compliant audit trail has five components:

1. AI Inventory — which code was AI-generated

You need a per-file, per-commit record of AI-generated code in your repository. CodeSlick's AI Code Detection engine runs 164 signals — hallucination patterns, LLM fingerprints (GPT-4, Claude, Copilot), and structural heuristics — to identify AI-generated code at the file level. The Shadow AI Footprint appears in every GitHub App PR comment: total AI signal count, affected file count, and fingerprint breakdown per commit. This is the Article 11 foundation.

Shadow AI Footprint: 47 signals across 12 files — GPT-4 (23), Claude (18), unknown (6)
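The footprint line above is an aggregation over per-file detections. A minimal sketch of that aggregation, using hypothetical per-file counts (the real numbers come from CodeSlick's detection engine, not from hand-written data):

```python
from collections import Counter

# Hypothetical per-file detection results, invented for illustration.
file_signals = {
    "src/auth/session.ts": {"GPT-4": 5, "Claude": 2},
    "src/api/upload.ts": {"GPT-4": 3, "unknown": 1},
    "src/util/parse.ts": {"Claude": 4},
}

# Roll per-file fingerprint counts up into one per-commit summary line.
totals = Counter()
for signals in file_signals.values():
    totals.update(signals)

summary = (
    f"Shadow AI Footprint: {sum(totals.values())} signals across "
    f"{len(file_signals)} files — "
    + ", ".join(f"{model} ({n})" for model, n in totals.most_common())
)
print(summary)
# Shadow AI Footprint: 15 signals across 3 files — GPT-4 (8), Claude (6), unknown (1)
```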

2. Vulnerability Record — what security issues exist

A vulnerability record requires more than a one-time scan. It needs severity distribution (critical / high / medium / low), CWE and OWASP mapping, per-file attribution, and a history that persists across releases. CodeSlick runs 306 security checks across five languages, mapped to OWASP 2025 (95% coverage) and CWE identifiers. Every scan produces a structured result that can be stored, versioned, and presented to a compliance auditor.
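A hedged sketch of one structured, versioned finding. The IDs, field names, and the OWASP category identifier are illustrative assumptions, not CodeSlick's actual output format; what matters is that every finding carries a mapping and a history:

```python
import json

# Hypothetical structured finding; schema is illustrative only.
finding = {
    "id": "codeslick-0193",
    "file": "src/auth/session.ts",
    "line": 42,
    "severity": "critical",
    "cwe": "CWE-798",        # hard-coded credentials
    "owasp": "A07:2025",     # assumed category ID, for illustration
    "first_seen": "v1.4.0",
    "status": "open",
}

# A vulnerability record is the same finding tracked across releases,
# so an auditor can see what was open at v1.4.0 and fixed by v1.5.0.
history = {
    "v1.4.0": {"codeslick-0193": "open"},
    "v1.5.0": {"codeslick-0193": "remediated"},
}

record = json.dumps({"finding": finding, "history": history}, indent=2)
print(record)
```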

3. SARIF Upload — evidence at the point of review

SARIF (Static Analysis Results Interchange Format) is the machine-readable format GitHub's Security tab accepts. Uploading a SARIF file after each scan creates a timestamped, per-commit security record directly in your repository — exactly the traceability Article 12 requires. CodeSlick generates SARIF natively. You can upload it to GitHub Security with one CLI flag or automatically via the GitHub App.
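As a sketch of the mechanics: a minimal SARIF 2.1.0 document, packaged the way GitHub's code-scanning upload endpoint (`POST /repos/{owner}/{repo}/code-scanning/sarifs`) expects it, gzip-compressed and base64-encoded. The finding content and commit SHA are invented for illustration:

```python
import base64
import gzip
import json

# Minimal SARIF 2.1.0 document — the shape GitHub's Security tab accepts.
sarif = {
    "version": "2.1.0",
    "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
    "runs": [{
        "tool": {"driver": {"name": "CodeSlick", "rules": []}},
        "results": [{
            "ruleId": "hardcoded-secret",
            "level": "error",
            "message": {"text": "Hard-coded credential"},
            "locations": [{"physicalLocation": {
                "artifactLocation": {"uri": "src/auth/session.ts"},
                "region": {"startLine": 42},
            }}],
        }],
    }],
}

# GitHub requires the SARIF payload gzip-compressed, then base64-encoded.
payload = {
    "commit_sha": "deadbeef" * 5,   # placeholder 40-char SHA
    "ref": "refs/heads/main",
    "sarif": base64.b64encode(gzip.compress(json.dumps(sarif).encode())).decode(),
}
print(payload["ref"])
```

Each upload lands as a timestamped entry in the repository's Security tab, tied to the commit it describes.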

4. SBOM — software composition for regulators

An SBOM (Software Bill of Materials) in SPDX 2.3 or CycloneDX 1.4 format is the standard format regulators and procurement offices request for Article 11 documentation. CodeSlick generates both formats on demand — listing every dependency, its version, license, and known vulnerabilities. For AI-assisted products, the SBOM is the baseline; the AI Inventory sits on top of it.
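For orientation, a minimal CycloneDX 1.4 document with a single component entry. A real SBOM lists every dependency and is generated by tooling, not written by hand; the dependency shown here is an arbitrary example:

```python
import json

# Minimal CycloneDX 1.4 SBOM with one component. Real SBOMs also carry
# hashes and vulnerability references per component.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [{
        "type": "library",
        "name": "express",
        "version": "4.19.2",
        "purl": "pkg:npm/express@4.19.2",   # package URL, the standard component identifier
        "licenses": [{"license": {"id": "MIT"}}],
    }],
}
print(json.dumps(sbom, indent=2))
```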

5. Security Gates — enforcement, not just observation

Article 9 requires not just identifying risks but mitigating them. Pass/Fail Thresholds in CodeSlick enforce security policy at the PR level: a PR with a new critical vulnerability fails the GitHub status check. CLI exit codes propagate security failures to CI/CD pipelines. The enforcement creates a paper trail — a record of every time a security gate was applied and every time a threshold was met or broken.
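The gate logic itself is simple. A minimal sketch of threshold enforcement via exit codes, illustrative rather than CodeSlick's actual implementation:

```python
from collections import Counter

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def gate(new_findings: list[str], threshold: str = "critical") -> int:
    """Return 1 (fail) if any new finding meets or exceeds the threshold."""
    counts = Counter(new_findings)
    worst = max(new_findings, key=SEVERITY_RANK.__getitem__, default=None)
    if worst is not None and SEVERITY_RANK[worst] >= SEVERITY_RANK[threshold]:
        print(f"FAIL: {counts[worst]} {worst} finding(s) at or above '{threshold}'")
        return 1
    print("PASS")
    return 0

# A nonzero exit code is what fails the CI step (sys.exit(exit_code) in a real script).
exit_code = gate(["medium", "critical"])
print(exit_code)  # 1 — the status check fails
```

Every invocation, pass or fail, is a line in the Article 9 enforcement record.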

Closing the loop: AI agents that know the compliance state

The newest CodeSlick capability closes the compliance loop at the generation layer. The get_repo_security_context MCP tool allows AI coding agents — Claude Code, Cursor, Copilot, any MCP-compatible tool — to query the repository's security history before generating code.

When an agent is about to generate an authentication function, it can retrieve the repo's risk profile: which vulnerability categories recur, which files carry unresolved critical issues, what the current pass/fail threshold status is. The agent generates code with that context — not from scratch.

This is compliance-relevant because it creates a documented relationship between the security audit trail and the code generation step. The question “what security context was the AI working with when it wrote this?” becomes answerable.
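To illustrate, a hypothetical sketch: the response shape of get_repo_security_context is assumed here, and the real tool's schema may differ. The point is only that the agent can branch on recorded security state before generating:

```python
# Hypothetical get_repo_security_context response — illustrative only.
context = {
    "recurring_categories": ["CWE-79", "CWE-798"],
    "files_with_open_criticals": ["src/auth/session.ts"],
    "threshold_status": "failing",
}

# An MCP-aware agent can fold this into its prompt before touching a flagged file.
target_file = "src/auth/session.ts"
if target_file in context["files_with_open_criticals"]:
    guidance = (
        f"{target_file} has unresolved critical findings; "
        f"avoid recurring categories: {', '.join(context['recurring_categories'])}"
    )
else:
    guidance = "no open criticals in target file"
print(guidance)
```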

What to do in the next four months

Four months is enough time to build a defensible audit trail. It is not enough time to retrofit a compliance program from scratch after an enforcement notice. The sequence that works:

  1. Run a baseline scan today. Establish what your current vulnerability state is. You cannot document “we remediated X critical findings” if you do not know what X is. CodeSlick's WebTool scans in under 3 seconds, no sign-up required.
  2. Install the GitHub App on your main repository. Every PR gets a security assessment and Shadow AI Footprint annotation from this point forward. The record starts accumulating.
  3. Generate your SBOM. Download SPDX and CycloneDX outputs. These are the documents your legal team will need when a procurement or compliance questionnaire arrives.
  4. Set a Pass/Fail threshold. Define what “acceptable risk” means for your product. Enforce it in CI. The enforcement record is part of your Article 9 risk management documentation.
  5. Enable SARIF upload to GitHub Security. This creates the per-commit vulnerability record required for Article 12 traceability. One flag. Persistent record.

The audit trail is the product

Most security tools treat the scan result as the end point. For EU AI Act compliance, the scan result is the beginning of the documentation chain. What matters is that the result is structured, persistent, version-linked, and retrievable — not just that it existed once.

CodeSlick is designed to produce that chain: SBOM, SARIF, Shadow AI Footprint, per-PR security gate, security history via MCP. Every output is a compliance artifact. The goal was always to build the infrastructure layer that sits between AI coding tools and production — and Articles 9, 11, and 12 describe exactly why that layer matters.

Start building your audit trail

Baseline scan in under 3 seconds. No sign-up required for the WebTool. GitHub App installation takes 2 minutes and starts recording from the first PR.