Article 4 enforcement active since February 2025

EU AI Act compliance
in 30 minutes

Answer 20 questions about your company and AI systems. We classify your risk level, determine your obligations, and generate all legally required documents.

Free. No account. No credit card.

20 questions
4 documents generated
30 minutes start to finish
€0 to get started

How it works

Three steps to full documentation

01

Describe your AI systems

Tell us what AI tools your company uses, who deploys them, what decisions they affect, and who the end users are. The questionnaire takes 15 to 20 minutes.

02

We classify your risk level

The system applies the Act's classification logic: Article 5 prohibited uses, Article 6 and Annex III high-risk categories, Article 6(3) exceptions. You see the result immediately with full article citations.

03

Download your documents

Every document is generated instantly, filled with your company details, and formatted for adoption. Download as HTML, print as PDF, or copy into your existing policy management system.

What you get

All four required documents

The EU AI Act creates documentation obligations across four areas. We generate each one, pre-filled with your details.

Art. 4 Required now

AI Literacy Policy

Documents the AI literacy programme your organisation must maintain for all staff working with or affected by AI systems. Covers training requirements, roles, and review schedule.

Arts. 6, 9 + Annex III All companies

Risk Classification Memo

A formal analysis of where each AI system falls in the Act's four-tier risk hierarchy. Includes Article 6(3) exception assessment and a full list of applicable obligations by tier.

Art. 26 High-risk only

Usage Policy for Deployers

Defines permitted uses, prohibited uses, human oversight requirements, input data standards, and incident reporting procedures for high-risk AI systems your organisation deploys.

Art. 50 If user-facing AI

AI Interaction Disclosure

Required whenever users interact with AI systems directly. Covers disclosure wording, synthetic content labelling, complaints channels, and the rights of affected persons.

Why AIActComply

A starting point, not a black box

Written against the final text

Every document template is drafted against Regulation (EU) 2024/1689 as published in the Official Journal of the European Union. Article citations are specific and verifiable.

No external dependencies

Documents are generated from structured templates, not AI models. No API keys. No per-document fees. No usage data sent anywhere. What you type stays in your browser session.

A foundation for legal review

We generate the first draft. A qualified lawyer can review, adjust, and adopt it. That review costs a fraction of a ground-up engagement. We are not a law firm and the output is not legal advice.

Questions

What people ask before starting

Does the EU AI Act apply to my company?

If your company is established in the EU, or if you offer AI-enabled products or services to people in the EU, the Act applies to you. That includes companies headquartered outside the EU if their AI tools reach EU users. Almost every company using AI tools today has obligations under Article 4 (AI literacy), which became enforceable in February 2025.

Is this really free? What's the catch?

Free to use, no account required, no credit card. Documents are generated from structured templates — there are no per-document API costs. We may offer premium features (custom branding, multi-system assessments, ongoing monitoring) later. The core compliance toolkit stays free.

Are these documents legally valid?

They are templates drafted against the final published text of Regulation (EU) 2024/1689. They are not legal advice, and we are not a law firm. You should have a qualified lawyer review and formally adopt them before treating them as your official policy. Most firms find a short legal review of a completed draft costs far less than building documents from scratch.

What if I use multiple AI systems?

Run the questionnaire once for each AI system. The AI Literacy Policy covers your organisation as a whole, so you only need one of those. The Risk Classification Memo, Usage Policy for Deployers, and AI Interaction Disclosure are each specific to one AI system. It takes about 10 minutes per additional system once you have the first one done.

What are the fines if we ignore this?

For prohibited AI practices: up to €35 million or 7% of global annual turnover, whichever is higher. For violations of most other obligations (high-risk requirements, transparency): up to €15 million or 3% of turnover. For supplying incorrect information to supervisory authorities: up to €7.5 million or 1% of turnover. The Act entered into force in August 2024; the prohibitions on certain AI practices and the Article 4 AI literacy obligations became applicable in February 2025.

Does a chatbot or writing assistant count as high-risk?

Almost certainly not. General-purpose productivity tools — ChatGPT, Copilot, Gemini, writing assistants, coding helpers, internal Q&A bots — are typically classified as limited-risk. High-risk classification is reserved for AI systems that make or influence decisions in specific sensitive domains like hiring, credit scoring, medical diagnosis, or law enforcement. The questionnaire will confirm this for your specific situation.

Get your compliance documents today

Free, instant, and fully pre-filled. Book the lawyer review after you have something to show them.

Start the questionnaire

20 questions — about 20 minutes