The EU AI Act: what your business needs to do before August 2026
I’ve read the EU AI Act. All 458 pages. You shouldn’t have to.
Most of the commentary out there is written for large corporations with dedicated legal teams and compliance departments. If you’re running a 5-person consultancy or a 20-person accounting firm, that advice is borderline useless. So here’s my attempt at translating what actually matters for small businesses.
The short version
The EU AI Act classifies AI systems into risk tiers. Most small business uses — email automation, document drafting, scheduling, data extraction — fall into the “limited risk” or “minimal risk” categories. You’re not building facial recognition systems or making hiring decisions with AI. You’re trying to automate your inbox.
That said, “minimal risk” doesn’t mean “no obligations.” There are still rules, and ignorance won’t protect you when enforcement starts.
When does this actually kick in?
The rollout is staged. Prohibited AI practices (things like social scoring) were banned in February 2025. Transparency requirements for general-purpose AI models hit in August 2025. The big one — full enforcement of high-risk AI system rules — arrives August 2026.
If you’re deploying AI agents for standard business automation, August 2026 is your date. Five months from now.
What “limited risk” means for your business
Most AI tools that small businesses use fall into the limited risk category. The main obligation here is transparency: you need to tell people when they’re interacting with AI-generated content. If your AI agent sends emails on your behalf, the recipient should know an AI drafted that message.
Practically speaking, this means:
- Add a note to AI-generated emails (“This draft was prepared with AI assistance”); a small sketch of automating this follows the list
- Label AI-generated documents internally
- Keep a record of where you’re using AI in your workflows
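If you want that disclosure applied automatically instead of trusting everyone to remember, a few lines of code will do it. Here's a minimal sketch in Python; the footer wording is my suggestion, not language prescribed by the Act:

```python
# Append an AI-assistance disclosure to outgoing drafts.
# The disclosure wording is a suggestion, not text mandated by the AI Act.
DISCLOSURE = "\n\n--\nThis draft was prepared with AI assistance."

def add_disclosure(draft: str) -> str:
    """Return the draft with the disclosure footer, added at most once."""
    if DISCLOSURE.strip() in draft:
        return draft  # already labelled, don't stack footers
    return draft + DISCLOSURE

if __name__ == "__main__":
    email = "Hi Anna,\n\nAttached is the proposal we discussed yesterday."
    print(add_disclosure(email))
```

Wire something like this into whatever sends your drafts, and the transparency obligation largely takes care of itself.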
That’s… honestly not that bad. It’s mostly documentation and honesty.
Where small businesses actually get tripped up
The tricky part isn’t the AI Act itself. It’s how it interacts with GDPR, which you’re already supposed to comply with.
When you feed client data into ChatGPT to draft a proposal, you’ve just transferred personal data to a US-based processor. That requires a Data Processing Agreement, a Transfer Impact Assessment, and documentation of your legal basis for the transfer. Most small businesses do none of this. They just paste text into ChatGPT and move on.
The AI Act adds another layer. You now need to document which AI systems you’re using, for what purpose, and what data flows through them. If you’re using five different cloud AI tools with unclear data practices, your compliance surface just got a lot bigger.
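Documenting this doesn't require special software. A spreadsheet is fine; if you'd rather keep the register in version control next to your other policies, here's a minimal sketch. The fields and the example entries are my suggestion for what an auditor would want to see, not a format the Act prescribes:

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class AISystem:
    tool: str                 # e.g. "ChatGPT"
    processing_location: str  # where the vendor processes your data
    purpose: str              # what you actually use it for
    data_categories: str      # what flows through it
    legal_basis: str          # your GDPR basis for any transfer
    dpa_signed: bool          # is a Data Processing Agreement in place?

# Illustrative entries only; the legal_basis column needs a real assessment.
REGISTER = [
    AISystem("ChatGPT", "US", "proposal drafting",
             "client names, project details", "legitimate interest", True),
    AISystem("local open-source model", "on-premises", "contract review",
             "full client documents", "n/a (no transfer)", True),
]

with open("ai_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fl.name for fl in fields(AISystem)])
    writer.writeheader()
    writer.writerows(asdict(system) for system in REGISTER)
```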
Law firms and healthcare providers feel this pressure most acutely. Client confidentiality isn’t optional in those fields — it’s the foundation of the business. I wrote more about what happens to your data when you use cloud AI tools.
Self-hosted AI simplifies compliance (a lot)
This is where I’m biased, and I’ll own that. I set up self-hosted AI agents for a living. But the compliance math genuinely favors self-hosting.
When your AI runs on your own hardware:
- No third-party data processor to manage
- No international data transfers
- No external terms of service that change without warning
- Full audit trail that you control
- The model can’t be retrained on your data because your data never leaves your building
That’s the maximum-privacy option, and it’s ideal for legal practices dealing with attorney-client privilege or medical offices handling patient records. But not every workflow needs that level of isolation. For less sensitive tasks, a cloud AI provider is cheaper and the compliance burden is still manageable with proper documentation. You can also split the difference with a dedicated GPU server running an open-source model — third-party hosted but controlled by you.
The point is: self-hosting simplifies compliance the most, but any of these setups can work as long as you document what runs where, what data flows through it, and on what legal basis. The Act expects that documentation regardless.
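To make “your data never leaves your building” concrete: with a self-hosted runtime, the API call goes to localhost instead of a vendor's cloud. A minimal sketch, assuming you're running an open-source model behind Ollama's local API (one common option; your stack may differ):

```python
import json
import urllib.request

# Everything here talks to localhost. The prompt, and any client data in it,
# never leaves your own machine.
request = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=json.dumps({
        "model": "llama3",                   # any model you've pulled locally
        "prompt": "Summarise this client email in two sentences: ...",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    print(json.load(response)["response"])
```

There's no DPA to sign and no transfer to assess, because there's no third party in the loop.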
What you should actually do right now
Here’s my honest advice, in order of priority:
1. Audit your current AI usage. Write down every AI tool your team uses. ChatGPT, Gemini, Copilot, whatever. Note what data flows through each one. This takes an afternoon, and you’ll probably be surprised by what you find.
2. Check your GDPR compliance first. If you’re not GDPR-compliant with your current AI usage, fix that before worrying about the AI Act. GDPR is already being actively enforced, fines and all; AI Act enforcement is still ramping up.
3. Decide what’s sensitive and what isn’t. Not everything needs the same level of protection. Marketing copy? Fine in the cloud. Client financial data? Keep that on your own systems.
4. Consider self-hosting for sensitive workflows. A self-hosted AI agent running OpenClaw costs EUR 30-50/month for a VPS, handles your most sensitive tasks, and eliminates most compliance complexity. Cloud AI works fine for everything else; a short routing sketch after this list shows one way to make that split explicit.
5. Document everything. The AI Act rewards transparency. Keep records of what AI you use, why, and what safeguards you have. If an auditor asks, you want to have answers.
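Steps 3 and 4 boil down to a routing decision, and it's worth writing that decision down as an explicit rule rather than leaving it to habit. A minimal sketch; the keyword list is a placeholder for your own data classification, not a real policy:

```python
# Route tasks to a local or cloud AI backend based on data sensitivity.
# SENSITIVE_MARKERS is a placeholder; replace with your own classification.
SENSITIVE_MARKERS = ("client", "patient", "salary", "iban", "contract")

def choose_backend(task_text: str) -> str:
    """Return which backend is allowed to handle this task."""
    if any(marker in task_text.lower() for marker in SENSITIVE_MARKERS):
        return "local"  # self-hosted model; data stays in-house
    return "cloud"      # cheaper, fine for non-sensitive work

assert choose_backend("Draft marketing copy for the spring campaign") == "cloud"
assert choose_backend("Summarise the Meyer contract for the client file") == "local"
```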
What I’d skip
Don’t hire a consultant to write you a 200-page AI governance policy. You’re a small business. A clear 5-page document covering your AI tools, data flows, and safeguards is more useful and more likely to actually be maintained.
Don’t panic about enforcement timelines either. The EU historically gives grace periods for SMBs, and the initial enforcement focus will be on high-risk systems and large companies. But “they’ll probably go after the big guys first” is a terrible compliance strategy. Do the basics now while it’s calm.
If you want to talk through what this means for your specific situation, I do free 30-minute discovery calls. No pitch, just an honest assessment of where you stand and what you should prioritize.
Book a free call. I'll tell you exactly what I'd automate first, what hardware you need, and what the whole thing costs. No surprises.