The EU AI Act deadline is 4 months away. Does your AI agent pass?

By Linas Valiukas
Tags: EU AI Act, AI agents, compliance, OpenClaw, GDPR, European businesses, SMBs

On August 2nd, the EU AI Act’s remaining provisions become enforceable. Every AI system operating in Europe will need to meet its risk tier’s requirements — or the company deploying it faces fines up to €35 million or 7% of global annual revenue, whichever is higher.

I wrote a general overview of the AI Act for small businesses back in March. That post covers the basics — risk tiers, what “limited risk” means, why GDPR matters more than you think. Go read it if you haven’t.

This post is about something more specific: AI agents. Not ChatGPT in a browser tab. I’m talking about systems like OpenClaw, Claude Code, and the growing wave of autonomous AI tools that send emails on your behalf, manage your calendar, handle client communications, execute code, and make decisions without asking you first. These raise compliance questions that a simple chatbot doesn’t.

And 88% of organizations using AI aren't ready for what's coming in August.

Why agents are different from chatbots

When you ask ChatGPT to draft an email, you read the draft, maybe edit it, and hit send yourself. You’re the human in the loop. The AI is a tool. You maintain control.

An AI agent is different. You tell OpenClaw “check my inbox every morning, summarize anything urgent, and send follow-up reminders to clients who haven’t responded in 3 days.” Then you go to work. The agent reads your email, interprets urgency, writes messages, and sends them — potentially to dozens of people — without you reviewing each one.

That difference matters enormously under the AI Act.

Article 14 requires that high-risk AI systems be designed so humans can “effectively oversee” them — understand their capabilities and limitations, interpret their output, decide not to use them, or stop them. For deployers, that means assigning oversight to people with the “necessary competence, training and authority.”

An agent running in the background, autonomously sending emails to your clients at 6am, is hard to square with “effective human oversight.” Not impossible. But you’d better have documented how oversight actually works in your setup.

Where most AI agents land in the risk tiers

Quick refresher. The AI Act sorts systems into four buckets:

Unacceptable risk — banned outright. Social scoring, manipulative AI, most real-time biometric surveillance. Your small business isn’t doing this.

High risk — eight categories in Annex III: biometrics, critical infrastructure, education, employment, essential services access (credit scoring, benefits), law enforcement, migration, justice. If your AI agent makes hiring decisions, scores creditworthiness, or triages patient intake, you’re in high-risk territory. Full conformity assessment, technical documentation, risk management systems, human oversight protocols, the works.

Limited risk — transparency obligations under Article 50. You must tell people when they’re interacting with AI. AI-generated content must be labeled. This is where most business AI agents land.

Minimal risk — no specific obligations beyond general product safety law.

Here’s where it gets tricky for agents. A chatbot that helps you draft internal memos? Minimal risk, probably. An agent that autonomously sends client-facing emails, manages appointments, and handles billing reminders? That’s interacting directly with natural persons on your behalf. Article 50 kicks in: those people need to know they’re dealing with AI, “at the latest at the time of the first interaction.”

And if your agent handles anything adjacent to employment decisions, credit, or access to essential services — even indirectly — you might be closer to high-risk than you think. An AI agent that triages job applications for your 15-person firm? High risk. An agent that decides which patients get priority scheduling at your dental practice? Potentially high risk.

The lines aren’t always obvious.
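
If it helps to make that concrete, here's a rough triage sketch in Python. The Annex III categories are from the Act itself; the field names and the decision logic are my simplified reading of the rules, not legal advice.

```python
from dataclasses import dataclass

# Rough, illustrative triage against the AI Act's risk tiers.
# The Annex III areas are real; the decision logic is a simplification, not legal advice.

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

@dataclass
class Agent:
    name: str
    decision_areas: set          # domains the agent influences, even indirectly
    interacts_with_people: bool  # talks to clients, applicants, patients?
    generates_content: bool      # emails, documents, images, audio

def triage(agent: Agent) -> str:
    """First-pass risk tier for a deployed agent."""
    if agent.decision_areas & ANNEX_III_AREAS:
        return "high risk: conformity assessment, documentation, oversight protocols"
    if agent.interacts_with_people or agent.generates_content:
        return "limited risk: Article 50 transparency obligations"
    return "minimal risk: no AI-Act-specific obligations"

print(triage(Agent("application-triage", {"employment"}, True, True)))
print(triage(Agent("email-assistant", set(), True, True)))
```

The code isn't the point. The point is that these questions have to be answered per agent, and "even indirectly" does a lot of work in that first check.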

Article 50: what “tell people it’s AI” actually means

The transparency requirements under Article 50 sound simple. They’re not, if you’re running an agent.

Four obligations:

1. Disclose AI interactions. If your AI agent talks to a person, that person must know it’s AI. Exception: when it’s “obvious from the point of view of a reasonably well-informed, observant and circumspect” person. A chatbot widget on your website with a robot avatar? Probably obvious. An AI agent sending polished emails from your business address with your signature? Not obvious at all.

2. Mark synthetic content. AI-generated text, audio, images, and video must be marked in a machine-readable format and be “detectable as artificially generated or manipulated.” The European Commission’s draft Code of Practice on AI labeling is still being finalized. But the obligation exists regardless.

3. Disclose emotion recognition. If you’re using AI to read emotional states (some customer service tools do this), people need to know. Relevant for restaurant groups or consulting firms using AI-powered sentiment analysis on calls.

4. Label deepfakes. AI-generated or manipulated media must be disclosed. Less relevant for most SMBs, unless you’re generating marketing videos with AI voices or synthetic presenters.

For a business running OpenClaw to handle client emails, the first two are the ones that bite. Every email your agent sends needs some form of AI disclosure. Every document it generates needs to be identifiable as AI-produced. And that needs to happen at the machine-readable level too — not just a line of text at the bottom.

How you implement this is still somewhat open. The Code of Practice is in draft form. But “we’re waiting for the final guidance” isn’t a defense if enforcement starts August 2nd and your agent has been sending unlabeled AI emails for months.
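
As one illustration of what that could look like for agent-sent email, here's a sketch that adds both a visible footer and a custom machine-readable header. The header names are placeholders I made up; as far as I know there's no standard header for this yet, so whatever convention you pick, write it down as part of your documentation.

```python
from email.message import EmailMessage

DISCLOSURE = "This message was drafted by an AI assistant and sent on behalf of Example Accounting."

def build_disclosed_email(to_addr: str, subject: str, body: str) -> EmailMessage:
    msg = EmailMessage()
    msg["From"] = "office@example-accounting.eu"  # hypothetical sender
    msg["To"] = to_addr
    msg["Subject"] = subject
    # Human-readable disclosure the recipient actually sees.
    msg.set_content(f"{body}\n\n--\n{DISCLOSURE}")
    # Machine-readable markers. "X-AI-Generated" is a placeholder header name,
    # not a standard; document whichever convention you adopt.
    msg["X-AI-Generated"] = "true"
    msg["X-AI-Disclosure"] = DISCLOSURE
    return msg

# Hand the message to your existing SMTP setup, e.g.:
#   import smtplib
#   with smtplib.SMTP("localhost") as smtp:
#       smtp.send_message(build_disclosed_email("client@example.com", "Follow-up", "..."))
```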

The OpenClaw problem

I need to talk about OpenClaw specifically because it’s the most popular open-source AI agent right now — 250,000+ GitHub stars, the fastest-growing open-source project in history — and it has a compliance profile that should worry any European business owner.

The security crisis alone is disqualifying for most regulated use cases. Between January and March 2026, the project saw a steady stream of vulnerability disclosures, malicious skills in its marketplace, and reports of agents leaking data.

China’s experience is instructive. The “raise a lobster” craze saw hundreds of people lining up at Tencent’s Shenzhen headquarters to get OpenClaw installed. Local governments offered grants up to $1.4 million for companies building on it. Then agents started leaking company financials and personal data. China’s Ministry of State Security issued formal warnings. Users who’d paid engineers $72 for installation started paying more for uninstallation.

Now think about this through the AI Act lens. Article 15 requires high-risk systems to achieve appropriate levels of “accuracy, robustness, and cybersecurity.” Even for limited-risk deployments, GDPR requires “appropriate technical and organizational measures” to protect personal data. An AI agent with 512 known vulnerabilities, running on a platform where one in five marketplace skills was malicious, doesn’t meet that bar.

OpenClaw has patched aggressively — version 2026.4.2 centralized HTTP auth, hardened provider routing, and added durable Task Flows with managed state. The security posture is genuinely better than it was in February. But the governance question remains open. Creator Peter Steinberger joined OpenAI in February and OpenClaw is moving to a foundation, but governance structure and additional maintainers haven’t been announced. For compliance purposes, you’re depending on a project in institutional transition.

None of this means you can’t use OpenClaw. It means you need to be deliberate about how, and document your risk assessment.

The deployer trap

The AI Act distinguishes between providers (who build AI systems) and deployers (who use them in their business). If you install OpenClaw or Claude Code to automate your operations, you’re a deployer. The obligations are different but real.

Deployers of high-risk systems must:

  • Follow the provider’s usage instructions
  • Assign human oversight to competent, trained people
  • Monitor system performance continuously
  • Report incidents without delay
  • Retain system logs
  • Conduct fundamental rights impact assessments for sensitive cases (credit decisions, insurance, public services)
  • Complete Data Protection Impact Assessments where GDPR applies

Even for limited-risk systems, deployers have transparency obligations and GDPR duties. The “I just installed it and let it run” defense doesn’t fly.

This is where small businesses get caught. A 10-person accounting firm sets up an AI agent to handle client communication. The agent runs for months. Nobody’s monitoring its outputs systematically. Nobody’s documented the data flows. Nobody’s assigned formal oversight responsibility. Then something goes wrong — the agent sends confidential financial data to the wrong client, or a prompt injection attack causes it to exfiltrate information — and there’s no incident response protocol, no logs, no documentation.

That’s not a hypothetical. That’s what I see in nearly every small business running AI agents without professional setup.

The Digital Omnibus wildcard

There’s a chance the August deadline slips. The European Commission proposed a “Digital Omnibus” package in late 2025 that could push high-risk obligations to December 2027 — but only for Annex III systems, only if harmonized standards aren’t ready, and only if the European Parliament approves it.

That’s a lot of “only ifs.”

The Commission also proposed reducing administrative burden by 25% overall and 35% for SMEs by 2029. Good intentions. But 2029 isn’t 2026, and good intentions don’t pause enforcement.

My advice: treat August 2nd as the deadline. If the Omnibus passes and buys you 16 months, great — you’ll be ahead of everyone who waited. If it doesn’t pass, you’re not scrambling in July.

What you should do in the next 4 months

I’m going to be more specific than my previous post because agents require specific steps that generic AI compliance checklists miss.

Month 1: Inventory your agents and their permissions.

Write down every AI agent running in your business. Not just the ones you set up — ask your team what they’ve installed on their own. 54% of small business owners are using AI tools now, and plenty of those are unsanctioned agent installations nobody in management knows about.

For each agent, document:

  • What it does (email, scheduling, document generation, etc.)
  • What data it accesses (client data, financial records, HR files)
  • What actions it takes autonomously (sending messages, modifying files, making decisions)
  • What model it connects to and where that model runs
  • Who set it up and who currently maintains it

This inventory is the foundation. Everything else depends on it.
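
A minimal sketch of what one inventory entry might look like, kept as structured data so it's easy to update and easy to hand over. The field names just mirror the checklist above; use whatever format your team will actually maintain.

```python
# One entry per agent. Keep the file in version control so changes are dated.
AGENT_INVENTORY = [
    {
        "name": "client-email-agent",
        "purpose": "summarize inbox, send follow-up reminders to clients",
        "data_accessed": ["client contact details", "email contents", "invoice status"],
        "autonomous_actions": ["sends emails without per-message review"],
        "model": "hosted LLM via API",            # record the actual provider
        "model_location": "EU region (verify!)",  # where inference actually runs
        "installed_by": "office manager, Feb 2026",
        "maintained_by": "owner",
    },
]
```

Ten entries like this, honestly filled in, already answer most of what a regulator, or your own incident response, will ask first.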

Month 2: Classify risk and fix your transparency gap.

Map each agent to a risk tier. Most will be limited risk. Some might be high risk if they touch employment, credit, or health decisions — even tangentially. An agent that decides appointment priority at a healthcare practice might qualify.

Then fix your transparency compliance. Every client-facing AI interaction needs disclosure. Practically:

  • Add AI disclosure to email signatures used by agents (“This message was drafted with AI assistance and reviewed by [your team/your name]”)
  • Add visible AI labels to generated documents
  • Update your website if a chatbot or agent interacts with visitors
  • Document your labeling approach for machine-readable content marking

Month 3: Set up oversight and incident response.

Assign a specific person as the human overseer for each agent. That person needs to understand what the agent does, review its outputs regularly, and have the authority to shut it down. For a 10-person business, this is probably the owner or office manager. The key is that it’s documented and the person actually does it — not just a name on paper.

Write a one-page incident response plan. If an agent sends wrong information to a client, leaks data, or gets compromised by prompt injection, what happens? Who’s notified? What’s the first action? This doesn’t need to be elaborate. It needs to exist.
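
A cheap way to make the oversight and log-retention pieces concrete is to write every autonomous action to an append-only log the overseer actually reviews. A sketch, assuming you can hook the point where your agent executes an action:

```python
import json
import logging
from datetime import datetime, timezone

# One JSON line per autonomous action. This is what your overseer reviews regularly
# and where your incident response starts when something goes wrong.
logging.basicConfig(filename="agent_actions.log", level=logging.INFO, format="%(message)s")

def log_agent_action(agent: str, action: str, target: str, detail: str) -> None:
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,   # e.g. "send_email", "modify_file"
        "target": target,   # e.g. recipient address, file path
        "detail": detail,
    }))

log_agent_action("client-email-agent", "send_email", "client@example.com",
                 "3-day follow-up reminder")
```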

Month 4: Document everything and run a test.

Pull it all together into one document. Your AI inventory, risk classifications, transparency measures, oversight assignments, incident response plan. One document. Not 200 pages — aim for 10-15.

Then run a test. Have someone on your team try to find out what AI systems your business uses, who oversees them, and what happens if something goes wrong. If they can’t answer those questions from your documentation, neither can a regulator.

The cost question

Initial compliance estimates for SMEs run from $500K to $2M. That number is terrifying if you’re a 15-person business. It’s also wildly inflated for what most SMBs actually need.

If your agents handle limited-risk tasks (email automation, scheduling, document drafting), your compliance cost is mostly your time doing the inventory and documentation, plus maybe a few hours with a consultant to sanity-check your risk classifications.

If you’re in or near high-risk territory — employment decisions, healthcare triage, financial assessments — you need professional help. That’s not optional, and the cost depends on complexity. But it’s nothing close to $500K unless you’re running dozens of high-risk systems.

The most expensive option? Doing nothing and hoping enforcement targets the big companies first. That was a defensible bet in 2025. With 4 months left, it’s a gamble.

Self-hosting changes the compliance math

I said this in the previous post and I’ll say it again: self-hosted AI agents simplify compliance dramatically. When your agent runs on your own hardware, there’s no third-party data processor to manage, no international data transfer, no surprise Terms of Service changes.

That matters more for agents than for chatbots. A chatbot processes a query and returns a response. An agent has persistent access to your email, your files, your calendar, your client database. The data exposure surface is massive. Keeping that on infrastructure you control — whether a physical machine in your office or a European VPS — eliminates entire categories of compliance risk.

It won’t solve everything. You still need transparency labeling, human oversight, documentation. But you won’t be scrambling to produce Data Processing Agreements and Transfer Impact Assessments for a US-based AI provider that processes your client data on servers in Virginia.

I wrote more about the data privacy tradeoffs between cloud, self-hosted, and GPU server options and the cost breakdown for running AI agents under €50/month.

The bigger picture

The AI Act isn’t trying to stop you from using AI. It’s trying to make sure that when AI systems affect people’s lives — make decisions about them, communicate with them, process their data — there’s accountability. Someone is responsible. Someone is watching. Someone can explain what happened.

That’s a reasonable ask. And for most small businesses, meeting it isn’t technically hard. It’s just… boring. Documentation. Transparency labels. Oversight assignments. Incident response plans. Nobody starts a business because they love compliance paperwork.

But August 2nd is coming. Over half of organizations lack even a basic AI system inventory. If you spend one afternoon this week writing down every AI tool your business uses and what data flows through each one, you’ll already be ahead of most of your competitors.

That’s not a bad place to start.


If you want help auditing your AI agent setup for EU AI Act compliance, I offer free 30-minute discovery calls. No sales pitch — just an honest look at where you stand and what needs attention before August.

Book a free call. I'll tell you exactly what I'd automate first, what hardware you need, and what the whole thing costs. No surprises.
