The EU AI Act Doesn't Care What Tool You Use. It Cares What You Do With It.

The EU AI Act doesn't ask what AI tool you use. It asks what you're doing with it — and most companies are in the wrong tier.

The regulation most companies are misreading

Most companies catalogue their AI use by product. Which tools they've subscribed to, which ones the team has started using, which integrations are live. It feels like a reasonable compliance approach... until you realise the EU AI Act doesn't work that way.

The Act doesn't ask what tool you're using. It asks what you're doing with it.

That single distinction is where most AI compliance strategies fall apart. And with full enforcement of the high-risk requirements landing in August 2026, the window for closing the gap between "we use AI responsibly" and actually knowing your regulatory tier is narrowing fast.

Four tiers, one principle

The EU AI Act organises AI systems into four risk tiers: prohibited, high-risk, limited-risk, and minimal-risk. The prohibited category covers the expected extremes... social scoring systems, tools that manipulate behaviour without user knowledge, facial-recognition databases built by scraping images from the internet or CCTV footage. Banned outright as of February this year.

The three tiers below that are where most businesses actually live.

Minimal risk covers the majority of everyday AI use. Drafting emails. Summarising documents. Writing blog posts. The regulation has nothing to say here. No compliance obligations, no registration requirements.

Limited risk adds one main obligation: transparency. If someone is interacting with an AI system or being shown AI-generated content... a chatbot on your website, an AI-generated image... they should know it. Disclosure, not permission. Straightforward in theory; inconsistently applied in practice.

High risk is where obligations become substantial. Conformity assessments before deployment. Documented human oversight. Ongoing monitoring. Registration in an EU database. The categories the Act specifies are precise: employment and HR systems, credit scoring, access to education and healthcare, critical infrastructure, law enforcement, legal proceedings. What connects them is consequence... these are decisions that materially affect someone's job prospects, financial access, or health outcomes.

Prohibited carries fines of up to €35 million or 7% of global annual turnover, whichever is higher. No deployment, no exceptions.

One company. Three products. Three different regulatory positions.

Here's where the use-case logic plays out in practice.

Take a mid-sized company using AI across a few teams. Their marketing team uses an AI writing tool to draft copy and run competitive research. Minimal risk. Nothing to see here.

Their customer service team runs a chatbot handling first-line support queries. Limited risk. They need to tell users they're talking to an AI... something most well-built products now do by default, though how clearly varies.

Their HR team uses an AI tool to screen CVs and rank candidates before a human reviewer ever sees them. High risk. The Act explicitly lists employment-related AI in its high-risk category. The obligations that follow... documentation, transparency to affected individuals, human oversight, conformity assessment... are substantial.

Same company. Same broad position of "we use AI tools." Three completely different places in the regulation. The person who bought the enterprise subscription may not know that the third use case moved the whole organisation into a different tier.

Credit and healthcare follow the same pattern. A bank using AI to determine loan eligibility or interest rates is operating in high-risk territory... not because the model is sophisticated, but because the output affects something fundamental about a person's financial life. A diagnostic support tool in healthcare faces real scrutiny. A wellness app suggesting hydration habits does not. The line the regulation draws is consequence, not industry.

The deployer question most businesses aren't asking

The regulation separates providers (the companies that build AI systems) from deployers (the organisations that put them to work). Most of the public compliance conversation focuses on the providers: OpenAI, Google, Anthropic, the frontier labs.

But deployers carry real obligations too, particularly in high-risk categories. Buying a compliant product doesn't transfer compliance. How the tool is deployed, what human oversight actually exists, whether individuals know AI was involved in decisions about them... those questions sit with the organisation doing the deploying.

A recruiter using an off-the-shelf CV screening tool doesn't inherit compliance by virtue of the vendor's certifications. The deployer owns the implementation. That's the assumption worth pressure-testing before August 2026.

Most organisations catalogue AI use by subscription and tool name. The Act catalogues by task, data, decision, and who gets affected by the output. Those two catalogues rarely match. The gap between them is where the regulation will produce surprises.
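To make the contrast concrete, here is a minimal sketch of what a task-based catalogue might look like for the example company above. Everything in it is illustrative: the tool names are made up, and the tiering rules are a crude approximation of the logic described in this post, not the Act's actual classification criteria and certainly not legal advice.

```python
from dataclasses import dataclass

# Illustrative sketch only. Field names, tier labels, and the rules below are
# assumptions made for this example -- not the Act's official taxonomy.

@dataclass
class AIUseCase:
    task: str       # what the system actually does
    data: str       # what data it touches
    decision: str   # what decision the output feeds into
    affected: str   # who is affected by that output
    tool: str       # the vendor name -- recorded, but not what sets the tier

CONSEQUENTIAL_DECISIONS = {"hiring", "credit", "education access", "healthcare access"}

def rough_tier(uc: AIUseCase) -> str:
    """Crude illustration of tiering by use case rather than by tool."""
    if uc.decision in CONSEQUENTIAL_DECISIONS:
        return "high-risk"      # consequential decisions about people
    if uc.affected == "end users interacting with the system":
        return "limited-risk"   # transparency obligations apply
    return "minimal-risk"       # e.g. internal drafting and summarisation

catalogue = [
    AIUseCase("draft marketing copy", "public web content", "none",
              "internal team", "WritingToolX"),
    AIUseCase("first-line support chat", "customer queries", "none",
              "end users interacting with the system", "ChatbotY"),
    AIUseCase("screen and rank CVs", "applicant data", "hiring",
              "job applicants", "ScreenerZ"),
]

for uc in catalogue:
    print(f"{uc.task} ({uc.tool}): {rough_tier(uc)}")
```

Run against those three use cases, the same company comes out with three different tiers... which is exactly the point.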

If you want to map your AI use cases against the correct tiers before enforcement kicks in, get in touch. We work with operations and compliance teams doing exactly this.

Originally published on Substack. Subscribe for weekly insights.

EU AI Act · AI Compliance · AI Regulation · Risk Management
