The EU AI Act Isn’t a Ceiling, It’s a Starting Line

I attended a webinar this week that didn’t focus on models or architecture, but on geopolitics.

Hosted by the Copyright Clearance Center and moderated by Roy Kaufman, the session featured Anu Bradford, known for the concept of the Brussels Effect.

The idea is simple, but powerful:

The European Union sets regulation that often becomes a global standard.

We saw it with GDPR. The argument is that we’re seeing it again with the EU AI Act.

A lot of discussion around the 2 August 2026 obligations focuses on readiness deadlines.

What stayed with me from the session was a different framing:

The EU AI Act is not a ceiling. It’s a floor.

That shifts the question from How do we become compliant? to How do we build above the minimum?

That feels like a very different motivation - one grounded in trust, not just obligation.


Why this matters more than a compliance deadline

The 2 August 2026 deadline for high-risk AI system obligations is real, and much of the preparation conversation treats it as the finish line.


Bradford's framing inverts that completely. The EU AI Act is not a ceiling. It is a floor.

The organisations that treat it as a minimum bar - and then build above it - are the ones that will earn genuine trust in their AI systems.


That is a different kind of motivation than "we have to be ready by August."


What this looks like on the ground

I work with teams deploying Copilot Studio agents and Power Platform solutions in public sector, welfare, and healthcare settings - the exact environments named in Annex III of the EU AI Act as high-risk by default.


These are not abstract categories.


Annex III describes employment decisions, social benefits assessments, access to education, and healthcare. It describes the work of the coordinators, caseworkers, and administrators who will be sitting next to these systems every day.





The Brussels Effect means those teams do not get the luxury of waiting to see what Norway decides.


The standard is already set.


If your organisation is deploying an AI system in one of these contexts, you are the deployer under EU law - and the obligations for transparency, human oversight, technical documentation, and post-market monitoring belong to you.


Not to Microsoft. Not to the vendor. To your organisation.


Compliance is the starting point, not the destination

Here is the shift I am taking away from the session.

Governance is not something you retrofit. I have argued this in the context of individual agent builds - the Compliance Hub comes before the agent, not after. What Bradford's framing adds is a longer arc.


The organisations that are building governed AI systems now - before they are legally required to, before auditors are checking, before the public starts asking - are not just compliant.

They are building the kind of institutional trust that the Brussels Effect rewards.

Because once a standard becomes global, the organisations that built to it early look like they were right. Not just careful.


What practitioners should watch next

One thing worth tracking as the EU AI Act continues to roll out: if you are building with Copilot Studio, Power Automate, or Azure AI in regulated workflows, Microsoft's EU AI Act compliance documentation maps what they are responsible for and - critically - what you are responsible for as the deployer.



The 5 things to check right now:


01 Classification / Know your risk level

Map each AI system against Annex III to determine whether it is high-risk by default.




02 Governance / Humans must stay in the loop

Every high-risk system needs a named person who can intervene and override.



03 Transparency / People must know they're talking to AI

Article 50 requires disclosure at the point of contact - in plain language.


04 Data & Bias / Your data is not neutral

Training data, bias assessment, and ongoing monitoring are all required.




05 Documentation / If it's not written, it didn't happen


Annex IV technical documentation and a conformity assessment are required before go-live.


A note from my own work

This has shaped how I’ve approached my own agent builds. With MICO, a prompt coaching agent I built for healthcare teams, governance was designed as the first layer: boundaries, prompt guardrails, and human judgment checkpoints. Not because regulation demanded it. Because trust did.


Start with the inventory

You cannot govern what you have not mapped.
Download the full checklist HERE

And if you attended the webinar earlier this week or are thinking through the EU AI Act implications for your own organisation, I would genuinely like to hear where you are in that process.

Connect on LinkedIn


Also: if you still haven't registered for my session at NMTC (yes, tomorrow), I'll be walking through MICO - a Prompt and Copilot Coach agent I built for healthcare workers in Microsoft 365 Copilot - and showing how ten years of frontline experience shapes the design decisions that matter most in AI. Not the features. The guardrails.


Time: 08:30–09:30 (CEST) · Free · Online via Microsoft Teams (Norwegian)

Register HERE


Resources worth checking out:


European Parliamentary Research Service - The global reach of the EU's approach to digital transformation (January 2024)
