How I Built a Copilot Prompt Coach Agent for Healthcare Teams Using Agent Builder

Now that most organizations have access to Microsoft 365 Copilot Chat, something interesting is happening: you can build simple AI agents without coding, Copilot Studio, or expensive tools.

But in healthcare and public services, the challenge isn’t building agents. It’s building trust.

After 10 years on the frontline supporting children and families, I know how quickly tools fail when they don’t fit reality:

• Staff are busy

• Documentation is heavy

• Risk of sharing sensitive information is constant

So I built MICO, and in this post I'll walk through exactly how it works, what design decisions I made, and why the governance layer matters more than the features.



Who Is MICO?

MICO (Miljøarbeidtjenesten's Copilot Opplæringsagent) is an agent built in Agent Builder with the lite version of Microsoft 365 Copilot, grounded in Microsoft's best practices, the municipality's guardrails, GDPR compliance, Responsible AI principles, and real-world healthcare scenarios.

Not a chatbot, but a Copilot coach designed to make prompting and Copilot use easier, safer, and adapted to the daily bustle of frontline work.


Three problems it solves directly:

1. No time to learn prompting. Frontline staff can't spend hours on prompt engineering theory. MICO coaches them in the moment, inside the tool they're already using.

2. Fear of data exposure. Valid anxiety around PII and GDPR means staff often avoid AI entirely rather than risk a breach. MICO enforces responsible AI guardrails automatically, a safety net that works even when no one is watching.

3. Generic, irrelevant output. Without structure, Copilot produces responses that don't fit regulated care contexts. MICO teaches the CGSE framework so every prompt is purposeful before it's sent.



What MICO is not: It does not access patient records. It does not make clinical decisions. It does not operate outside the Microsoft 365 security boundary. That boundary is the point.

The CGSE Framework

MICO moves adoption beyond the blank prompt box by teaching a four-part structure for every interaction:

→ Context - Define who you are and why you're asking. ("I am a Social Educator drafting a summary for a steering group.")

→ Goal - State exactly what response you need. ("Summarize these observation notes into key themes.")

→ Source - Point Copilot to specific, known data using / for files or # for meetings. This keeps the agent grounded in approved content, not the open web.

→ Expectations - Define the format, tone, and length. ("Warm, short, and supportive tone. 5 bullet points maximum.")


A prompt built on CGSE produces output a professional can actually use - not something they need to rewrite entirely before it's safe to act on.
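Put together, a CGSE prompt might read like this (an illustrative example built from the snippets above; the file name is hypothetical):

```text
Context: I am a Social Educator drafting a summary for a steering group.
Goal: Summarize these observation notes into key themes.
Source: /Observation-notes-week-42.docx
Expectations: Warm, short, and supportive tone. 5 bullet points maximum.
```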

Responsible AI Guardrails in Practice

This is the part of the build that took the most time - and it's the most important.

Before MICO answers any question involving clinical information, it triggers a safety message:

→ AI should not replace professional judgment

→ Do not use AI output to inform patients directly

→ Always verify against approved medical or organizational sources

MICO does not simply retrieve a definition. It first surfaces the guardrail message, then provides safe educational context intended for the professional's own understanding - with the explicit note that the professional, not the agent, is the decision-maker.

This is what Human-in-the-Loop looks like in practice. Not a feature toggle. A design principle.
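Agent Builder agents are configured with natural-language instructions, not code, so this is only a sketch of the guardrail-first flow described above. The keyword list and function names are my own illustrative assumptions, not MICO's actual configuration:

```python
# Illustrative sketch only: models the "guardrail fires before the answer" behavior.
# The keyword list and the generate() callback are hypothetical stand-ins.

CLINICAL_KEYWORDS = {"medication", "diagnosis", "dosage", "treatment", "symptom"}

SAFETY_MESSAGE = (
    "AI should not replace professional judgment. "
    "Do not use AI output to inform patients directly. "
    "Always verify against approved medical or organizational sources."
)

def respond(question: str, generate) -> str:
    """Surface the guardrail first, then the educational answer."""
    is_clinical = any(k in question.lower() for k in CLINICAL_KEYWORDS)
    answer = generate(question)
    if is_clinical:
        # Guardrail message is prepended, never skipped, for clinical topics.
        return f"{SAFETY_MESSAGE}\n\n{answer}"
    return answer
```

The point of the structure is that the safety message is not a separate feature the user can toggle off; it is wired into the response path itself.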




Why Agent Builder, Not a Template, a Generic Agent, or Full Copilot Studio?

Two options were already available. I didn't use either.

Prompt templates are passive. They give someone a better starting point but don't explain why the structure works and don't give feedback when it's ignored. My colleague didn't need a better prompt; she needed to understand what made a prompt good.

A generic prompt agent carries no context about the environment it's operating in. No clinical boundaries, no understanding of what "approved sources" means in a Norwegian care setting, no guardrails that fire before a clinical question gets answered.

MICO was built with Agent Builder, available within any Microsoft 365 Copilot license at no extra cost. No code, no additional licenses, no separate security configuration. It inherits Purview sensitivity labels automatically and stays entirely within the existing tenant boundary.

MICO was built because the gap wasn't the prompt, it was the understanding, the safety layer, and the feedback loop. 

For organizations at the adoption stage, this is the right starting point. Build trust first. Scale the infrastructure second.


Guardrails, Feedback, and a Library That Learns

Before answering anything clinical, MICO leads with the safety message described above, and only then responds, framed as educational context for the professional's own use.

Beyond that single interaction, MICO gives feedback on each prompt sent, surfaces daily prompting tips inside the tool staff already use, and offers short optional challenges built around realistic care scenarios. The goal is to make the agent unnecessary over time.

The team also has access to a shared prompt library updated based on what actually works in practice. In a regulated environment, a prompt that has been reviewed and tested carries different weight than one written from scratch under pressure. When prompting skill lives in a shared, maintained library, it stays with the team even when individuals move on.
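To make the "reviewed and tested" distinction concrete, a library entry can stay lightweight. A sketch of what one record might hold (all field names and values here are hypothetical, not MICO's actual schema):

```text
Title:        Weekly observation summary for steering group
Scenario:     Summarizing observation notes without including PII
Prompt:       Full CGSE prompt text, ready to copy
Reviewed:     Yes - checked against the municipality's guardrails
Last updated: <date>
```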


What I Learned

A colleague told me she kept "fighting with Copilot." I built MICO the same week.

A tool that knows its boundaries earns trust faster than one that claims to do everything. Govern the process, not just the output: that principle applies to learning systems, care systems, and AI systems equally.

For the enterprise governance layer that sits above what MICO operates within, see my earlier post → The Copilot Control System


If you're planning a Copilot rollout for a clinical or public sector team and want to discuss adoption strategy, governance design, or agent architecture, I'd welcome the conversation.

→ Connect with me on LinkedIn







