Not Technical? Start Here: How Prompting Opens the Door to AI
I don’t have a technical background.
And one post about prompting changed everything.
On Friday 24 April at 08:30 CEST, I’m presenting at NMTC - the Norwegian Microsoft Technology Community - on: From Frontline to Low-Code: Building Governed AI Agents Without a Computer Science Degree.
This talk comes from a real shift in how I think about AI - and who belongs in it.
Register HERE (I will be presenting in Norwegian)
Where this started
I spent ten years working in frontline health and social care before I ever touched low-code tools.
For a long time, I assumed that meant I was behind - that AI and technology belonged to people who had always worked in those spaces.
Then I came across something simple: structure your prompts better.
That was the turning point.
Not because it was complex - but because it wasn’t.
How CGSE came about
I didn’t learn prompting from AI. I learned it from social work.
Inspired by Microsoft’s GCSE structure, I adapted it into something that made sense in my world: CGSE - Context, Goal, Source, Expectations.
Not because I needed a new framework.
But because I needed something that would actually stick.
In social work, you always start with context.
You don’t act without understanding the situation. And you work within routines and boundaries - a highly regulated environment with sensitive data - where processes are designed to protect people.
Routines that are not easy to change.
Now add AI into the mix.
We’re told to experiment, to “learn prompting,” to try new techniques.
But most professionals don’t have time to learn a completely new way of thinking.
So I didn’t.
I used what I already knew.
- Start with context (Why do you need this? Who is involved?)
From there, the rest follows:
- A clear goal (what needs to be produced)
- A defined source (what information is safe to use)
- And expectations (how the output needs to land - especially in sensitive contexts)
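If it helps to see the structure laid out, here is a minimal sketch of assembling a CGSE-style prompt as plain text. The function name and the example wording are mine, purely for illustration - they are not from the talk or from Microsoft's guidance.

```python
def build_cgse_prompt(context: str, goal: str, source: str, expectations: str) -> str:
    """Combine the four CGSE parts into a single prompt string."""
    return (
        f"Context: {context}\n"
        f"Goal: {goal}\n"
        f"Source: {source}\n"
        f"Expectations: {expectations}"
    )

# Hypothetical example from a care-coordination scenario:
prompt = build_cgse_prompt(
    context="I coordinate weekly care-team meetings in a regulated healthcare setting.",
    goal="Summarise the attached meeting notes into clear action points.",
    source="Use only the meeting notes document - no external or personal data.",
    expectations="Plain language, bullet points, no names or identifiable details.",
)
print(prompt)
```

The point is not the code - it's that the same four questions you'd ask before acting in social work translate directly into the parts of a well-structured prompt.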
What surprised me wasn’t that it worked.
It’s that it felt familiar.
Why this matters
This isn’t about becoming “good at AI.”
It’s about recognising that many of the skills we already use - especially in people-focused, high-stakes professions - are directly transferable.
We don’t need to reinvent how we work.
We need to adapt what we already do well - within the boundaries and safeguards that define our roles, particularly in regulated environments shaped by GDPR and the EU AI Act.
If you work in social care, healthcare, or any role where safety and clarity matter - you’re probably already closer to effective AI use than you think.
What I’ll share in the NMTC session
The talk is built around a real example: MICO, a Prompt Coach agent I built for healthcare workers using Microsoft 365 Copilot.
MICO doesn’t do the work for users.
It helps them write better prompts and use Copilot effectively, based on their own scenarios (with guardrails and boundaries) - so they can get outputs they can actually use.
Read more about MICO here.
Through that, I’ll cover:
- The CGSE framework - a structured, practical way to prompt that works in real environments
- Guardrail-first design - not just what AI can do, but what it should not do
- A reframe of technical competence - and why frontline experience belongs in AI
A practical next step
To support this approach, I’ve created two simple guides - one for beginners and one advanced - that you can apply in everyday use cases, from email triage to meetings and data insights.
Download it here (Advanced)
Feel free to try it and share your experience. I’m always open to feedback.
Join the session
On Friday 24 April
At 08:30–09:30 CEST
Where: Online (NORWEGIAN)
🎟 Free
Register here: NMTC-REGISTER-LINK
If you’ve ever thought “I’m not technical enough for AI” - this session is for you.
I’d love to see you there.


