Earlier this year, I helped an organisation finalise their AI policy.
It took about three months: a working group, two rounds of feedback, a meeting that probably could have been an email, and a final sign-off from the executive team. The document was thorough. The process was genuine. Everyone was proud of it.
Then it went into the shared drive. And nothing really changed.
I’ve seen this pattern more times than I’d like. A policy gets written, often well written, and then quietly shelved. The team goes back to doing what they were already doing: a mix of cautious non-use, enthusiastic shadow use (staff quietly using AI tools outside any policy or oversight), and genuine confusion about where the lines are.
If that sounds familiar, you’re not alone. I talked about it recently when I sat down with the team at Ticker News to discuss AI governance for small business, and the response told me it had hit a nerve.
Operating in the Grey Zone
What I said in that interview, and what I believe more strongly every month, is that many Australian businesses are operating in the grey zone. Not because they’re careless, but because the conversation hasn’t happened yet.
A 2025 University of Melbourne and KPMG study of 48,000 people across 47 countries found that 59% of Australian employees are already using AI at work, but the governance and training needed to support responsible use are lagging well behind.
So there’s a strange situation playing out in workplaces right now. AI is being used. Policies are being written. And the two aren’t quite connecting.
What a Policy Actually Needs to Cover
A policy can cover all the right ground, including data privacy, acceptable use, accountability and approved tools, and still fail if it doesn’t connect to how work actually gets done.
The policies that stick are the ones that keep it simple. When I spoke with Ticker News, I talked about four things every AI policy needs:
- Scope – who does it apply to?
- Tools – what’s approved, and what isn’t?
- Boundaries – what data is strictly off-limits?
- Accountability – who is responsible for the final output?
That’s it. Not a 50-page legal document. A clear, practical starting point your team can actually use.
From Policy to Practice
The gap I keep seeing isn’t in the quality of the policy. It’s in the step that comes after.
Writing the policy is the beginning of the work, not the end of it. The real work is helping your team understand what it means for their specific role, on a specific Tuesday afternoon, when they’re deciding whether to paste a client email into ChatGPT to draft a reply.
That’s where training comes in. Not a one-off “here’s how to use AI” session, but practical, policy-linked training that gives your team the confidence to make good decisions in the moments that matter. It’s the difference between handing someone a road map and teaching them to drive.
Where to Start
If your business is still in the grey zone, with no policy, no real conversation and just quiet experimentation, the first step doesn’t have to be a three-month process.
Start with the four questions above. Talk to your team about what they’re already using. Be honest about the risks, and equally honest about the opportunity. If you’d like to go deeper on AI policy, I’ve written about Why an AI Policy is Different to Your IT Policy and The 3 Critical Questions to Ask When Developing an AI Policy.
About the Author
Nat Bell is the founder of Bell Training Solutions, providing consulting and training services that help organisations build practical, effective learning and development strategies. She supports leaders and teams with clear frameworks, policy guidance and capability development so training initiatives lead to real behaviour change and practical outcomes.
And if you’d like help turning that conversation into something practical – a policy that actually gets used, and training that makes it stick – that’s exactly what I do. Contact Nat Bell here.
