Question 1: Who is the policy protecting?
Last week, I ran a training session with some community college leaders in New South Wales. Many had AI policies, but there was a huge lightbulb moment when we realised these policies only covered staff, not students. This showed us that a truly useful AI policy isn’t just for employees; it needs to protect everyone who interacts with your organisation’s systems and data.
For community colleges, that includes students. For your business, that could mean customers, contractors, or even the suppliers you rely on. It’s essential to expand your thinking to include all of your stakeholders.
Question 2: How do you manage ‘shadow AI’ and the tools you can’t see?
It’s not practical to ban every new tool that emerges. People will find workarounds, and that’s the messy reality of shadow AI. Instead, your policy should act as a guide people actually refer to. Provide clear guardrails about which tools can be used for what purpose, and which ones are off-limits. The best tools will work seamlessly with your existing infrastructure and environment.
Give people a clear, simple process for evaluating and introducing new AI tools. The technology is moving fast, and tools that weren’t relevant six months ago may now be crucial to your workflow. That might be a process for requesting a specific tool, a pilot program, or a designated team to evaluate new options. An actionable policy encourages transparency and makes smart decisions the easiest choice.
Question 3: How do you make sure that people know what to do?
A policy is useless if it sits in a file that no one reads. To be effective, it needs to be a living guide that’s easy to access and understand. Think about different formats: a one-page checklist, a flowchart, or a quick-start guide.
Once the policy is drafted, how will you get people to actually use it? A short workshop, a lunch-and-learn, or an open discussion forum might work for your team. It’s important to spend time having the difficult conversations: people have a lot of fear around AI, and it’s up to us humans to create a supportive environment where everyone feels included and informed. I’m also a big fan of working with champions or a small pilot group to test and embed new practices.
Want to see if your organisation is really ready for AI? Contact Nat Bell to find out more.
Nat Bell is a Melbourne-based corporate trainer and facilitator with over 20 years’ experience across Australia, Asia and Europe. Her specialty is designing and delivering practical, engaging training around systems, technology adoption, soft skills, and AI, with a style focused on clarity and “sticky” learning outcomes (i.e. making ideas memorable and applicable).
