What it is

Shadow AI is when employees use AI tools at work without their company's knowledge or approval. It's the AI equivalent of shadow IT — remember when everyone started using Dropbox before the IT department said they could? Same energy, bigger consequences.

It's happening everywhere. People are quietly pasting client data into ChatGPT to write reports faster, running meeting notes through AI summarisers, or using image generators for presentations. Most of them aren't being malicious — they're just trying to get their work done more efficiently. But they're doing it outside any official policy, which means nobody's tracking what data is going where, what's being stored, or what might end up in a training dataset.

Why it matters for your job

This one cuts both ways. If you're using AI tools without telling anyone, you might be brilliant at your job right now — but you're also one data breach away from a very uncomfortable conversation with HR. Companies are starting to crack down, and "I didn't know I couldn't use it" won't fly as an excuse.

On the flip side, shadow AI reveals something important: official tools and processes aren't keeping up with what people actually need. If half your team is secretly using ChatGPT, that's a sign your company needs an AI policy, not a witch hunt. The people who get caught using shadow AI responsibly often end up being asked to help shape the official approach — which is a much better career outcome than a disciplinary.

What to do about it

If you're using AI tools unofficially, don't wait to be found out: push for a proper policy instead. Volunteer to help draft guidelines, suggest a pilot programme, or simply ask your manager what the rules are. Being the person who brings shadow AI into the light is far better than being the person who gets caught with client data in a chatbot's training set.

This glossary is part of the full guide, along with role-specific playbooks and redundancy rights cheat sheets.