
How to Use ChatGPT at Work Without Getting Fired

A financial analyst at a major bank pasted confidential earnings data into ChatGPT. A lawyer used it to write a brief and it invented case citations that didn't exist. A Samsung engineer uploaded proprietary source code.

These are cautionary tales you've probably heard. They're real. They're also entirely avoidable if you use the tiniest amount of common sense.

I consult with companies on AI adoption. The number one thing that holds organisations back isn't the technology. It's fear. Fear that employees will do something catastrophically stupid with AI tools. And the number one thing that holds employees back is the same fear, just aimed at themselves.

So let me give you the rules. They're not complicated.

Rule 1: Check your company's AI policy

Before anything else. This is the boring but essential step.

Many companies now have explicit policies about AI use. Some say "go for it." Some say "only use these approved tools." Some say "absolutely not." Some say nothing at all, which is its own kind of problem.

If there's a policy, read it. Follow it. If there isn't one, ask your manager or IT department. Get the answer in writing (email). "I'd like to use ChatGPT/Claude for [specific work task]. Is that within our acceptable use policy?" This protects you. If they say yes, you're covered. If they say no, you've avoided a problem.

If you can't find anyone to give you a clear answer, use your judgment and follow the remaining rules carefully.

Rule 2: Never paste confidential data into public AI tools

This is the big one. The one that gets people fired.

Public AI tools (the free versions of ChatGPT, Claude, etc.) may use your inputs for training. Even if they say they don't, the data still travels through their servers. This means anything you paste in could theoretically be accessed by the AI company or, in a security breach, by others.

Things you should never put into a public AI tool:

  • Customer data (names, emails, financial details)
  • Employee personal information
  • Financial results before they're public
  • Proprietary code or trade secrets
  • Confidential strategy documents
  • Legal documents
  • Anything covered by NDA

Things that are generally fine:

  • Publicly available information
  • Generic templates and frameworks
  • Your own writing that doesn't contain confidential details
  • General questions about processes or concepts

If your company has an enterprise AI subscription (ChatGPT Enterprise, Claude for Business, etc.), the data handling is different and usually safer. But verify this with your IT team rather than assuming.

This topic is covered in detail in AI-Proof Your Job: The 30-Day Survival Checklist ($7).

Rule 3: Always review the output

AI gets things wrong. Confidently wrong. Impressively wrong.

That lawyer who submitted AI-generated case citations didn't get in trouble for using AI. He got in trouble for not checking the output. The citations didn't exist. He submitted them to a court. The judge was not amused.

Treat AI output as a first draft from a keen but unreliable intern. It might be brilliant. It might be nonsensical. You need to check before you send it anywhere.

This is especially important for:

  • Facts, statistics, and specific claims (AI hallucinates these regularly)
  • Legal or regulatory information
  • Financial calculations
  • Anything that will be seen by clients or customers
  • Anything with your name on it

Rule 4: Be transparent about using AI

This one depends on your workplace culture, but my general advice is: don't hide it.

If you used AI to help draft a report, and someone asks, say so. "I used Claude to create a first draft and then edited it extensively" is a perfectly professional thing to say. In most workplaces in 2026, this is expected behaviour, not a confession.

Hiding your AI use and getting caught feels much worse than being upfront about it. And in some contexts (academic work, regulated industries, client deliverables), there may be disclosure requirements.

The exception: don't be the person who mentions AI in every conversation. "I asked ChatGPT and it said..." as a conversation starter gets old fast. Use the tools. Deliver good work. Let the quality speak for itself.

Rule 5: Don't automate yourself into irrelevance

This is the subtle one.

If AI can do 90% of your job and you tell everyone about it... you've just made a case for eliminating your position. I'm not saying hide your efficiency gains. I'm saying be strategic about how you frame them.

"I've used AI tools to reduce the time I spend on routine reports from 10 hours to 2 hours, which has freed me up to focus on [higher-value work that requires human judgment]" is very different from "I basically just paste everything into ChatGPT now."

The first framing makes you more valuable. The second makes you replaceable.

Use the time AI saves you to do work that AI can't do. Analysis that requires context. Relationships that require empathy. Decisions that require judgment. Strategy that requires experience. That's your job security.

Common scenarios

Writing emails. Fine, as long as you don't paste confidential information into the prompt and you review the output. See our email management guide for more.

Summarising documents. Fine for non-confidential documents. For confidential ones, use your company's enterprise AI tool if available.

Creating presentations. Generally fine. AI presentation tools can save hours.

Data analysis. Be careful. If the data contains personal information or confidential business data, don't upload it to public tools. Enterprise versions are safer, or you can anonymise the data first.
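If you do go the anonymisation route, even a simple scripted pass beats doing it by hand. Here's a minimal sketch in Python (standard library only) of scrubbing a CSV before pasting it into a public AI tool. The column names and placeholder format are my assumptions for illustration, not a standard; real anonymisation may need more than this (phone numbers, addresses, account IDs), so treat it as a starting point, not a guarantee.

```python
import csv
import io
import re

# Matches most email addresses; a rough pattern, not a full RFC parser.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymise(csv_text, drop_columns=("name",)):
    """Replace the named columns with stable placeholders and mask
    any email addresses found anywhere in the data."""
    reader = csv.DictReader(io.StringIO(csv_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for i, row in enumerate(reader, start=1):
        for col in drop_columns:
            if col in row:
                row[col] = f"PERSON_{i}"  # placeholder keeps rows distinguishable
        # Mask emails in every remaining field
        row = {k: EMAIL_RE.sub("[email]", v) for k, v in row.items()}
        writer.writerow(row)
    return out.getvalue()

raw = "name,email,spend\nJane Doe,jane@example.com,120\n"
print(anonymise(raw))
```

The numeric data survives intact, so the AI can still analyse it, while the identifying fields never leave your machine.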

Code and technical work. Check with your IT team. Many companies have specific policies about AI code generation, especially around intellectual property.

Client deliverables. Check your client contracts. Some explicitly prohibit AI-generated content. Others require disclosure. When in doubt, ask.

We've covered which tools to use, and how to get better results from them, in separate guides.

The one thing to do today: find out if your company has an AI use policy. If it does, read it. If it doesn't, email your manager asking about it. That one email protects you and signals that you're thinking about AI responsibly, which is exactly the vibe you want.

Get the 30-Day Checklist — $7

Instant download. 30-day money-back guarantee.

Includes 7 role-specific playbooks, AI glossary, and redundancy rights cheat sheets for US & UK.

Not ready to buy? That’s fine.

Get 3 free tips from the guide. No spam.