
AI and Cybersecurity Analysts: What's Actually Happening and What to Do

The honest assessment

Cybersecurity is in a peculiar position with AI. It's being transformed by it on both sides of the equation. Attackers are using AI to create more sophisticated phishing campaigns, generate polymorphic malware, and automate reconnaissance. Defenders are using AI to detect anomalies, correlate threat intelligence, and respond to incidents faster. It's an arms race, and it's accelerating.

On the defensive side, AI is already embedded in most modern security tools. CrowdStrike's Falcon platform uses AI for endpoint detection and response. Darktrace uses unsupervised machine learning to identify unusual network behaviour. Microsoft's Sentinel SIEM uses AI to correlate security events across an enterprise. Palo Alto Networks, Splunk, and SentinelOne all have AI at their core. These aren't future developments. They're the current state of the industry. A security operations centre without AI assistance is already at a significant disadvantage.

What AI handles well in cybersecurity: log analysis at scale (parsing millions of events that no human could read), initial triage of security alerts (reducing the false positive burden that burns out SOC analysts), malware classification, vulnerability prioritisation based on exploitability and asset criticality, and automated responses to known threat patterns. IBM's 2024 Cost of a Data Breach report found that organisations using security AI extensively saved an average of $2.2 million per breach and detected breaches 108 days faster.

What AI doesn't handle well: understanding the business context of a security decision. Determining whether an unusual access pattern is a genuine threat or a legitimate business need. Investigating a sophisticated attack that uses novel techniques. Communicating a security incident to non-technical executives. Making the judgement call about whether to shut down a production system during a potential breach. The strategic, investigative, and communicative elements of cybersecurity are where humans remain essential.

Your exposure level: Medium

Medium. And this one comes with an important asterisk.

If you're a tier-1 SOC analyst whose primary job is monitoring dashboards and triaging alerts, your exposure is higher than medium. That's the layer of cybersecurity work most directly automated by AI. SOAR (Security Orchestration, Automation and Response) platforms are explicitly designed to handle the work that junior SOC analysts do. Gartner has noted that AI-driven automation is reshaping the traditional SOC staffing model, reducing the need for tier-1 analysts while increasing demand for tier-2 and tier-3 specialists.

If you're doing threat hunting, incident response, penetration testing, security architecture, or GRC (governance, risk, and compliance) work, your exposure is lower. These roles require creative thinking, business context understanding, and the ability to deal with novel situations — exactly the areas where AI assists but doesn't replace.

The other factor that keeps cybersecurity at medium rather than high is demand. The global cybersecurity workforce gap stands at roughly 3.4 million unfilled positions according to ISC2. The UK alone has an estimated shortfall of over 11,000 cybersecurity professionals. When demand massively outstrips supply, AI tends to augment rather than replace. Organisations need more security capability, not less, and AI is a way to get more capability out of the people they have. The threat isn't joblessness — it's role evolution. The skills that mattered three years ago aren't the same skills that'll matter three years from now.

The 90-day action plan

  1. This week: use AI to accelerate your current work. Try using ChatGPT or Claude to help analyse log snippets, write detection rules, draft incident reports, or explain a CVE in business terms. If you're writing YARA rules or Sigma rules, use AI to help generate first drafts. See where it saves you time and where it hallucinates rubbish.
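To make the "first draft" workflow concrete: the snippet below is the kind of quick triage script an AI assistant can draft in seconds and that you then verify against real data before trusting. The log format, field positions, and threshold here are illustrative assumptions for the sketch, not a standard.

```python
from collections import Counter

# Illustrative syslog-style auth lines (the format is an assumption for this sketch).
SAMPLE_LOGS = [
    "Jan 10 09:14:02 host sshd[101]: Failed password for root from 203.0.113.5",
    "Jan 10 09:14:04 host sshd[101]: Failed password for root from 203.0.113.5",
    "Jan 10 09:14:06 host sshd[101]: Failed password for admin from 203.0.113.5",
    "Jan 10 09:15:00 host sshd[102]: Accepted password for alice from 198.51.100.7",
    "Jan 10 09:16:30 host sshd[103]: Failed password for bob from 198.51.100.9",
]

def failed_login_sources(lines, threshold=3):
    """Count failed-login attempts per source IP; flag IPs at or over the threshold."""
    counts = Counter()
    for line in lines:
        if "Failed password" in line:
            # The source IP is the token after the final "from" in this log format.
            counts[line.rsplit("from", 1)[1].strip()] += 1
    return {ip: n for ip, n in counts.items() if n >= threshold}

print(failed_login_sources(SAMPLE_LOGS))  # → {'203.0.113.5': 3}
```

The point of the exercise isn't the script itself. It's learning to read AI-generated drafts like this critically: does the parsing match your actual log format, and is the threshold sensible for your environment?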

  2. Week two: understand the AI tools in your security stack. Most security platforms you use daily have AI features you might not be fully utilising. Dig into the AI-powered capabilities of your SIEM, EDR, and vulnerability management tools. Understanding how these work — including their limitations — makes you a better analyst.

  3. By day 30: develop a threat hunting practice. If you're not already doing proactive threat hunting, start. Formulate hypotheses about potential threats to your organisation, use AI tools to help gather and correlate data, and investigate. This is the creative, human-driven side of security that AI supports but can't initiate. Document your hunts and findings.
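One way to structure a hypothesis-driven hunt in code: baseline what's normal in your own telemetry, then surface the outliers for manual review. The event data and process names below are hypothetical; a real hunt would pull parent-child process pairs from your SIEM or EDR.

```python
from collections import Counter

# Hypothetical (parent_process, child_process) pairs pulled from endpoint telemetry.
events = [
    ("explorer.exe", "chrome.exe"),
    ("explorer.exe", "chrome.exe"),
    ("explorer.exe", "outlook.exe"),
    ("winword.exe", "powershell.exe"),  # unusual: an Office app spawning a shell
    ("services.exe", "svchost.exe"),
    ("services.exe", "svchost.exe"),
]

def rare_pairs(pairs, max_count=1):
    """Hunt hypothesis: process lineages seen at most `max_count` times in the
    baseline period are anomalous enough to warrant a manual look."""
    counts = Counter(pairs)
    return sorted(pair for pair, n in counts.items() if n <= max_count)

for parent, child in rare_pairs(events):
    print(f"investigate: {parent} -> {child}")
# → investigate: explorer.exe -> outlook.exe
# → investigate: winword.exe -> powershell.exe
```

Rarity alone isn't evidence of compromise — the human step is deciding which rare pairs (like `winword.exe` spawning `powershell.exe`) match a plausible attack technique and which are benign oddities.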

  4. By day 45: build your incident response depth. Volunteer for incident response exercises. If your organisation does tabletop exercises, participate actively. If it doesn't, suggest one. The ability to lead or contribute to a complex incident response under pressure is a distinctly human skill that compounds with experience.

  5. By day 60: learn how AI is being used by attackers. Study AI-generated phishing campaigns, deepfake social engineering, and AI-assisted exploit development. Understanding the offensive applications of AI makes you a better defender. This is knowledge that differentiates a cybersecurity professional from a dashboard watcher.

  6. By day 75: pick a specialisation to deepen. Cloud security. Application security. OT/ICS security. Threat intelligence. Security architecture. The generalist SOC analyst role is the most automatable. The specialist roles are where the career growth and the AI resilience live. Start pursuing a certification or hands-on project in your chosen area.

  7. By day 90: position yourself for the evolved security team. Have a conversation about where you want to go. "AI is handling more of the tier-1 triage work, which is great. I've been developing my skills in [threat hunting / incident response / cloud security / security architecture] and I'd like to move into a role that focuses there. Here's what I've done to prepare." You're proposing your evolution, not waiting for it to be imposed on you.

The full playbook is in AI Proof Your Job, including specific tool recommendations and a step-by-step 30-day plan. Get it for $7.

AI tools you should be using this week

  • ChatGPT for Work — Use it to quickly explain vulnerabilities and CVEs in business language for reports to management. Also useful for generating first drafts of detection rules, analysing code snippets for security issues, and brainstorming attack scenarios during threat modelling sessions. It's a surprisingly effective sparring partner for thinking through attack vectors.

  • Claude for Work — Claude handles long, complex technical documents well. Paste in a security policy and ask it to identify gaps. Give it a log excerpt and ask for analysis. Use it to help write post-incident reports that need to be both technically accurate and readable by executives. Its tendency to be methodical makes it particularly good for security work.

  • Microsoft Copilot for Work — If you're in a Microsoft environment, Copilot integrates with the security stack. Microsoft Security Copilot specifically helps with incident investigation, threat intelligence summary, and KQL query generation for Sentinel. Even the general Copilot helps with the reporting and documentation burden that takes up a surprising amount of security analysts' time.

  • Perplexity for Research — When you need to quickly research a new threat, vulnerability, or attack technique, Perplexity can search current sources and provide referenced information. Useful for staying current with the rapidly evolving threat landscape without spending hours on manual research.

What to say in meetings

When leadership discusses AI security tools: "These tools are essential for keeping pace with the volume of threats we face. What I'd recommend is that we think about how we restructure the team to take advantage of the time AI frees up. More threat hunting, more proactive security assessments, more purple teaming. The AI handles the volume. We handle the complexity."

If colleagues worry about SOC automation: "The alert triage grind is going to be increasingly automated, yes. That's honestly a good thing — it's the work that burns people out. The opportunity is to move up the stack into threat hunting, incident response, and security engineering. Those roles need people, and they're more interesting work."

In performance reviews: "I've been using AI tools to improve my efficiency in [specific area] and developing my skills in [specialisation]. Here are the threat hunts I've conducted and the findings. Here's how I contributed to incident response. I want to continue moving into the areas where human expertise matters most."

If the worst happens

If you're made redundant from a cybersecurity role — which, given the talent shortage, would be unusual — your transferable skills are in very high demand. You can think adversarially and analytically. You understand complex systems. You can investigate problems methodically. You can communicate technical risk to non-technical stakeholders. These skills transfer to risk management, compliance, IT consulting, fraud investigation, and business continuity roles.

Adjacent roles to consider: security consultant (the freelance cybersecurity market is strong), GRC analyst, IT risk manager, cloud security architect, DevSecOps engineer, security product specialist at a vendor, or security awareness training specialist. Many cybersecurity professionals also transition into broader IT leadership roles because security experience provides a comprehensive understanding of technology infrastructure.

I want to be straight with you. Cybersecurity is one of the better places to be during the AI transformation. The fundamental dynamic — that there are more threats than there are people to defend against them — isn't changing. AI makes each defender more effective, but it also makes each attacker more capable. The result is ongoing demand for skilled humans who can think creatively about security problems. Your job isn't at risk. But the specific tasks you do today might be very different from the tasks you'll do in three years. Lean into the change rather than clinging to the current version of the role.

Get the 30-Day Checklist — $7

Instant download. 30-day money-back guarantee.

Includes 7 role-specific playbooks, AI glossary, and redundancy rights cheat sheets for US & UK.

Not ready to buy? That’s fine.

Get 3 free tips from the guide. No spam.