It’s true that generative artificial intelligence, also known as GenAI or GAI, is making phishing emails and other social engineering tricks harder to spot. But the bad guys haven’t won the cyber wars yet.
In the spirit of Newton’s Third Law (for every action there is an equal and opposite reaction), GenAI is becoming a powerful force for good, too. The sooner organizations understand how automating certain security tasks with AI adds layers of cyber protection, the safer they will be.
“I don’t think that anyone can say with authority that either side is pulling away,” said Michael Tanji, director of cybersecurity for MxD, the National Center for Cybersecurity in Manufacturing as designated by the U.S. Department of Defense. “The good news is that both sides are pushing the other to innovate, which is generally how you produce superior outcomes.”
In a recent MxD article, Tanji warned that AI will increase the frequency and success rate of cyberattacks. In this interview, edited for space, Tanji makes the case for an aggressive AI defense. “Basically, as quickly as the bad guys can apply AI to a problem, the good guys can respond,” he said.
Q: We know AI can create a better phishing email. How can it defend against cyberattacks?
MT: The use of AI for cyber defense, by and large, is about doing things better and faster than humans can. Here are four examples:
- AI systems trained on what “normal” network traffic and user behavior look like can quickly detect deviations and help recognize new or novel threats (see the anomaly-detection sketch after this list).
- Machine learning (ML) can be used to analyze file characteristics, code patterns, and email content to counter malware, as well as to detect phishing attempts by analyzing tone, content, and sender irregularities (a toy phishing classifier also follows this list).
- AI can automate security threat responses by blocking access, isolating compromised systems, and applying patches.
- AI-powered vulnerability scanning can quickly identify weaknesses in systems and applications. More importantly, it can prioritize vulnerabilities.
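To make the first bullet concrete, here is a minimal anomaly-detection sketch using scikit-learn’s IsolationForest, trained on synthetic “normal” connection features and then asked to score a clearly unusual connection. The feature set, numbers, and behavior are illustrative assumptions, not a description of any particular product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Toy features per connection: bytes_sent, bytes_received, duration_sec, dest_port
normal_traffic = np.column_stack([
    rng.normal(2_000, 400, 5_000),    # bytes_sent
    rng.normal(8_000, 1_500, 5_000),  # bytes_received
    rng.normal(1.5, 0.4, 5_000),      # duration_sec
    rng.choice([80, 443], 5_000),     # dest_port
])

# Fit only on traffic believed to be benign ("what normal looks like").
detector = IsolationForest(contamination=0.01, random_state=7).fit(normal_traffic)

# A new connection: huge upload, long duration, unusual destination port.
suspicious = np.array([[250_000, 1_200, 90.0, 4444]])
print(detector.predict(suspicious))        # -1 means the model flags it as anomalous
print(detector.score_samples(suspicious))  # lower score = more anomalous
```

In practice the flagged connections would feed an analyst queue or an automated response playbook rather than a print statement.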
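And a toy sketch of the second bullet: classifying email text as phishing or legitimate with a simple TF-IDF model. The training examples here are invented; a real system would need a large labeled corpus plus sender and header features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples; 1 = phishing, 0 = legitimate.
emails = [
    "Your account is locked. Verify your password immediately at this link",
    "Urgent: wire transfer needed today, reply with bank details",
    "Agenda attached for Thursday's quality review meeting",
    "Reminder: timesheets are due Friday by end of day",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

test = ["Please confirm your password to avoid account suspension"]
print(model.predict(test))        # likely [1] on this toy data
print(model.predict_proba(test))  # class probabilities
```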
Q: Does AI replace other defenses, or add to them?
MT: AI is not a silver bullet and requires careful planning, disciplined execution, and ongoing management. Everything starts with having clear objectives, and it ends with knowing what “success” looks like. AI should be a solution to a specific problem, not pixie dust you sprinkle over everything to make your problems magically disappear. Quantifiable goals will help measure return on investment and refine your AI strategy. Start with pilot projects on specific, well-defined use cases. This allows you to learn, refine your approach, and demonstrate value before a full-scale deployment.
Q: What is AI’s strength in terms of defensive activity?
MT: AI significantly amplifies the effectiveness, speed, and scale of human security efforts. This is particularly crucial in a landscape where threats are escalating in sophistication and volume. Humans simply cannot keep up with the processing of data at combat speed. AI and machine learning algorithms can analyze terabytes of data in real time, surfacing patterns and anomalies.
AI is also very good at identifying emerging trends and predicting where and how attacks might occur. This comes from its ability to analyze historical attack data and current threat intelligence. AI is also a superior companion to humans because it can automate initial responses like blocking malicious IPs, triggering patch deployment, and isolating compromised systems, freeing up humans for more complex tasks.
AI also doesn’t give in to “alert fatigue.” It can filter out benign alerts and false positives, letting people focus on truly critical threats.
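As a rough sketch of that triage idea, the snippet below trains a classifier on synthetic historical alerts labeled as true or false positives, then escalates only high-risk new alerts to analysts. The feature names and the 0.8 cutoff are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(seed=3)

# Toy features per alert, each scored 0-9: severity, failed_logins, data_volume, geo_risk
X_hist = rng.integers(0, 10, size=(2_000, 4))
# Invented labeling rule for the synthetic history: high combined signal = real incident.
y_hist = (X_hist.sum(axis=1) > 22).astype(int)

triage = RandomForestClassifier(n_estimators=100, random_state=3).fit(X_hist, y_hist)

# Score a batch of incoming alerts and surface only the high-risk ones.
incoming = rng.integers(0, 10, size=(50, 4))
risk = triage.predict_proba(incoming)[:, 1]

for alert, score in zip(incoming, risk):
    if score >= 0.8:  # everything below this stays logged, not shown to analysts
        print(f"escalate to analyst: features={alert.tolist()} risk={score:.2f}")
```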
Q: Are there weaknesses to an AI-based defense?
MT: While AI offers tremendous advantages, it can also introduce challenges and vulnerabilities that could hinder cyber defense. These include:
- Data Poisoning: Attackers can inject malicious data into an AI model’s training dataset, causing the model to learn incorrect patterns or biases (a toy illustration follows this list).
- Evasion Attacks: Attackers can craft inputs that are imperceptible to humans but designed to trick an AI model into misclassifying malicious content as legitimate, allowing attacks to bypass AI defenses.
- Model Inversion Attacks: Attackers might try to reconstruct sensitive training data from an AI model’s outputs, potentially exposing confidential information.
- Difficulty in Understanding Decisions: Many advanced AI models are “black boxes,” meaning it’s hard to understand why they made a particular decision. This can hinder human analysts from validating AI’s findings, troubleshooting errors, or building trust in the system.
- Dulling Human Skills: Excessive reliance on AI automation could lead to a decline in humans’ critical thinking, problem-solving skills, and intuitive threat-hunting capabilities.
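To illustrate the first of these risks, the toy example below poisons a training set by relabeling most “malicious” samples as benign, then compares how often the clean and poisoned models catch malicious test samples. The data and attack are synthetic and only meant to show the effect.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic "malicious vs. benign" dataset.
X, y = make_classification(n_samples=2_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Poisoning: the attacker relabels 60% of malicious training samples as benign.
rng = np.random.default_rng(0)
poisoned_y = y_train.copy()
malicious_idx = np.where(poisoned_y == 1)[0]
flip = rng.choice(malicious_idx, size=int(0.6 * len(malicious_idx)), replace=False)
poisoned_y[flip] = 0
poisoned = LogisticRegression(max_iter=1_000).fit(X_train, poisoned_y)

# The poisoned model typically misses far more malicious samples.
print("clean model, malicious recall:   ", recall_score(y_test, clean.predict(X_test)))
print("poisoned model, malicious recall:", recall_score(y_test, poisoned.predict(X_test)))
```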
Q: Can you recommend some specific AI programs to use?
MT: It wouldn’t be appropriate to endorse specific programs, but there are different types. General-purpose large language models (LLMs) have broad knowledge of a wide range of topics, including security. But general-purpose models also have weaknesses: they lack a deep, nuanced understanding of specialized cybersecurity contexts. Specialized LLMs are fine-tuned on massive, domain-specific cybersecurity datasets, so they develop a highly accurate and nuanced understanding of cyber threats and defensive techniques. Generally, they outperform general-purpose LLMs on specialized tasks due to their focused training, and they are less likely to generate incorrect or irrelevant information (AI “hallucinations”). Open-source base models can also be fine-tuned and deployed on-premise, which addresses data privacy concerns for sensitive security data.
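As a minimal sketch of that last point, the snippet below runs a text classifier entirely on-premise with the open-source Hugging Face transformers library, so sensitive email content never leaves the local machine. The model name is a hypothetical placeholder standing in for a cybersecurity-tuned model, not a real published model.

```python
from transformers import pipeline

# Runs locally; no email content leaves the machine.
classifier = pipeline(
    "text-classification",
    model="example-org/phishing-email-detector",  # hypothetical fine-tuned model name
    device=-1,  # CPU
)

email = "Your mailbox quota is full. Log in here within 24 hours to keep access."
print(classifier(email))  # e.g., [{'label': 'PHISHING', 'score': 0.97}]
```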
Visit the MxD Virtual Training Center for information on cybersecurity workforce training resources.