How AI Is Reshaping Cyber Risk for Manufacturers

Cybersecurity is becoming a bigger challenge for manufacturers as hackers gain access to the latest artificial intelligence models and use them to infiltrate computer systems.

Some experts warn of a potential “bugmageddon,” as AI tools have advanced to the point that they can find system vulnerabilities that went undetected for years, or write malware with little or no human assistance.

AI firms, businesses, and the federal government are scrambling to manage this evolving risk landscape. In April, Anthropic said it would limit the release of its Mythos AI model and engage a select number of companies to test new defenses, citing concerns that the business world isn’t ready to cope with what AI can do in the wrong hands.

“AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities,” Anthropic explained.

OpenAI is taking similar precautions with the rollout of its new version of ChatGPT. Both Anthropic and OpenAI want to give certain trusted users a chance to road-test the new models and make certain the companies aren’t unleashing a hacker’s delight.

Cybersecurity experts aren’t in a panic, but they are rallying to understand the stakes and formulate defensive strategies for managing the new era of AI. Anthropic CEO Dario Amodei suggested the best defense against malicious AI use may well be defensive AI, since the same capability cuts both ways: a model that excels at coding also excels at finding vulnerabilities. “We haven’t trained it specifically to be good at cyber. We trained it to be good at code,” Amodei said, “but as a side effect of being good at code, it’s also good at cyber.”

We asked for guidance on these AI developments from Michael Tanji, director of cybersecurity for MxD, the National Center for Cybersecurity in Manufacturing as designated by the U.S. Department of War. Answers have been edited for clarity and space.

Q.: How quickly is AI changing the cybersecurity threat landscape? 

Michael Tanji: Whether or not any given AI model is sufficiently advanced to merit the kind of attention being raised, there is no denying that AI, when used for offensive purposes, operates at a speed and scale beyond human capability. This means that threat actors are going to find vulnerabilities faster and in greater numbers than even a few months ago. Not all of those vulnerabilities will be exploitable, but even a small percentage represents meaningful risk. As AI-enabled attacks increase, both the frequency and impact of exploitation are likely to grow.

Q.: Does this require a change in posture for companies? 

M.T.: Yes. At this point, if you’re not looking at ways to integrate AI into your defensive activities, you’re borderline negligent. You don’t have to rip out your current investment in cybersecurity and start shopping, but you need to identify where AI can augment current tools, workforce capabilities, and future procurement decisions. The machines are here, and if you’re not able to work as fast or thoroughly as they do, you’re going to lose. 

Q.: What’s an appropriate defensive posture? 

M.T.: Not everyone has the budget or human resources to take full advantage of AI-centric cybersecurity tools, and that’s OK. Start by making sure your security team knows how to write good prompts. Provide them with the ability to build basic tools and agents to help deal with the deluge of problems that are going to come down the line. You can do a lot with a locally run model on commodity hardware (the better to keep your proprietary information offline and out of someone else’s model). 
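The prompt-writing advice above can be made concrete. Below is a minimal, hypothetical sketch of the kind of reusable triage prompt a small security team might standardize on; the function name, prompt wording, and example log line are all illustrative, not any specific product’s API. The resulting string would be sent to a locally hosted model to keep proprietary logs off third-party services.

```python
# Hypothetical sketch: a reusable, structured security-triage prompt
# for a locally run assistant model. All names and wording here are
# illustrative assumptions, not a specific tool's interface.

def build_triage_prompt(log_excerpt: str, asset: str = "unknown host") -> str:
    """Wrap a raw log excerpt in a structured security-triage prompt."""
    return (
        "You are assisting a manufacturing security team.\n"
        f"Asset under review: {asset}\n"
        "Task: review the log excerpt below and report:\n"
        "1. Any indicators of compromise.\n"
        "2. Severity (low/medium/high) with a one-line justification.\n"
        "3. A recommended next step for a small security team.\n"
        "Log excerpt:\n"
        "---\n"
        f"{log_excerpt}\n"
        "---\n"
        "If the evidence is inconclusive, say so explicitly."
    )

if __name__ == "__main__":
    # Example log line is fabricated for illustration (RFC 5737 test IP).
    prompt = build_triage_prompt(
        "Jan 10 03:14 sshd: 48 failed logins for root from 203.0.113.7",
        asset="plant-floor HMI gateway",
    )
    print(prompt)
    # This string would then go to an on-premises inference endpoint,
    # keeping proprietary data out of someone else's model.
```

Standardizing prompts this way gives junior analysts consistent, reviewable output from whatever local model the team runs, without any new procurement.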

Q.: Who currently has the advantage — the bad guys or the good guys? And by what margin? 

M.T.: The bad guys have the edge, but it’s less a function of them being first or necessarily “better” than defenders. Attackers aren’t dealing with the constraints that defenders are. They don’t have a board to report to. They don’t have to fight with the CFO for budget. They don’t have to deal with user resistance to change.

It will take time for existing security solutions to adapt to AI-driven threats, and many organizations will need to incrementally enhance legacy systems rather than replace them outright. This is probably not going to be adequate to the task, so the suffering will continue until the cybersecurity marketplace has shaken out. 

Independent cybersecurity researchers and industry stakeholders are actively working to better understand what these models can and cannot do, along with the broader security implications. This work will help inform more effective defenses over time.
