Global cybersecurity agencies have issued the first unified guidance on applying artificial intelligence (AI) within critical infrastructure, signaling a major shift from theoretical debate to practical guardrails for safety and reliability.
The release of joint guidance on Principles for the Secure Integration of Artificial Intelligence in Operational Technology marks a meaningful milestone for critical infrastructure security because major global cybersecurity agencies, including CISA, the FBI, the NSA, the Australian Signals Directorate’s Australian Cyber Security Centre, and other partners, have aligned on a shared direction. As AI adoption accelerates across operational environments, this document moves us from theory to practice. It acknowledges AI’s promise while making clear that it also “introduces significant risks—such as operational technology (OT) process models drifting over time or safety-process bypasses” that operators must actively manage to ensure reliability.
The guidance draws a firm distinction between safety and security, emphasizing that large language models should not be used to make safety decisions in OT environments. It urges operators to adopt push-based architectures with strong boundaries, maintain human-in-the-loop oversight, and demand transparency from vendors embedding AI into industrial systems. It frames AI as an adviser rather than a controller, reinforcing that resilience depends on skilled operators, clear validation procedures, and visibility into how AI models interact with the physical world.
A central contribution of this guidance is its clear distinction between safety and security in the AI era. Protecting the integrity and availability of systems is not the same as preventing physical harm, and AI complicates this relationship in ways many CISOs are now expected to navigate. The guidance recognizes that AI’s non-deterministic nature can lead to unpredictable behaviors or hallucinations. This is why it draws an explicit line: “AI such as LLMs almost certainly should not be used to make safety decisions for OT environments.”
The message is not a rejection of innovation; it is a pragmatic call to preserve the safety foundations that operational technology depends on. In a water treatment facility, for example, a generative model might misinterpret sensor anomalies and recommend a change that, if acted on, inadvertently alters chemical dosing. Even if security controls are intact, the safety implications can be immediate and physical.
The architecture recommendations extend that safety-first mindset, clearly mapping where AI belongs within the OT hierarchy. Predictive machine learning can strengthen operations at levels 0 through 3, for example by forecasting pump failures from vibration patterns or identifying anomalies in turbine exhaust temperatures. Large language models, by contrast, are better suited to business functions at levels 4 and 5, where they assist with documentation, work order generation, or regulatory reporting.
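As a rough sketch of the kind of lower-level predictive model the guidance endorses, the example below trains an unsupervised anomaly detector on baseline pump vibration data and flags outliers for human review. The readings, thresholds, and library choice (scikit-learn's IsolationForest) are illustrative assumptions, not drawn from the guidance itself.

```python
# Hypothetical sketch of a level 0-3 predictive-maintenance check.
# Sensor values and thresholds are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=7)

# Simulated historical vibration readings (mm/s RMS) from a healthy pump.
healthy_vibration = rng.normal(loc=2.0, scale=0.3, size=(500, 1))

# Fit an unsupervised anomaly detector on the healthy baseline.
model = IsolationForest(contamination=0.01, random_state=7)
model.fit(healthy_vibration)

# New readings: the last value simulates a developing bearing fault.
new_readings = np.array([[2.1], [1.9], [2.2], [4.8]])
flags = model.predict(new_readings)  # -1 = anomaly, 1 = normal

for reading, flag in zip(new_readings.ravel(), flags):
    status = "ANOMALY - schedule inspection" if flag == -1 else "normal"
    print(f"vibration {reading:.1f} mm/s RMS -> {status}")
```

Note that the model only flags conditions for operators to investigate; it issues no control commands, consistent with the adviser-not-controller framing.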
The guidance also cautions against introducing new attack vectors. To reduce inbound risk, agencies recommend “push-based or brokered architectures that move required features or summaries out of OT without granting persistent inbound access.” This pattern prevents scenarios where an adversary could exploit a cloud-hosted AI system to pivot directly into OT networks. In other words, AI should act as an adviser rather than a controller, supporting operations without becoming an unseen entry point for adversaries.
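A minimal sketch of that push-based pattern, assuming a hypothetical broker endpoint: the OT side initiates every connection outbound and exports only aggregated summaries, so no service inside OT listens for inbound requests.

```python
# Hypothetical sketch of an outbound-only, push-based export from OT.
# The broker URL and payload schema are illustrative assumptions.
import json
import urllib.request

BROKER_URL = "https://broker.example.com/api/ot-summaries"  # hypothetical endpoint

def push_summary(summary: dict) -> int:
    """Push a pre-approved summary to the DMZ broker; OT initiates the connection."""
    body = json.dumps(summary).encode("utf-8")
    req = urllib.request.Request(
        BROKER_URL,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

# Only aggregated, non-sensitive features leave the OT network;
# raw process data and control interfaces stay behind the boundary.
summary = {
    "site": "plant-01",
    "window": "2024-01-01T00:00/PT1H",
    "pump_vibration_mean_mm_s": 2.1,
    "alarms_last_hour": 0,
}
# push_summary(summary)  # commented out: the example endpoint is not real
```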
Importantly, the document looks beyond systems to the humans who operate them. It warns that “heavy reliance on AI may cause OT personnel to lose manual skills needed for managing systems during AI failures or system outages.” For critical infrastructure, this is not theoretical: many power plant and water utility operators are already losing skilled workers to retirement. The guidance encourages organizations to train operators not only on how to use AI, but also on how to challenge it. For example, personnel should be able to validate AI outputs against alternative sensors and observations to confirm that digital recommendations align with physical reality: a compressor temperature anomaly flagged by an ML model should still be correlated with on-floor readings before operators take corrective action.
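A toy example of that validation gate, with hypothetical sensor names and a hypothetical 5% tolerance: the ML flag is escalated only after an independent on-floor reading agrees.

```python
# Hypothetical sketch of the cross-validation step described above:
# an ML-flagged compressor temperature anomaly is escalated only once
# an independent reading agrees. Names, readings, and the tolerance
# are illustrative.

def confirmed_by_independent_reading(
    ml_reading_c: float,
    floor_reading_c: float,
    tolerance: float = 0.05,
) -> bool:
    """Return True if the on-floor gauge agrees with the ML model's input."""
    return abs(ml_reading_c - floor_reading_c) <= tolerance * floor_reading_c

ml_flagged_temp = 112.0  # value the ML model scored as anomalous
gauge_temp = 111.2       # operator-collected reading from a local gauge

if confirmed_by_independent_reading(ml_flagged_temp, gauge_temp):
    print("Anomaly confirmed by independent sensor: open a work order.")
else:
    print("Readings disagree: treat as a possible model or sensor fault "
          "and keep a human investigating before any corrective action.")
```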
The guidance also recommends that critical infrastructure owners develop strong procurement strategies that take AI into account. Organizations are encouraged to “demand transparency and security considerations from OT vendors regarding how AI technologies are embedded into their products.” This includes requiring SBOMs (or AIBOMs) that specify where models are sourced and hosted, and ensuring that vendors disclose whether they are training those models on an operator’s sensitive data.
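One hypothetical shape such an AIBOM-style disclosure might take, with illustrative field names rather than any standardized schema:

```python
# Hypothetical sketch of the vendor disclosures an AIBOM-style procurement
# requirement might capture. Field names and values are illustrative.
aibom_entry = {
    "product": "Example HMI Suite 4.2",
    "embedded_models": [
        {
            "name": "anomaly-detector-v3",
            "source": "vendor-trained",        # where the model comes from
            "hosted": "on-premises",           # on-prem vs. vendor cloud
            "trains_on_customer_data": False,  # the disclosure the guidance calls for
            "update_channel": "signed quarterly release",
        }
    ],
}

# A simple procurement gate: flag products that train on operator data
# or host models outside agreed boundaries.
for model in aibom_entry["embedded_models"]:
    if model["trains_on_customer_data"] or model["hosted"] != "on-premises":
        print(f"Flag for review: {model['name']} needs vendor clarification.")
    else:
        print(f"{model['name']}: disclosures meet procurement requirements.")
```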
Many CISOs are finding that AI-enabled features are being added quietly into third-party software and SaaS without clear disclosure. This guidance supports a shift toward secure by demand, giving operators the clarity to make informed choices before AI features are embedded deep into their environments.
Finally, the document reaffirms that accountability sits with people. It reminds us that “ultimately, humans are responsible for functional safety.” The recommended “human in the loop” model ensures that AI informs decisions but does not replace human judgment. This approach mitigates challenges such as “model drift” and avoids the risk of blindly executing “black box” outputs in environments where the stakes include real human safety. For example, as refinery equipment ages, model drift can cause a machine learning model to predict failure thresholds that are too low, making it critical for operators to regularly validate the model over the asset’s lifetime.
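A minimal sketch of that periodic validation, with illustrative numbers: compare the model's recent predictions against observed outcomes and flag the model for revalidation when the error grows past an agreed threshold.

```python
# Hypothetical sketch of a periodic drift check. The threshold and
# data are illustrative, not drawn from the guidance.
import numpy as np

DRIFT_THRESHOLD_C = 3.0  # max tolerated mean absolute error, degrees C

def check_drift(predicted: np.ndarray, observed: np.ndarray) -> bool:
    """Return True if mean absolute error exceeds the drift threshold."""
    mae = float(np.mean(np.abs(predicted - observed)))
    print(f"mean absolute error over window: {mae:.2f} C")
    return mae > DRIFT_THRESHOLD_C

# As equipment ages, a model trained on younger assets lags reality.
predicted_failure_temps = np.array([95.0, 96.0, 94.5, 95.5])
observed_failure_temps = np.array([99.0, 100.5, 98.0, 101.0])

if check_drift(predicted_failure_temps, observed_failure_temps):
    print("Drift detected: revalidate the model before trusting its outputs.")
```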
The path forward is challenging, but there is cause for optimism. This shared global guidance gives operators a clearer map, and it reinforces that resilience grows when humans and machines work in partnership. A practical next step is to review where AI already touches your OT landscape, then establish or refresh validation procedures that keep operators engaged and confident. You can also begin early conversations with vendors about transparency requirements, which helps set expectations before new capabilities are deployed. In a landscape shaped by rapid innovation, these proactive steps will help keep safety and trust at the center of progress.
Diana Kelley is the chief information security officer at Noma Security. She has also held senior leadership roles at major technology and cybersecurity companies, including Cybersecurity Field CTO at Microsoft, Global Executive Security Advisor at IBM Security, and general manager at Symantec.