The “move fast and break things” era of AI implementation is officially hitting a regulatory wall—and for business leaders, that is actually good news.
The European Telecommunications Standards Institute (ETSI) has released EN 304 223, the first globally applicable European Standard for AI cybersecurity. If your organization is embedding machine learning into core operations, this is your new baseline.
Here is the executive summary on what this standard changes for your tech stack and governance.
Solving the “Who Owns the Risk?” Dilemma
One of the biggest friction points in enterprise AI has been accountability. When a model fails or gets breached, who is responsible?
This standard forces clarity by defining three technical roles:
- Developers
- System Operators
- Data Custodians
The Catch for Founders: These lines blur easily. If your financial firm fine-tunes an open-source model for internal use, you are likely classified as both a Developer and a System Operator. This dual status triggers stricter obligations—you must secure the infrastructure, document the training data provenance, and audit the design.
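To make the dual-role point concrete, here is a minimal sketch of how overlapping obligations might be tracked in practice. The role names follow the standard; the obligation wording, function names, and the Python structure are illustrative shorthand, not text from the document.

```python
# Illustrative sketch: map the roles an organization holds to the obligations
# each role triggers. Obligation descriptions are paraphrased shorthand.

ROLE_OBLIGATIONS = {
    "developer": [
        "document training data provenance",
        "perform threat modelling during design",
        "publish cryptographic hashes for released models",
    ],
    "system_operator": [
        "secure the deployment infrastructure",
        "monitor logs for drift and anomalous behaviour",
        "re-test after significant model updates",
    ],
    "data_custodian": [
        "control access to training and inference data",
        "record data sources and acquisition timestamps",
    ],
}

def obligations_for(roles: set) -> list:
    """Return the combined, de-duplicated obligations for a set of roles."""
    seen, combined = set(), []
    for role in sorted(roles):
        for duty in ROLE_OBLIGATIONS.get(role, []):
            if duty not in seen:
                seen.add(duty)
                combined.append(duty)
    return combined

# A firm that fine-tunes an open-source model for internal use typically
# holds both roles, so it inherits the union of both obligation sets.
print(obligations_for({"developer", "system_operator"}))
```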
Security by Design, Not as an Afterthought
The standard makes it clear: security cannot be a patch applied at deployment. It must be baked into the design phase.
This introduces Threat Modelling as a mandatory step. You need to anticipate AI-native attacks like data poisoning (where an attacker corrupts training data to skew the model’s outputs) or model obfuscation. Furthermore, developers are now required to restrict functionality. If your model can process images but you only use it for text, you must disable the image processing capabilities to reduce the attack surface.
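As a rough illustration of functionality restriction, the sketch below shows an application-layer gateway that only accepts the capabilities a deployment actually needs. The class and capability names are hypothetical; the standard does not prescribe any particular implementation.

```python
# Minimal sketch: capabilities the deployment does not need are disabled
# explicitly, so unexpected input types are rejected before reaching the model.

from dataclasses import dataclass, field

@dataclass
class ModelGateway:
    # Only the modalities this deployment actually uses are enabled.
    enabled_capabilities: set = field(default_factory=lambda: {"text"})

    def handle(self, payload: dict) -> str:
        kind = payload.get("type")
        if kind not in self.enabled_capabilities:
            # Reject out-of-scope modalities instead of silently passing them
            # to a multimodal model that happens to support them.
            raise PermissionError(f"capability '{kind}' is disabled in this deployment")
        return f"processing {kind} input"

gateway = ModelGateway(enabled_capabilities={"text"})
print(gateway.handle({"type": "text", "content": "quarterly summary"}))
# gateway.handle({"type": "image", "content": b"..."})  # raises PermissionError
```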
No More “Black Box” Procurement
Supply chain security is a major focus. The days of accepting opaque AI solutions from third-party vendors are over. Under the new standard:
- Transparency is Mandatory: If you use components that aren’t well-documented, you must justify and document the risk.
- Verification: Developers must provide cryptographic hashes to verify model authenticity.
- Audit Trails: For publicly sourced training data (common in LLMs), the source URL and acquisition timestamp must be documented (see the sketch after this list).
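Here is a hedged sketch of what verification and audit trails might look like in code: checking a vendor-supplied model against a published SHA-256 hash, and logging the source URL and acquisition timestamp for a public dataset. The file names, URL, and expected hash are placeholders, not values from the standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming to limit memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder: the hash published by the model's developer.
EXPECTED_HASH = "0" * 64

if sha256_of("vendor_model.bin") != EXPECTED_HASH:
    raise RuntimeError("model artifact does not match the published hash")

# Provenance entry for a publicly sourced training dataset: source URL plus
# acquisition timestamp, appended to an auditable log.
provenance_entry = {
    "source_url": "https://example.com/public-corpus.tar.gz",
    "acquired_at": datetime.now(timezone.utc).isoformat(),
    "sha256": sha256_of("public-corpus.tar.gz"),
}
with open("training_data_provenance.jsonl", "a") as log:
    log.write(json.dumps(provenance_entry) + "\n")
```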
The Lifecycle Shift: Retraining is Redeployment
Perhaps the most critical operational change is how updates are handled. The standard dictates that major updates—such as retraining a model on new data—count as a new deployment.
This means renewed security testing and evaluation every time you significantly update your model. Additionally, monitoring is no longer just about uptime; it’s about security. You must analyze logs for “data drift”: gradual shifts in model behavior that can signal degradation or a breach, as sketched below.
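As one possible way to operationalize drift monitoring, the sketch below compares recent prediction scores against a baseline window using the Population Stability Index (PSI). The threshold, window sizes, and score distributions are assumptions to tune per system, not values taken from the standard.

```python
import numpy as np

def psi(baseline: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of prediction scores."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    recent_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Avoid division by zero / log(0) for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    recent_pct = np.clip(recent_pct, 1e-6, None)
    return float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))

# Synthetic example: scores captured at deployment time vs. the last week.
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5_000)
recent_scores = rng.beta(2.6, 5, size=5_000)

score = psi(baseline_scores, recent_scores)
if score > 0.2:  # a common rule-of-thumb threshold for "significant" drift
    print(f"PSI={score:.3f}: investigate possible drift, poisoning, or abuse")
else:
    print(f"PSI={score:.3f}: distributions look stable")
```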
Why This Matters Now
This isn’t just red tape. It aligns directly with the upcoming EU AI Act and provides a defensible position for future audits. By enforcing clear roles and asset inventories (you can’t secure shadow AI you don’t know exists), this standard provides a structure for safer, more sustainable innovation.