For most founders and business leaders, the race to implement AI usually centers on one thing: performance. We obsess over model accuracy, latency, and compute costs. But if you look at how major institutions like Standard Chartered are actually deploying these systems, the biggest bottleneck isn’t the technology itself—it’s the logistics of permission.
Before a single line of code is written or a model is trained, the most critical questions are now purely operational. Can this data be used? Where is it legally allowed to sit? Who goes to jail if the machine gets it wrong?
If you are building for scale, especially across borders, here is why your AI strategy is about to become a geography strategy.
The Geography of Data
The “build once, deploy everywhere” mentality is hitting a hard wall. David Hardoon, Global Head of AI Enablement at Standard Chartered, notes that privacy functions are now the starting point of AI regulation. This isn’t just compliance paperwork; it dictates your entire software architecture.
In a global market, data sovereignty laws—rules about where data must physically reside—are forcing companies to rethink centralization. You might want a single, powerful central brain for your operations, but local regulations might demand you fragment that brain into smaller, local instances.
For business owners, this means your tech stack might need to be layered: a shared global foundation for general tasks, combined with strictly localized deployments for sensitive data. You aren’t just engineering for efficiency anymore; you are engineering for jurisdiction.
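To make the layered idea concrete, here is a minimal sketch of jurisdiction-aware routing: a record is sent to a local deployment when a residency rule applies, and to the shared global foundation otherwise. The region codes, rule table, and deployment names are illustrative assumptions, not real regulatory requirements.

```python
# Hypothetical residency rules: which deployment may process data
# from each region. "global" means no localization requirement is
# assumed for that region in this sketch.
RESIDENCY_RULES = {
    "SG": "sg-local",   # assume data must stay in-country
    "EU": "eu-local",   # assume data must stay in-region
    "US": "global",     # assume the shared foundation is permitted
}

def route_deployment(record: dict) -> str:
    """Pick the deployment allowed to process this record.

    Fails loudly for unknown regions: in a jurisdiction-first
    architecture, "no rule" should block processing, not default
    to the central brain.
    """
    region = record.get("region")
    deployment = RESIDENCY_RULES.get(region)
    if deployment is None:
        raise ValueError(f"No residency rule for region {region!r}")
    return deployment
```

The design choice worth noting is the failure mode: an unrecognized region raises an error rather than falling through to the global deployment, so a gap in the rule table cannot silently violate a residency constraint.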
The “Pilot vs. Production” Trap
There is a massive gap between a controlled pilot and a live environment. In a pilot, your data is clean, contained, and understood. In production, especially in complex industries like finance, you are pulling from dozens of upstream systems with different schemas and quality issues.
When you add privacy constraints to this mix—such as the inability to use real customer data for training—you face a difficult trade-off. You may have to rely on anonymized data, which protects privacy but can significantly slow down development or degrade model performance. Recognizing this trade-off early is vital for setting realistic ROI expectations.
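One common middle ground is pseudonymization: replacing direct identifiers with one-way hashes before data reaches a training pipeline. A minimal sketch, with illustrative field names; note that hashing identifiers is pseudonymization, not full anonymization, since records can still be re-identified from the remaining attributes.

```python
import hashlib

def pseudonymize(record: dict,
                 pii_fields=("name", "email", "account_id")) -> dict:
    """Replace direct identifiers with stable one-way hashes.

    Stable hashes keep records joinable across tables while removing
    the identifier itself. Field names are illustrative assumptions;
    indirect identifiers (e.g. postcode + birthdate) are NOT handled
    here, which is exactly why this is weaker than true anonymization.
    """
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hashlib.sha256(str(out[field]).encode()).hexdigest()
            out[field] = digest[:12]  # truncated for readability
    return out
```

The trade-off in the text shows up directly here: the more aggressively you strip or hash fields, the less signal the model has to learn from.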
Accountability Can’t Be Automated
Here is the hard truth: Automation speeds up processes, but it does not remove responsibility. As Hardoon points out, even if you use external vendors or “black box” models, the accountability remains internal.
This reinforces the need for explainability. If an AI makes a decision that impacts a client or a regulatory obligation, a human needs to be able to explain why. The technology is only as robust as the people overseeing it. Training your team on data boundaries is arguably more important than training the model itself. The best firewall against privacy breaches isn’t software; it’s a staff member who understands what data can and cannot be touched.
The Way Forward: Standardization
How do you move fast without breaking things in this regulatory minefield? The answer is standardization.
Instead of treating every AI rollout as a unique bespoke project, successful organizations are creating pre-approved templates and architectures. By codifying rules around data residency and access into reusable components, teams can deploy faster without having to re-litigate privacy rules every single time.
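What "codifying rules into reusable components" can look like in practice is policy-as-code: pre-approved templates with residency and data-access rules baked in, which a rollout either satisfies or is rejected by. A minimal sketch under assumed template names and data classes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeploymentTemplate:
    """A pre-approved rollout template. Because it is frozen, teams
    reuse it as-is instead of re-negotiating privacy terms per
    project. All field values below are illustrative assumptions."""
    name: str
    data_region: str              # e.g. "any" or "in-country"
    allowed_data_classes: frozenset
    requires_human_review: bool

APPROVED_TEMPLATES = {
    "global-foundation": DeploymentTemplate(
        name="global-foundation",
        data_region="any",
        allowed_data_classes=frozenset({"public", "internal"}),
        requires_human_review=False,
    ),
    "local-sensitive": DeploymentTemplate(
        name="local-sensitive",
        data_region="in-country",
        allowed_data_classes=frozenset({"public", "internal", "customer"}),
        requires_human_review=True,
    ),
}

def check_rollout(template_name: str, data_class: str) -> bool:
    """True if this data class is permitted under the chosen template."""
    template = APPROVED_TEMPLATES[template_name]
    return data_class in template.allowed_data_classes
```

The point of the pattern is that the privacy review happens once, when a template is approved; each subsequent rollout is a cheap membership check rather than a bespoke legal negotiation.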
As we move AI into everyday operations, privacy is no longer just a hurdle to jump over. It is the defining constraint that shapes what we build, where we build it, and ultimately, whether it works.