I’ve been pondering the curious contradictions in British tech policy lately. It’s almost as if the left hand doesn’t know what the right hand is doing.
The government has recently pushed through legislation requiring tech companies to break end-to-end encryption — effectively demanding backdoors into secure communications. Meanwhile, they’re proudly announcing plans for substantial investments in artificial intelligence development.
This paradox becomes even more perplexing when you look at the planned job cuts across the public sector. NHS England is being disbanded, and it’s not yet clear how that will impact the former NHS Digital roles it recently subsumed. Whitehall, meanwhile, readies itself for a cull of 10,000 civil service positions. One must wonder: who exactly will be implementing this grand AI strategy if they’re showing digital professionals the door?
The encryption laws reveal a troubling gap in understanding. End-to-end encryption isn't a feature that can be selectively disabled for government access whilst remaining intact for everyone else. Its entire security rests on the fact that only the sender and the recipient hold the keys; any mechanism that grants a third party access is, by definition, a weakness available to every attacker. Once compromised, it's compromised for all.
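To make that concrete, here is a minimal sketch of what end-to-end encryption looks like in practice, using the PyNaCl library (my choice of library, names and message purely for illustration, not a reference to any particular messaging service):

```python
# Minimal sketch of end-to-end encryption with PyNaCl (pip install pynacl).
# Library choice and names are illustrative assumptions only.
from nacl.public import PrivateKey, Box

# Each party generates a key pair; the private keys never leave their devices.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts a message that only Bob's private key can decrypt.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"Meet at noon")

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_private, alice_private.public_key)
assert receiving_box.decrypt(ciphertext) == b"Meet at noon"

# There is no third key anywhere in this exchange. Granting anyone else access
# would mean escrowing the private keys or encrypting every message to an
# extra recipient key, and either mechanism weakens the protection for every
# user, not just the targets of an investigation.
```

The point of the sketch is simply that "access for the authorities" cannot be bolted on; it has to be designed in, and anything designed in is there for everyone.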
This raises an uncomfortable question: if our policymakers don’t fully grasp the nature of encryption, how can we trust their judgment on more complex technologies like AI?
The timing of these public sector cuts could hardly be more at odds with the government's stated technological ambitions. AI implementation requires expertise, not just in the technology itself, but in the specific contexts where it will be deployed.
The cuts to NHS digital roles are particularly concerning. Healthcare is one of the most promising yet sensitive areas for AI: the data is deeply personal, the decisions can be life-or-death, and deployment depends on people who understand both the clinical context and the systems. Shedding that expertise seems counterproductive to any serious AI strategy in healthcare.
I can’t help but feel there’s a fundamental disconnect between the government’s technological aspirations and its practical understanding of what those aspirations require.
Building digital and AI capabilities isn’t just about funding announcements or policy papers. It requires investment in human capital — the very people who understand how to integrate these technologies effectively into public services.
If Britain truly wants to be a technological leader, we need policies that reflect a genuine understanding of the technologies they target. We need a workforce strategy that aligns with our technological ambitions, not one that contradicts them.
I’m not suggesting that every minister needs to be a coding expert or a cybersecurity specialist. But surely we should expect our collective decision-making to be informed by technical expertise?
Without addressing these contradictions, I fear we’re setting ourselves up for disappointment. Bold announcements about AI investment mean little if we lack the expertise to implement them effectively, or if we simultaneously undermine digital security through poorly conceived legislation.
As citizens, perhaps we should be asking more questions about the technological literacy behind our policies. Some might say that our digital future depends on it.
Last modified: 27 March 2025