Does digital ID have risks even if it’s ZK-wrapped?

The following is a guest post and opinion from Evin McMullen, Co-founder & CEO at Billions.Network.

ZK Won’t Save Us: Why Digital Identity Must Stay Plural

Zero-knowledge (ZK)-wrapped identity was lauded as a silver bullet for presenting yourself online: verifiable, privacy-preserving proof of personhood with no need to trust governments, platforms, or biometric databases.

But as Ethereum co-founder Vitalik Buterin argued in June, encryption alone can’t fix “architecture-level” coercion. When identity becomes rigid, centralized, and one-size-fits-all, pseudonymity dies and coercion becomes inevitable.

The risks Vitalik raised in his recent post are not just theoretical. They are the inevitable outcome of systems that try to impose a single, fixed identity on a pluralistic internet. One account per person sounds fair—until it becomes mandatory. Add ZK proofs to the mix, and all you’ve done is encrypt the shackles.

Digital identity is becoming an important issue for governments, as shown by the G7 commissioning a report last year to inform policy, and the EU’s summit in Berlin in June to assess its regulatory framework for electronic identities and trust services.

The Limits of ZK Alone

Zero-knowledge proofs allow users to prove statements—age, residency, uniqueness—without revealing underlying personal data by using cryptographic methods. It’s like showing a sealed envelope that everyone can confirm holds the right answer, without anyone ever opening it. In theory, this should support privacy. But as Vitalik rightly argues, the problem is not what the proofs hide, but what the system assumes.
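The sealed-envelope idea can be made concrete with a Schnorr proof of knowledge, one of the simplest real zero-knowledge protocols: the prover convinces anyone that they know a secret exponent x behind a public value y, without ever revealing x. A minimal sketch with toy parameters (deliberately tiny numbers for illustration, not the schemes production ZK-ID systems actually use):

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge: prove you know x with y = g^x (mod p)
# without revealing x. Demo-sized parameters ONLY -- never use in production.
p, q, g = 23, 11, 2          # p = 2q + 1; g generates the order-q subgroup

def prove(x: int, y: int) -> tuple[int, int]:
    """Produce a non-interactive proof (Fiat-Shamir) of knowledge of x."""
    r = secrets.randbelow(q)              # one-time blinding nonce
    t = pow(g, r, p)                      # commitment
    c = int(hashlib.sha256(f"{t}:{y}".encode()).hexdigest(), 16) % q  # challenge
    s = (r + c * x) % q                   # response; x itself stays hidden
    return t, s

def verify(y: int, t: int, s: int) -> bool:
    c = int(hashlib.sha256(f"{t}:{y}".encode()).hexdigest(), 16) % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p  # g^s == t * y^c ?

x = 7                        # the secret "credential"
y = pow(g, x, p)             # the public value everyone can see
t, s = prove(x, y)
print(verify(y, t, s))       # True: statement proven, x never revealed
```

The verifier learns only that the statement holds (the check g^s = t · y^c), never the secret itself, which is exactly the property ZK-ID schemes wrap around claims like age or uniqueness.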

Most ZK-ID schemes rely on a core design principle: one identity per person. That might make sense for voting or preventing bots. But in real life, people operate across many social contexts—work, family, online, etc.—that don’t map neatly onto a single ID. Enforcing a one-person, one-ID model, even with ZK wrappers, creates a brittle system that’s easy to weaponize.

In such a system, coercion becomes a trivial matter. Employers, governments, or apps can demand that a user reveal all their linked identities. Pseudonymity becomes impossible, especially when IDs are reused across applications or anchored to immutable credentials. Even the illusion of unlinkability breaks down under pressure from machine learning, correlation attacks, or good old-fashioned power.

What began as a privacy tool becomes surveillance infrastructure, but with a nicer interface.

Identity Isn’t the Problem; Uniformity Is

ZK-wrapped systems don’t fail because ZK is flawed; they fail because the surrounding architecture clings to an outdated concept of identity that’s singular, static, and centralized. That’s not how humans operate, and it’s not how the internet works.

The alternative is pluralism. Instead of one global ID that follows you everywhere, imagine a model where you appear differently to each app, platform, or community—provably human and trustworthy, but contextually unique. Your credentials are local, not universal. You’re verifiable without being traceable. And no one, not even you, can be coerced into revealing everything about yourself.

This isn’t a fantasy. It’s already working.

Profile DIDs and the Case for Context-Based Identity

One approach already in production uses per-app Decentralized Identifiers (DIDs) so that even colluding platforms can’t link a user’s personas.

It’s a structural fix, not just a cryptographic one. Instead of building global registries that bind people to a single identity, we can anchor trust in pluralistic models featuring decentralized reputation graphs, selective disclosure, unlinkable credentials, and ZK proofs that enforce contextual verification rather than static identifiers.
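The unlinkability property of per-app identifiers can be sketched with a simple key-derivation analogy (a toy illustration of the idea, not the actual DID method any production system uses): one master secret held by the user yields a distinct, stable persona per app, and no collection of apps can correlate them without that secret.

```python
import hmac
import hashlib

# Sketch: derive a distinct, unlinkable persona identifier per app
# from a single user-held master secret, via HMAC-SHA256.
# All names here are illustrative, not a real DID scheme.
def persona_id(master_secret: bytes, app_id: str) -> str:
    """Stable per-app identifier; reveals nothing about other personas."""
    return hmac.new(master_secret, app_id.encode(), hashlib.sha256).hexdigest()

master = b"user-master-secret"            # known only to the user
work = persona_id(master, "app:work-forum")
social = persona_id(master, "app:social-net")

# Each app sees a different but stable identifier; without the master
# secret, even colluding apps cannot link the two personas.
print(work != social)                                 # True
print(work == persona_id(master, "app:work-forum"))   # True: deterministic
```

The design choice mirrors the structural fix described above: trust is anchored per context, so revealing one persona discloses nothing about the others.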

This system is already used by over 9,000 projects, including TikTok and Deutsche Bank. And it’s not just for humans. The same framework powers Billions Network’s DeepTrust initiative, extending verifiable identity and reputation to AI agents, a necessity in an internet increasingly shaped by autonomous systems.

Don’t Fight Surveillance With Better Locks

Some see identity as a necessary evil—a way to prevent misinformation or spam. But good identity design doesn’t require surveillance. It just requires context.

We don’t need one ID to rule them all. We need systems that let people prove what’s needed, when needed, without turning every interaction into a permanent record. Want to prove you’re not a bot? Fine. Prove uniqueness. Want to prove you’re over 18? Great. Do it without handing over your birthdate, postcode, and biometric template.

Crucially, we must resist the urge to equate compliance with centralization. Systems that use coercive biometrics, rigid registries, or global databases to enforce identity may look efficient. But they introduce potentially catastrophic risks: irreversible breaches, discrimination, exclusion, and even geopolitical misuse. Biometric data can’t be rotated. Static IDs can’t be revoked. Centralized models cannot be made safe; they can only be made obsolete.

Vitalik Is Right, But the Future Is Already Here

Vitalik’s essay warns of a future where identity systems, even when built on the best cryptography, accidentally entrench the very harms they set out to prevent. We share that concern. But we also believe there’s a way forward: one that doesn’t compromise on privacy, enforce uniformity, or turn people into nodes on a global registry.

That path is pluralistic and decentralized, and it’s already live.

Let’s not waste our best cryptographic tools on defending broken ideas. Instead, let’s build the systems that match how people actually live and how we want the internet to work.

The future of digital identity doesn’t need to be universal. It simply needs to be human.
