Governments are dismantling the conditions for private thought. This is a practical guide for anyone who believes that what you read, say, and organise should remain your own business — not a dataset.
Privacy is not the refuge of those with something to hide. It is the condition under which people think freely, speak honestly, organise collectively, and live without fear. When that condition erodes, something irreversible happens to society — not all at once, but incrementally, in the small self-censorships that accumulate into silence.
Governments are moving to dismantle end-to-end encryption, require identity verification for ordinary online activity, and build the infrastructure of mass surveillance under the banner of safety. The banner changes. The infrastructure remains. Iran can cut 80 million people off from the open internet with a single order. That capability was not built overnight — it was assembled quietly, justified incrementally, normalised before anyone noticed the door closing.
The playbook is now being applied in liberal democracies. The UK proposes mandatory age verification — effectively a digital ID for the entire internet — framed as child protection. Similar legislation moves in the EU and US. Once infrastructure exists to verify identity before access, the question of what it restricts becomes purely political, answered by whoever holds power next. VPN bans are already being discussed as the logical next step.
The people who need privacy most are not criminals. Ross Ulbricht spent a decade in federal prison. The developers of Tornado Cash — a privacy tool, open-source code, mathematics — were prosecuted for writing software. Code is speech. Mathematics is not a crime. When governments jail developers for publishing privacy tools, the message to every engineer is explicit: build nothing that limits our surveillance.
This site does not offer paranoia. It offers tools. Practical, well-tested, increasingly necessary. Use what you need. Share what helps. Privacy is not a problem to solve once — it is a habit to maintain. And the time to build that habit is before the option is removed.
Privacy is not a feature of a free society — it is a precondition of one. Without the ability to think, communicate, and organise privately, there is no free press, no political opposition, no religious freedom, no space for dissent. Every authoritarian government in history has understood this, which is why surveillance is always one of its first tools.
On a personal level, privacy means you control your own narrative. It means an employer cannot see your medical history, an abuser cannot track your location, and a government cannot know which books you read or who you vote for. The erosion of privacy is not an abstract threat — it is the concrete narrowing of the space in which you live as a free person.
Yes — and so would cars, phones, cash, and the postal service. Every general-purpose technology can be misused. The question is never whether a tool can enable harm, but whether the harm it enables outweighs the good it provides to the overwhelming majority who use it legitimately.
Privacy tools are used daily by journalists protecting sources, doctors safeguarding patient data, domestic abuse survivors hiding from abusers, activists in authoritarian states, lawyers maintaining client confidentiality, and billions of ordinary people who simply want to browse the internet without being profiled. Dismantling privacy infrastructure to catch criminals is like banning locks because burglars can also pick them. It leaves everyone else defenceless.
Criminals with sufficient motivation and resources will always find workarounds. The only lasting effect of weakening privacy tools is to leave law-abiding people exposed.
This argument sounds reasonable and is almost entirely wrong. Three reasons:
First, what counts as "wrong" changes. Homosexuality was illegal in most of the world within living memory. Political dissent is criminalised in dozens of countries today. Helping a woman obtain an abortion is a criminal offence in several US states. The definition of wrongdoing shifts constantly — and surveillance infrastructure built under one government is inherited by the next.
Second, you are already being judged by data you don't control. Insurance premiums, loan approvals, job applications, and border crossings are all increasingly decided by algorithmic analysis of data you didn't know was being collected. "Nothing to hide" doesn't protect you from being misclassified, flagged, or discriminated against based on inferred attributes.
Third, privacy is not only about you. When you communicate with others, you make privacy decisions on their behalf. Journalists, activists, and people in dangerous situations depend on their contacts maintaining operational security. Your indifference to privacy can cost someone else dearly.
As Edward Snowden put it: arguing that you don't care about privacy because you have nothing to hide is no different from saying you don't care about free speech because you have nothing to say.
This is the most persuasive framing available for mandatory identity verification — and it deserves a serious answer, because the concern about children is genuine. But the proposed solution has four deep problems:
It doesn't work. Age verification systems are bypassed trivially. Roblox's AI-powered face-scan system was defeated within days. Discord's verification was circumvented with freely available browser tools. A determined teenager faces a fifteen-minute inconvenience, while everyone else loses their anonymity permanently. (Wired; PC Gamer)
It creates catastrophic security risks. Any system that verifies identity at scale becomes a database of who is online, when, and what they access. These databases are breached — it is not a question of if but when. A breach at a major identity verification provider in early 2026 exposed millions of users' personal data. (Fortune; New America)
It is inaccurate in both directions. Biometric age estimation misclassifies adults as minors and minors as adults at significant rates, as trials for Australia's under-16 social media ban demonstrated. Adults lose access to legal content; the systems they were designed to protect children from remain porous. (The Guardian)
It pushes children somewhere worse. When mainstream platforms become inaccessible, young people don't stop going online — they migrate to less visible spaces with fewer rules, less moderation, and no accountability. The platforms most likely to harm children are exactly the ones least likely to implement compliant age verification. Locking the front door of the internet does not protect anyone; it redirects them to the back alleys.
Child safety is a real goal. Mandatory internet-wide identity verification is not the tool — it is the pretext. The infrastructure it creates survives long after the stated justification is forgotten.
This is the "going dark" argument, and law enforcement agencies have deployed it persistently for thirty years. It is also, on examination, almost entirely wrong.
Weakening encryption doesn't weaken it only for terrorists. Encryption is mathematics. You cannot create a backdoor that only the good guys can use — a vulnerability by definition can be exploited by anyone who finds it. Every security researcher, cryptographer, and intelligence agency that has examined this proposal has reached the same conclusion: there is no such thing as a secure backdoor. When the FBI pushed Apple for a key to the San Bernardino iPhone, the cryptography community was unanimous in saying so.
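The structural problem with key escrow can be shown with a deliberately toy sketch. The cipher below (SHA-256 in counter mode) and the escrow scheme are made up for illustration — not a real protocol, and not how any actual "lawful access" proposal is specified — but the failure mode is the same in all of them: the master key is just data, and data leaks.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy stream cipher keystream (SHA-256 in counter mode). NOT real crypto."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, message: bytes) -> bytes:
    return bytes(m ^ k for m, k in zip(message, keystream(key, len(message))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

# Hypothetical "lawful access" design: alongside the recipient's copy,
# every message is also encrypted to one escrow key held by the authorities.
ESCROW_KEY = secrets.token_bytes(32)

def send(recipient_key: bytes, message: bytes) -> tuple[bytes, bytes]:
    return encrypt(recipient_key, message), encrypt(ESCROW_KEY, message)

alice_key = secrets.token_bytes(32)
for_alice, for_escrow = send(alice_key, b"meet at noon")

# The escrow key is just bytes. Whoever obtains it -- an insider, a breach,
# a hostile state -- can read every message the system ever carried.
stolen_key = ESCROW_KEY
print(decrypt(stolen_key, for_escrow))  # b'meet at noon'
```

Nothing in the mathematics distinguishes the authorised holder of `ESCROW_KEY` from a thief holding the same bytes — which is the whole argument.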
Terrorists adapt; civilians don't. Determined adversaries — state-sponsored groups, organised criminals, serious terrorist networks — have the resources and motivation to use non-backdoored encryption, develop their own tools, or communicate through jurisdictions where backdoors don't apply. The practical effect of encryption bans is to leave ordinary citizens, businesses, hospitals, banks, and critical infrastructure exposed while sophisticated bad actors route around the restriction within days.
The intelligence failures that precede major attacks are almost never about encryption. Post-attack investigations routinely reveal that agencies had prior information they failed to act on, not that communications were unreadable. The problem is analytical capacity and inter-agency coordination, not the existence of encryption. Sacrificing the security of billions of people to solve a problem that isn't actually encryption is a very poor trade.
Strong encryption is what keeps your banking, your medical records, your messages, and critical national infrastructure secure. Banning it to catch terrorists would be like banning locks to catch burglars — and finding that the burglars simply brought bolt cutters.
In most democratic countries: yes, unambiguously. Using a VPN, running Tor, encrypting your messages, and transacting in privacy-focused cryptocurrency are all legal activities in the European Union, the United States, the United Kingdom, Canada, Australia, and the vast majority of the world's democracies. Signal, Proton Mail, and Firefox are consumer products used by hundreds of millions of people daily.
There are exceptions worth knowing about. Russia formally restricted VPN usage in 2017 and has tightened controls since. China operates the Great Firewall and technically prohibits unauthorised VPN use, though enforcement is uneven. Iran, North Korea, Belarus, and a handful of other authoritarian states restrict or ban privacy tools outright.
Note what this list tells you. The countries that ban privacy tools are also the countries with the worst human rights records, the most imprisoned journalists, and the most surveilled populations. The correlation is not coincidental — it is causal. A government that bans privacy tools is a government that fears what its citizens might say in private. The legality of these tools in your country is itself a data point about the character of your government.
If you are in a country where these tools are restricted: the tools still work, and many people in those countries use them anyway. That is precisely what this site is for.
It is less overwhelming than it looks. You do not need to do everything at once, and partial privacy is still genuinely valuable. Think of it as four steps, each one meaningful on its own:
Step 1 — Browser. Switch to Firefox with uBlock Origin, LibreWolf, or Brave. This is a five-minute change that immediately stops the majority of third-party tracking, ad networks, and data collection that happens every time you open a webpage. If you do nothing else, do this.
Step 2 — Search. Replace Google with DuckDuckGo or SearXNG. Google's core business model is building a profile of your interests, health concerns, political views, and personal relationships from your search history. DuckDuckGo keeps none of it. The search results are good enough for almost everything.
Step 3 — Messaging. Install Signal and start using it with the people you communicate with most. One conversation moved to Signal is one conversation whose contents no corporate server holds — nothing to subpoena, sell, or steal in a breach. You don't need to convince everyone — even a few key relationships matter.
Step 4 — VPN. Add Mullvad or Proton VPN for situations where the network itself is untrustworthy: public wifi, an ISP that sells your browsing history, or content restricted across borders. A VPN is not a silver bullet, but it meaningfully raises the cost of casual surveillance.
Those four steps cover the vast majority of everyday exposure for most people. Everything else in the toolkit — Monero, Tails OS, Qubes — exists for specific threat models. Start with step one today.
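What step 4 actually changes can be pictured with a simplified sketch — not a network tool, just the metadata view. The hostnames are hypothetical, and real ISP logs contain more fields (timestamps, volumes, DNS queries), but the shape of the difference holds:

```python
# Illustrative model: what an ISP can log for the same browsing session,
# with and without a VPN tunnel. All hostnames here are made up.
session = ["clinic.example", "union-meeting.example", "newspaper.example"]

# Without a VPN: every destination, in order, tied to your account.
isp_log_without_vpn = list(session)

# With a VPN: one tunnel endpoint, repeated. The destinations move out of
# the ISP's view (the VPN provider sees them instead -- trust shifts, it
# does not vanish).
isp_log_with_vpn = ["vpn-gateway.example"] * len(session)

print(isp_log_without_vpn)
print(isp_log_with_vpn)
```

The comment in the middle is the honest caveat: a VPN relocates the trusted party rather than eliminating it, which is why provider choice (and a no-logging policy you have reason to believe) matters.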
This is the most understandable objection and the most important one to push back on. Defeatism is the most effective tool surveillance has — it requires no coercion, just the widespread belief that resistance is futile.
Partial privacy is real privacy. The goal is not to become invisible — it is to raise the cost of surveillance. Mass surveillance works because it is cheap: harvesting data from billions of people at essentially zero marginal cost per person. Every technical barrier you introduce — an encrypted message, a VPN connection, a browser that blocks trackers — increases the cost of surveilling you specifically. When enough people do this, mass surveillance becomes mass expensive surveillance, which is qualitatively different.
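The economics in the paragraph above can be made concrete with a back-of-envelope model. The numbers are illustrative assumptions, not measurements — the point is the gap in orders of magnitude, not the exact figures:

```python
# Back-of-envelope model (illustrative numbers, not measurements).
population = 1_000_000_000   # people under surveillance

dragnet_cost = 0.001         # $/person/year: automated bulk collection
targeted_cost = 10_000.0     # $/person/year: per-target effort once
                             # encryption forces manual work

print(f"dragnet everyone: ${population * dragnet_cost:,.0f}/year")
print(f"target everyone:  ${population * targeted_cost:,.0f}/year")
# A million dollars a year is a rounding error in an agency budget;
# ten trillion is impossible. Encryption moves you between those regimes.
```

Under these assumed numbers, surveilling a billion people in bulk costs about a million dollars a year, while individually targeting them all would cost ten trillion — which is what "mass expensive surveillance" means in practice.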
Data minimisation matters even when it's incomplete. A company that doesn't have your search history can't sell it, leak it, or hand it to a government. A message encrypted end-to-end can't be read in a data breach. A transaction in Monero can't be frozen by a financial institution. Each tool you use removes one category of exposure, permanently, regardless of what other data exists about you elsewhere.
The landscape is not static. Privacy tools are getting better, faster, and easier to use. Regulatory pressure is increasing. Public awareness is growing. The argument that "it's too late" was made in 2013 after Snowden, in 2018 after Cambridge Analytica, and will be made again after the next scandal. It has never been true. The people building Signal, Tor, Monero, and the rest are not losing — they are winning incrementally, on technical grounds, against adversaries with vastly more resources.
You cannot opt out of every system. You can make yourself significantly harder to surveil than the average person, protect your most sensitive communications, and support the infrastructure that makes privacy possible for everyone. That is not a lost cause. It is the only cause worth having.