In Europe and Britain, new laws are being rolled out under the reassuring banners of ‘safety’ and ‘responsibility’. The EU’s Digital Services Act (DSA) and Britain’s Online Safety Act (OSA) are sold as protecting citizens from disinformation and harmful content. In reality, they are building the most powerful censorship machinery the West has ever seen.
This did not happen overnight. Germany pioneered fast-takedown duties with its NetzDG law in 2017; the EU introduced a ‘voluntary’ disinformation code in 2018, then codified the approach in the DSA. Britain followed suit: decades-old offences for ‘grossly offensive’ communications were absorbed into the OSA, which now hands Ofcom the power to impose fines of up to 10 per cent of global revenue on platforms that fail to meet speech-policing duties. The pattern is clear: first voluntary codes, then statutory duties, then fines and enforcement.
The greatest danger is how sloppily these laws define what they punish.
The EU’s DSA requires platforms to mitigate ‘systemic risks’ from disinformation in areas as broad as health, elections, immigration, and climate. Who decides what counts as ‘disinformation’? Regulators, not citizens. In Britain, sending a ‘false communication’ can be a criminal offence if the message causes ‘non-trivial psychological or physical harm’ – a standard so vague it could cover satire, political debate, or religious truth-claims.
The intellectual foundations for these laws come straight from groups like HateLab at Cardiff University. Their 2013 All Wales Hate Crime Research Project defined hate crime and hate incidents as anything ‘perceived by the victim or any other person’ as motivated by prejudice. Intent doesn’t matter. Truth doesn’t matter. Only perception matters. That means a joke, a meme, a religious conviction, or a culturally rooted opinion can all be logged as ‘hate’. Speech becomes a minefield – not because of what you mean, but because of how someone, somewhere, might perceive it.
The UK already offers a glimpse of where this logic leads. Official police data reported in The Times earlier this year shows that around 12,000 people are arrested annually – roughly 30 a day – for allegedly offensive or harmful online posts, mostly under the Malicious Communications Act and the Communications Act. These are not fringe extremists but ordinary citizens who fall foul of speech codes built on perception and offence. The OSA will only intensify this trend, giving regulators sharper tools and bigger penalties.
This is not theoretical. In Canada, Dr Jordan Peterson was hauled before the College of Psychologists of Ontario because anonymous complainants alleged that his remarks in podcasts and on Twitter were offensive. He was not accused of breaking any law, but vague claims of harm were enough to trigger professional sanctions. That is where the DSA and OSA logic leads: prosecutions or penalties based not on intent or fact, but on perceptions of offence. Ordinary people can be targeted for humour misread as harm; for religious beliefs whose truth-claims are treated as ‘hate speech’; for cultural opinions rooted in tradition branded as prejudice; or for genuine convictions – statements believed true but later declared ‘misinformation’. The chilling effect is obvious: if saying what you believe might be prosecuted because someone perceives it as harmful, fewer people will risk speaking at all.
The final and most dangerous step is when governments tie these censorship regimes to Digital ID systems. Right now, censorship is largely platform-level: posts deleted, accounts suspended. With Digital ID, the system becomes personal. Every social media account is tied to a verified government ID. Every flagged post can be traced to a real-world citizen. Sanctions move from the platform to the individual.
Imagine this pipeline: you post a comment questioning election procedures or vaccine policy. An NGO ‘trusted flagger’ perceives it as harmful and reports it under DSA- or OSA-style duties. Because your account is tied to a Digital ID, enforcement is automatic: a warning logged against your ID, a fine deducted, or a suspension of access to online services. If your ID is also your wallet for banking, health, or employment verification, your livelihood now depends on never offending the speech codes. At that point, it is too late to push back. The infrastructure is in place, and compliance is hardwired into daily life.
Australia is at the crossroads. We already have the Online Safety Act (2021) and a powerful eSafety Commissioner. We have a voluntary disinformation code that could be made mandatory. We are rolling out Digital ID pilots under myGovID and state driver’s licence apps. The pieces are on the table. If we allow new hate crime definitions and misinformation laws to creep into our statute books, they will marry up with Digital ID. And once that marriage is made, Australia will have created a censorship system more total than anything dreamed of in Orwell’s 1984.
Europe and Britain show us the future: sloppy definitions, perception-based offences, mass arrests, massive fines, central regulators, and Digital ID enforcement. Australians still have time to resist. But the lesson is stark: once hate-crime laws and ‘safety’ regulations are in place, and once your access to work, money, and communication depends on your Digital ID, the point of no return has already passed.
And this is the message our Parliament must hear now. If MPs and Senators allow ‘misinformation’ or ‘hate crime’ bills to pass – as they tried in 2023 with ACMA’s disinformation proposal – they will be voting to lay the rails for a censorship system from which there is no retreat. It is not a debate about safety. It is a debate about whether Australians will still be free to joke, to believe, to argue, to dissent. The choice is between the open society we inherited, or the digital control grid now being built in Brussels and London.
Once Digital ID and censorship laws are fused together, safety will be the last freedom left.