On July 29, 2024, a teenager walked into a children’s Taylor Swift-themed dance class in Southport, England, and murdered three young girls with a knife. He injured ten others.
It was, by any measure, one of the most horrifying attacks on British soil in recent memory, and what followed should have been a reckoning with the catastrophic state failures that let it happen.
Instead, the British government looked at the smoldering aftermath and decided that the real enemy was the internet, and that the solution just so happened to be the mass surveillance and censorship proposals the government was already working on.
After the attack, outrage on social media turned to protests. Protests became riots. And the state’s response landed with a speed and ferocity that it had never managed to direct at, say, the agencies that let a known danger walk free for years.
A former childminder named Lucy Connolly was jailed for 31 months for a single post on X. That is three months longer than the sentence given to a man who physically attacked a mosque during the same period of unrest.
The UK was already a country where arrests for “offensive” social media posts had nearly doubled in seven years, climbing from 5,502 in 2017 to 12,183 in 2023, even as the overall conviction rate for those arrests fell. Police were locking up more and more people for what they typed, while fewer and fewer of the resulting charges actually stuck.
The Southport riots became the accelerant. A House of Commons Home Affairs Committee report used the unrest to call for a “new national system for policing” with enhanced capabilities to surveil social media activity, framing public anger as a problem of online “misinformation” rather than a consequence of the state’s own failures.
The state was dodging accountability by demanding censorship and surveillance and blaming the internet for unrest.
And now, months later, Sir Adrian Fulford’s Southport Inquiry Phase 1 report has arrived, and it takes the whole dynamic further still. Not just further toward punishing people for what they say online, but toward watching everything they do online, and everything they buy offline, too.
The report itself is 763 pages across two volumes, published on 13 April, with 67 recommendations. Its central finding is devastating.
The attack “could have been and should have been prevented.” Multiple state agencies failed repeatedly to act on years of warning signs. The attacker’s parents bore “considerable blame” for not reporting Axel Rudakubana’s worsening behavior.
Sir Adrian identified five areas of systematic failure, including critical breakdowns in information sharing and a repeated tendency to excuse the attacker’s behavior on the basis of his autism spectrum disorder.
The factual record of those failures is staggering. The attacker was referred to the Prevent counter-terrorism program three times between 2019 and 2024, with each referral closed without sustained action.
He purchased weapons, including three machetes, as well as ingredients to make the poison ricin. Police responded to five calls at the family home. And in March 2022, when the attacker was found on a bus with a knife, admitting he wanted to stab someone and thinking about poison, he was simply returned home with advice to hide the knives.
The report said that had this incident been judged in light of the attacker’s past risk, he would have been arrested, and his possession of an al-Qaeda manual and ricin seeds would have come to light.
You might think the resulting 67 recommendations would focus on making sure the people who are paid to protect children actually protect them. Some of them do. But a significant chunk has nothing to do with fixing the human laziness that ultimately killed three girls, and everything to do with building an internet surveillance apparatus that would make the average dystopian novelist blush.
Recommendation 12 asks the government to “consider systems to detect and report concerning online behaviour and suspicious combinations of purchases.”
It lists VPN use alongside name changes as behavioral red flags worth automated detection. The same recommendation wants reporting systems for “concerning purchases of dangerous but legal items (e.g., sledgehammers, bow and arrows and smoke grenades)” and “concerning combinations of purchases (e.g. castor beans, alcohol, and laboratory equipment).”
Anyone who has ever renovated a kitchen, taken up archery as a hobby, or ordered laboratory glassware because they fancied making gin is now, apparently, a person of interest.
Recommendation 24 goes after VPNs directly, asking Phase 2 to “consider age verification for the use of Virtual Private Network (VPN) software and other options to avoid VPNs being used to circumvent the age-related protections in the Online Safety Act 2023.”
Recommendation 20 calls for “mandatory reporting and information-sharing about suspicious behaviour” around knife sales, alongside “strengthening online age-verification and age verified delivery standards” and “prohibiting some online sales.”
Recommendation 19 tells Amazon to “improve its measures to prevent children from making purchases,” to “review its systems for recording details of the recipient to ensure that an accurate record of the recipient is obtained,” and to “audit its training of age verified deliveries for drivers, in particular for Amazon Flex drivers.”
Amazon is being told to collect more data about everyone who receives a parcel. The company already uses “trusted ID verification services to check name, date of birth and address details whenever an order is placed for these bladed items” and has “an age verification on delivery process that requires drivers to verify the recipient’s age through an app on their devices.”
Recommendation 22 tells Lancashire County Council to ensure frontline staff “have access to effective tools and guidance to identify and respond to” online risks, specifically naming “the risks associated with the use of Virtual Private Networks, which can enable children to bypass the safeguards established under the Online Safety Act 2023.”
It asks the Department of Health and Social Care to consider whether “reforms to national guidance, policy or training are required.” Social workers are now expected to treat VPN use as a safeguarding red flag. The same tool, you will recall, that Parliament itself told its own members to install on their phones.
Here is where the whole thing becomes genuinely absurd. VPN use in Britain exploded because the government’s own Online Safety Act censorship law forced it.
When age verification rules took effect in July 2025, Proton VPN reported a sustained 1,800 percent increase in UK sign-ups. Five VPN apps hit Apple’s UK App Store top 10 within days. Millions of ordinary people downloaded privacy tools to avoid handing their biometric data to random websites as the government’s own rules demanded.
And the government’s response to this entirely predictable mass adoption of privacy software is to propose restricting privacy software.
The House of Lords voted in January to ban VPN use by under-18s, backing an amendment to the Children’s Wellbeing and Schools Bill by 207 votes to 159. Labour’s Lord Knight acknowledged that VPNs could “undermine the child safety gains of the Online Safety Act” but warned that age-gating them could be “extremely problematic.” He noted: “My phone uses a VPN, following a personal device cyber consultation offered by this Parliament. VPNs can make us more secure, and we should not rush to deprive children of that safety.”
For now, MPs in the Commons haven’t gone along with the ban. But that rejected amendment is only one vehicle for such ideas.
So Parliament tells its own members to use VPNs. Parliament then votes to ban children from using VPNs, which would require age checks and giving up privacy. And a public inquiry now wants social workers to flag VPN use as a risk indicator.
This is the throughline that connects Southport to the wider censorship machine. The government passes laws requiring adults to hand over personal or biometric data to access lawful content. People use privacy tools to avoid handing their identity to strangers. The government then classifies those privacy tools as suspicious.
At each step, the scope of surveillance expands and the definition of “concerning behavior” gets broader, and at no point does anyone go back and fix the actual agencies that let a teenager with an al-Qaeda manual and ricin seeds, three machetes, and multiple Prevent referrals walk free for years.
These surveillance proposals are aimed not at known threats but at the whole population. They propose systems to track what you browse, what you buy, and whether you dare to use a VPN, then flag combinations that some algorithm decides look suspicious.
The Southport Inquiry confirms what the arrest statistics, the sentencing disparities, and the legislative agenda already made obvious. Britain has developed a very specific institutional reflex. When its agencies fail catastrophically, the state responds by expanding surveillance of the general population.
When the public expresses anger about those failures, the state responds by censoring the expression of that anger. The definition of “offensive” keeps expanding. And the people who actually had the information needed to prevent a massacre keep their jobs.
What failed at Southport was not a lack of data. It was not the absence of purchase-tracking algorithms. It was not that VPNs exist. What failed was human beings in positions of authority who saw danger, documented it, filed the paperwork confirming they’d seen it, and then closed the case and went home.
Building a national internet surveillance system won’t change that. Age-gating the privacy tools that Parliament recommends to its own members won’t change that. Nothing in this report’s surveillance wishlist addresses the reason three girls are dead, which is that the system already knew, and the system chose to do nothing.

