It is up to people to decide for themselves what they watch or read. As sovereign adults, we have the right to choose to watch something or not. We do not need to be babysat.
Yet new restrictions on free speech could soon be upon us. A global political assembly line is gearing up to censor the internet by passing anti-speech laws.
The U.K. government unveiled plans to hold tech companies liable for hosting “harmful” content on their platforms. Of course, what counts as harmful content is broadly subjective. Therein lies the confusion, and the danger, of passing such laws. That hasn't stopped the U.K. government from seriously attempting to pass sweeping rules in order to establish some control over the content distributed online. Online information is anybody's game, and a fair one. Many big investors don't like that; they want to control the stream of information because they think they can.
The Internet Association has already raised concerns, saying the recommendations in the draft policy are “too wide in their scope,” and that it would rather work with “something more practical and specific.”
Facebook's head of U.K. public policy, Rebecca Stimson, seemed to both side and not side with the U.K. government's push for stricter control. She said new policies are needed to protect the internet, but that they must defend digital innovation at the same time. It looks like she took the safe route with an in-between answer.
Internet companies could face fines running into the billions of dollars for hosting “harmful” content on their platforms. That is dangerous by any stretch of the imagination: so many things can be deemed harmful that the subjective ambiguity is staggering.
In the Senate, Senators Mark Warner (D-Va.) and Deb Fischer (R-Neb.) introduced a bill that would regulate online companies, especially the larger ones, over what are deemed manipulative data-grabbing practices. For example, LinkedIn recently came under fire for apparently sending email invites to people who had never joined the platform. How did it find them? They were the email contacts of LinkedIn members. LinkedIn nudged its members into clicking a button that sent invites to every contact they had uploaded to LinkedIn from their address books.
So is it fair to create a board that manages and polices these kinds of practices? Maybe an argument can be made if companies lie about the claims they make. But if a platform is merely being tricky with its words, manipulating you through language, the argument is much weaker. Deceitful practices are common across all media and advertising. Making a big fuss over the kind of general deceit that is routine in marketing is a stretch. More importantly, the consequences of such regulations are far worse.
Again, what counts as a dark or manipulative practice is too vague to be managed correctly. Until we get something specific enough, we cannot enforce large-scale bans on certain types of behavior, because the risk of opinionated rather than factual penalties is too high.
Canada's Liberal minister of democratic institutions says self-regulation is not “yielding results” in the wake of the mass shooting in New Zealand. Since the content on social media platforms affects Canadians, she says, that is not fair. The Canadian government has for some time been pushing for regulations on hate speech, discrimination, misinformation campaigns, and election meddling, and not without some success. New laws recently instated in Canada have led Google to stop running political ads during federal elections.
It is not surprising that Canada joined the party to censor speech online right after the shooting.
If censorship of violent content does get passed, the money governments can make off fines is astronomical. That depends, of course, on how rigorously and strictly the laws are enforced. But taking a chunk out of the collective network of social media platforms could pay large dividends. And when there's money to be made, motivations are never honest.
The process could also weed out smaller competitors, since a heavy regulatory climate may put too much of a burden on them, leaving only the strong to survive.
Several organizations, including the American Civil Liberties Union and the Electronic Frontier Foundation, responded to this recent attack on the First Amendment. They published a document arguing that the First Amendment applies both offline and online: “Congress shall make no law . . . abridging the freedom of speech,” a rule that holds in both the online and offline arenas.
A recent report states that the U.K. government is also working on laws to ban trolls online. Compare that to real life: trolling is what many of us do to our friends for laughs. Imagine it being against the law to troll, all because in a few situations people take serious offense? Sounds like a script out of Brave New World to me.
The first major attempt at internet regulation in the U.S. passed back in 1996: the Communications Decency Act. The act was there, in part, to prevent the spread of child pornography on the internet. Pornography was spreading like wildfire in those early days of the internet, and some of it involved minors. The law heavily protected internet users under 18.
However, it went past the point of nudity or porn and into the realm of free speech. The dangerous tipping point came when it banned certain indecent words related to sex, even ones found in printed novels; if anything similar was posted on the internet, it would have been unlawful. This affected even medical communities who were simply trying to share scientific literature. Eventually, after large protests, including one in which several popular websites turned their pages black to raise awareness, the opposition got through, and most of the act's restrictions were removed.
Some aspects of the Communications Decency Act remain intact today. One of them shields you from liability for content distributed online: publishing content on a platform is not the same as authoring it, and the act helps enforce that distinction.
Censorship never works. It has failed time and time again throughout history. Yet history repeats itself, as we already see in Australia, where censorship is currently in effect. Since the shooting happened so close to home, Australia took immediate action, and without much discord passed a law that penalizes hosting any type of violent content.
If this continues, it could spell trouble. Australia is a large country, and that it has already moved to censor speech online alarms me quite a bit. Panic and hysteria spiral even the best of us out of control.
That is why the key is not to panic in the first place when tragic events like this happen, despite the media's constant glamorizing of them. Otherwise, our freedom of speech will grow narrower and narrower before we even realize it. Large collectives of people are already trying to manage free speech in far too many ways, when it should be restricted only in the few cases that are clearly and inarguably inappropriate. Trying to regulate online content based on moral principles is far too complex for humans to manage. Frankly, it is trying to play god, which can be mighty disastrous.