Google, despite its own devotion to censorship in recent years, has become the first Big Tech company to criticize the Canadian government’s plan to fight harmful content online. According to YouTube’s owner, Canada’s proposal could result in increased removal of legitimate content.
The Canadian government proposed a new law in July 2021 that would create a new watchdog, the Digital Safety Commission (DSC), which would hold online platforms liable if they do not remove flagged harmful content within 24 hours. The proposed law identifies five types of harmful content that platforms would be required to remove after a complaint is made: terrorism, hate speech, incitement to violence, child sexual exploitation, and non-consensual sharing of sexual images.
In a blog post from Google Canada, the company noted that the requirement to remove content within 24 hours could be exploited by some groups to restrict legitimate speech.
“It’s essential to strike the right balance between speed and accuracy,” the company wrote. “User flags are best utilized as ‘signals’ of potentially violative content, rather than definitive statements of violations.”
To demonstrate that user flagging of harmful content is unreliable, Google said that of the 17.2 million videos flagged by users on YouTube in the second quarter of 2021, only 300,000 were ultimately removed. By contrast, in the same period, the company removed 6.2 million videos for violating its policies.
Canada’s proposed online safety law also recommends that platforms monitor content for the five harmful categories before users post. Google believes that could further increase the censorship of legitimate content.
“Imposing proactive monitoring obligations could result in the suppression of lawful expression … and would be out of step with international democratic norms,” the company wrote.
Michael Geist, who holds the Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa, agrees with Google, as noted by Global News. He called the proposed law “deeply flawed.”
According to Geist, hate groups could use the law to target anti-hate groups; and given the 24-hour deadline and the penalties platforms face, such complaints would likely succeed.
“Google suggests that it’s actually going to lead to over-blocking, over-removal of content,” he said. “Companies are warning that there is a threat to freedom of expression. And that threat extends to the groups that we’re trying to protect.”
Geist also explained that Google, and other platforms, would most likely use AI to proactively monitor content that could get flagged. Geist feels that “raises concerns, especially for vulnerable communities, given the potential for bias within these AI systems.”