Twitter is scrambling to cope with the pressure that social media platforms now find themselves under. Its efforts range from banning all political and “issue” ads to imagining a future in which it stops being an “evil” centralized corporation and begins a new life as a community-driven decentralized network.
In the meantime, and in the same vein of trying to improve its position and image in a world increasingly hostile to its industry, Twitter is announcing an expansion of its Trust and Safety Council.
The Council was established in 2016 and is currently made up of 40 “experts” and organizations who have an advisory role when it comes to Twitter’s “products, programs, and rules.”
According to a blog post by the company, the feedback it received on improving the Council's work was to bring in more members, with the goal of making it more diverse and engaged in “deeper conversations.” The post itself, however, does not explain where diversity and quality of conversation have been lacking so far, or who specifically will be brought onto the Council to address these problems.
What Twitter did reveal is that new members will join the Council in January, and that some of them will be experts representing perspectives apparently unrepresented until now.
The blog post, which is rich in corporate buzzwords and short on details, also reveals that Twitter maintains ongoing engagement with NGOs and activists in the form of meetings, and that it will communicate more about these activities in the future. The social media platform has no doubt that this form of cooperation makes it “better and safer.”
Twitter also shared that it will group the Council's members to cover specific real-world harm concerns more efficiently, such as safety and online harassment, human and digital rights, child sexual exploitation, suicide prevention, and mental health. Another group, the company said, will work on issues “we face as we broaden our interpretation of dehumanization.”
The goal is to help the company cope with “emerging trends and risks” and make people (ostensibly, its users) “feel safe.”