LinkedIn officially joins EU code on hate speech takedowns – TechCrunch
Microsoft-owned LinkedIn has pledged to do more to quickly eliminate illegal hate speech from its platform in the European Union by formally joining a self-regulatory initiative that aims to address the issue through a voluntary code of conduct.
In a statement today, the European Commission announced that the professional social network has joined the EU code of conduct on tackling illegal hate speech online, with Justice Commissioner Didier Reynders welcoming LinkedIn's (albeit late) participation and adding in a statement that the code “is and will remain an important tool in the fight against hate speech, including within the framework established by the legislation on digital services”.
“I am inviting more businesses to join, so that the online world is free from hate,” Reynders added.
While LinkedIn’s name has not been formally associated with the voluntary code until now, it said it had “supported” the effort through parent company Microsoft, which was already on the list.
In a statement on its decision to formally join now, LinkedIn said:
“LinkedIn is a place for professional conversations where people come to connect, learn and find new opportunities. With the current economic climate and the increased trust job seekers and professionals around the world place in LinkedIn, our responsibility is to help create safe experiences for our members. We could not be clearer that hate speech is not tolerated on our platform. LinkedIn is an important part of our members’ professional identity throughout their careers – it can be seen by their employer, colleagues and potential business partners.”
In the EU, ‘illegal hate speech’ can mean content that espouses racist or xenophobic views, or that seeks to incite violence or hatred against groups of people because of their race, skin color, religion, ethnicity and so on.
A number of Member States have national laws on the issue – and some have adopted their own legislation specifically focused on the digital sphere. The EU code is thus complementary to any existing hate speech legislation. It is also not legally binding.
The initiative began in 2016 – when a handful of tech giants (Facebook, Twitter, YouTube and Microsoft) agreed to speed up takedowns of illegal speech (or, at the least, to attach their brand names to the public relations opportunity of announcing that intention).
Since the code became operational, a handful of other tech platforms have joined; the video-sharing platform TikTok signed up last October, for example.
But many digital services (especially messaging platforms) are still not participating. Hence the Commission’s call for more digital service companies to join.
At the same time, the EU is firming up its rules in the area of illegal content.
Last year, the Commission proposed extensive updates to existing e-commerce rules (aka the Digital Services Act), setting operational ground rules intended to align online requirements with offline legal standards – in areas such as illegal content and even illegal goods. So, in the years to come, the bloc will have a legal framework that tackles – at least at a high level – the problem of hate speech, rather than just a voluntary code.
The EU also recently passed terrorist content removal legislation (in April) – which is expected to start applying to online platforms from next year.
But it’s worth noting that, on the perhaps more controversial issue of hate speech (which can overlap deeply with freedom of expression), the Commission wants to maintain a self-regulatory channel alongside incoming legislation – as Reynders’ remarks make clear.
Brussels evidently sees the benefit of having a mix of carrots and sticks when it comes to hot-button digital regulation problems – especially in the contested danger zone of speech regulation.
So, while the DSA is set to introduce standardized ‘notice and action’ procedures to help digital players respond quickly to illegal content, keeping the hate speech code around provides a parallel channel through which the Commission can encourage key platforms to commit to going beyond the letter of the law (and so lets lawmakers avoid the controversy that would come with writing broader speech moderation measures into legislation).
For several years, the EU has also had a voluntary code of practice on online disinformation. (And a spokesperson for LinkedIn confirmed that it had subscribed to it since its inception, also through its parent company Microsoft.)
And while lawmakers recently announced a plan to strengthen this code – to make it “more binding,” as they oxymoronically put it – there is certainly no sign of an appetite to legislate on that (even blurrier) speech issue.
In further public remarks today on the hate speech code, the Commission said a fifth monitoring exercise in June 2020 found that, on average, companies reviewed 90% of reported content within 24 hours and removed 71% of content deemed to be illegal hate speech.
It added that it welcomed those results – but also called on signatories to redouble their efforts, especially in providing feedback to users and in how they approach transparency around reporting and removals.
The Commission has also repeatedly called on platforms signed up to the disinformation code to do more to tackle the tsunami of “fake news” spreading across their platforms, including – on the public health front – what it dubbed a coronavirus “infodemic” last year.
The COVID-19 crisis has undoubtedly helped focus the minds of lawmakers on the complex issue of effectively regulating the digital sphere and has likely accelerated a number of EU efforts.