Posted on 27 June 2023
Failures and lack of transparency: the Slovak regulator investigates the role played by platforms in the spread of harmful content
Following the terrorist attack outside an LGBTQ+ bar in Bratislava on 12 October 2022, the Council for Media Services (CMS) published a report entitled "The Bratislava Shooting, Report on the role of online platforms", jointly produced with Reset, investigating the role of Twitter, Facebook, Instagram, and YouTube before and after the attack.
Going further, the Slovak regulator, in partnership with Trust Lab*, has produced a new report investigating the spread of harmful content and the effectiveness of reporting mechanisms on four major online platforms: Facebook, Instagram, TikTok and YouTube.
Findings:
- Prevalence of harmful or potentially illegal content online: Despite the efforts of the CMS, the analyzed platforms still host harmful and/or potentially illegal content related to the terrorist attack. Access to such content is relatively easy: Trust Lab's monitoring identified 253 unique links to such content, of which 123 were viral posts gathering thousands of views.
- Content moderation failures: Trust Lab reported all 253 instances of harmful and/or potentially illegal content to the respective services via their user reporting mechanisms. Despite these reports, the platforms removed only 12 pieces of content (Facebook: 8, YouTube: 3, Instagram: 1, TikTok: 0). Surprisingly, when a sample (N=26) of manifestly illegal content was escalated by a national regulatory authority, rather than by ordinary users, the platforms took swift action and removed all 26 reported pieces of content.
- Non-functional reporting mechanisms: Given the absence of any meaningful response from the platforms, it may be concluded that the mechanisms for reporting harmful and/or potentially illegal content available to users are essentially non-functional.
- Borderline content: The monitoring revealed several instances of borderline content, most of which was available on TikTok. The content included misleading, harmful or potentially illegal elements, such as incitement to violence or praise of terrorist acts. Its ambiguity effectively precludes the regulator and the platforms from enforcing the law and the terms of service, respectively.
Recommendations:
- Improve transparency of content reporting mechanisms: The reporting mechanisms available via the user interface often lack transparency and effectiveness. Once a report is submitted, the user cannot track or review its status.
- Remove systemic barriers to data access: The user interface often lacks sufficient data portability and accessibility, which prevents users from making accurate reports of potentially illegal content. For example, TikTok allows users neither to use the 'Find' command nor to copy the URLs of comments underneath posts, which renders DSA orders against illegal content useless. Platforms ought to provide users with user-friendly notice and action mechanisms that foster transparency and compliance monitoring.
- Dedicate sufficient resources to content moderation in smaller markets: It is clear that platforms neglect content moderation in smaller markets, yet the harms there are at least as serious as in other, much larger, regions. The CMS therefore recommends that platforms dedicate sufficient financial and human capital to content moderation, guided by increased cooperation with local research communities.
- Downranking of content in crisis situations: The vast majority of harmful and/or illegal content was found to have been produced in the first 48 hours after the attack. It thus seems reasonable to suggest that, during a major crisis such as a terrorist attack, platforms downrank content produced by non-credible users and instead promote trustworthy information that follows ethical journalism standards (an illustrative sketch of such a rule follows below).
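To illustrate the last recommendation, the following is a minimal, purely hypothetical sketch of a crisis-window downranking rule. It is not taken from the CMS/Trust Lab report: the Post fields, the credibility threshold and the scaling factors are assumptions made solely for illustration; only the 48-hour window echoes the finding above.

```python
# Illustrative sketch only -- not from the CMS/Trust Lab report.
# Assumes a hypothetical ranking pipeline where each post already carries a base
# relevance score, an author credibility score, and a creation timestamp.
from dataclasses import dataclass
from datetime import datetime, timedelta

CRISIS_WINDOW = timedelta(hours=48)  # most harmful content appeared within 48 hours of the attack
DOWNRANK_FACTOR = 0.2                # hypothetical penalty for non-credible authors
BOOST_FACTOR = 1.5                   # hypothetical boost for trustworthy sources

@dataclass
class Post:
    base_score: float          # relevance score from the normal ranking system
    author_credibility: float  # 0.0 (non-credible) .. 1.0 (e.g. an established newsroom)
    created_at: datetime

def crisis_adjusted_score(post: Post, crisis_start: datetime) -> float:
    """Return a ranking score that downranks non-credible authors during a crisis window."""
    in_crisis_window = crisis_start <= post.created_at <= crisis_start + CRISIS_WINDOW
    if not in_crisis_window:
        return post.base_score
    if post.author_credibility < 0.5:
        return post.base_score * DOWNRANK_FACTOR  # demote content from non-credible users
    return post.base_score * BOOST_FACTOR         # promote sources meeting journalism standards
```

In practice a platform's ranking system would combine many more signals; the point of the sketch is simply that the crisis window and source credibility become explicit ranking inputs during such events.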
The report also proposes an exploratory template for assessing the systemic risks of platforms under the Digital Services Act regime.
Source: Council for Media Services (SK)
* Trust & Safety Laboratory (Trust Lab) was founded in 2019 by senior Trust & Safety leaders from Google, YouTube, Reddit, and TikTok with a mission to make the web safer for everyone.