Disinformation & online platforms: focus on two reports from France and Ireland

posted on 28 September, 2021   (public)

Standardised procedures, more transparency and collaboration with researchers: the recommendations of the French CSA and the Irish BAI

On 21 September, the French Conseil supérieur de l'audiovisuel (CSA) released its second assessment report on the measures taken by platforms to tackle information disorder, as required by the Law on the fight against information disorder (LOI n° 2018-1202 du 22 décembre 2018 relative à la lutte contre la manipulation de l'information). Meanwhile, on 16 September, the Broadcasting Authority of Ireland (BAI) published and presented "CovidCheck", a report analysing the actions of online platforms with regard to Covid-19 disinformation.

Both the CSA and the BAI organised an online event to present the outcomes of their respective reports. 

The new assessment report by the CSA

Based on data reported by 11 platforms* for the year 2020, this second report primarily assesses the measures taken to tackle Covid-19 disinformation, as well as the implementation of the recommendations made by the CSA in its previous report.

In general, the CSA welcomes the increasing quantity and quality of data provided by the platforms and their willingness to cooperate with the regulator. However, some major and crucial points still require further collaboration and transparency from the stakeholders.

As a result, the CSA issued 16 recommendations that the platforms should take into account in the forthcoming (yearly) monitoring exercise. Among the points raised by the regulator, the following issues are worth highlighting:

  • Notice and complaint mechanisms: Even though they have been implemented by all stakeholders, their accessibility and visibility are still not deemed sufficient for some actors (search engines especially). The CSA calls for better communication with the complainants and human intervention in the decision-making process (automatic moderation has increased in response to the challenges raised by the Covid-19 crisis, with enhanced risks of over-removal of content).
  • Use of algorithms: The CSA calls for more transparency towards the users of the services, as well as more information on the implementation and rules of ethics applying to recommendation algorithms. Such data would allow a better understanding and analysis of the risks of manipulation.
  • Promotion of official content: The regulator acknowledges the efforts made by platforms with regard to the promotion of content from press agencies and media providers during the health crisis.
  • Fight against accounts massively spreading false information: The CSA recommends strengthening collaboration between stakeholders to prevent such accounts from reappearing, and better informing users about the range of measures taken to tackle disinformation on their services.
  • Commercial communications spreading false information: In this regard, more transparency and data are required to better identify and assess the risks at stake.
  • Media literacy: The regulator encourages online platforms to continue and strengthen collaboration with researchers and among platforms.

More generally, the CSA also recalls the crucial need to ensure respect for individual rights, especially the right to freedom of expression, within content moderation mechanisms. Freedom of expression and manipulation of information will be a key focus of the forthcoming CSA assessment, as presidential elections will be held in France in 2022.

*Dailymotion, Facebook, Google (Google Search and YouTube), LinkedIn, Microsoft (Bing and Microsoft Advertising), Snapchat, Twitter, Unify (Doctissimo), Webedia, Wikipédia and Verizon Media (Yahoo Search).

The CovidCheck report of the BAI

Commissioned by the BAI from the Institute for Future Media, Democracy and Society (FuJo) at Dublin City University (DCU), this report is part of the monitoring project on the Code of Practice on Disinformation implemented by the European Regulators Group for Audiovisual Media Services (ERGA) and is the third in this series*.

The document is based on the reports shared by platforms (Facebook, Google, Microsoft, Mozilla, TikTok and Twitter) regarding their Covid-19 actions, as well as on further Irish case studies on Facebook, Twitter, and AI and automation. In a nutshell, some key elements of the CovidCheck report and recommendations from the BAI are as follows:

  • Actions taken: Only 32% of the actions were new initiatives, the most common actions being: links to information from the World Health Organisation (WHO) or national health authorities (25%); advertising responses (17%); and blocking, removing or demoting content (13%).
--> The Irish regulator recommends establishing meaningful KPIs to report results and outcomes in key areas such as content and account removals, fact-checking, content labels and media literacy campaigns.
  • Regional scope: 34% of the actions covered all European Union countries, while 12% covered some but not all EU Member States.
--> The BAI calls for a standardisation of processes: on the one hand, standardised reports from platforms to ensure that relevant information - such as the regional application of the measures - is provided; on the other hand, standardised procedures to verify the implementation of the platforms' actions.
  • Use of algorithms: Even though platforms referenced the use of AI to deal with manipulative behaviour, fraudulent commercial content and deepfakes, the percentage of actions taken through automated versus human moderation remains unclear.
--> The BAI calls for more data and transparency regarding the use of automated systems (the mechanisms implemented, the languages covered, the kinds of disinformation they are trained to detect, the risk assessments conducted…). Such data should be shared with regulators and researchers.
  • Disinformation formats: Problematic content usually comes more from the comments under a post than from the post itself.
--> The regulator encourages stakeholders to introduce a framework addressing disinformation in comments, in compliance with the European Convention on Human Rights and the right to freedom of expression.

The BAI also calls for the inclusion in the Code of Practice of a commitment to establish an independent auditor and, more generally, deplores the lack of country-specific data and the slow implementation of previous recommendations.

In any case, as pointed out by Celene Craig, Deputy Chief Executive of the BAI, "more systematic and detailed monitoring will require significantly increased resources".

*The ElectCheck 2019 report on political advertising on platforms (see EPRA news) and the CodeCheck report (2020), which focused on activities to empower consumers and the research community.

Source: CSA and BAI websites