AI-driven tools in the media: Council of Europe issues background paper ahead of ministerial conference

posted on 20 March, 2020   (public)

'The goal should be to use the affordance of new technologies to create optimal conditions for […] freedom of expression to flourish'.

On 3 March 2020, the Council of Europe published a background paper ahead of the Ministerial Conference 'Artificial Intelligence - Intelligent Politics - Challenges and opportunities for media and democracy', which was initially planned for 28-29 May and has been postponed to 22-23 October.

The background paper, authored by leading academics from the Institute for Information Law (IViR) of the University of Amsterdam under the leadership of Natali Helberger, aims to assess the risks of the increasing use of AI-driven tools in the media and to provide food for thought on how those risks (manipulation, censorship, propaganda or misinformation) could be transformed into opportunities to foster freedom of expression (in the light of Article 10 ECHR) and the overall quality and diversity of the information on offer.

The document highlights that the use of AI-driven tools operates at the intersection of freedom of expression, the right to privacy and the prohibition of discrimination, and that regulatory frameworks and the division of responsibilities between regulatory authorities need to take into account the way in which these different human rights interlink.

AI-driven tools can affect three main areas of the media landscape: support for journalists in research (fact-checking, translation and data-processing tools…), content production (automatically generated content) and content distribution (recommendation systems). The paper adopts a three-pronged approach by looking at:

  • Implications for the news media: who is responsible for the AI-driven tools and their effects? How to translate values such as objectivity and diversity in a digital environment? How to ensure editorial independence?
  • Implications for news users: how to avoid manipulation and political use of AI-driven tools? How to secure freedom of expression in the context of automated content moderation? How to protect privacy and avoid filter bubbles?
  • Implications for society: how to ensure that AI-driven tools do not distort the market structure for news media, especially for small and local media, by favouring new data-rich players?

Based on the analysis of the use of AI-driven tools in the light of Article 10 ECHR, the report draws a number of conclusions and highlights the need for further initiatives, notably:

Regarding the news media industry

  • Promoting experimentation with, and investment in AI-driven tools
  • Translating journalistic values into a digital context by encouraging the development of professional algorithmic ethics standards (new internal procedures, transparency and explainability of AI-driven tools, use of AI to promote freedom of expression)
  • Media reporting on the impact of AI to reveal threats and potential dangers; clarification of editorial responsibility for automated processes (e.g. recommender systems or robot journalism)
  • Securing necessary resources and independence for public service media.

Regarding users

  • Identification and protection of vulnerable user groups to ensure ‘equal enjoyment of freedom of expression’
  • Providing a clear framework of rights and responsibilities of each actor, including the users. This includes developing solutions that give users more control over the impact of AI tools on their media consumption.

Regarding society

  • Member States should ensure access to skills and technological tools for local, smaller and community media and promote diversity and innovation
  • Member States and all audiovisual actors, including regulators, should assess the real impact of AI-driven tools on media pluralism and the effective exposure to diversity.
  • Member States should put in place appropriate measuring frameworks and indicators to assess the risks stemming from AI tools to diversity, social cohesion and the maintenance of a resilient public sphere.

The paper also points out the lack of empirical evidence and research so far on the consequences of the use of AI in the media. It is still difficult to fully comprehend its effects, which may also differ depending on the socio-cultural context. In any case, if used in optimal conditions, AI-driven tools can allow the media to provide more accessible, responsive, high-quality and stimulating content for the benefit of public debate.

Source: Council of Europe

Further EPRA background: the opportunities and challenges related to the use of Artificial Intelligence in the media were the topic of a plenary session at the 50th EPRA meeting in Athens in October 2019.