Council of Europe publishes Guidelines on the responsible implementation of AI systems in journalism

posted on 14 December, 2023   (public)

Practical guidance issued on how AI systems should be used to support the production of journalism and contribute to its resilience in the digital age


While the use of AI systems can be a competitive factor in the digital marketplace and contribute to the resilience of journalism in the digital age, the ability to access and use AI systems in conformity with human rights and professional values is essential. Against this background, on 1 December 2023, the Steering Committee on Media and Information Society (CDMSI) adopted Guidelines on the responsible implementation of artificial intelligence (AI) systems in journalism, which were elaborated by the Committee of Experts on Increasing Resilience of Media (MSI-RES).

The Guidelines provide practical guidance to news media organisations and media professionals to implement journalistic AI systems in a responsible manner.
They also offer guidance for AI technology providers and platform companies and for States and national regulatory authorities to create favourable conditions for such a responsible implementation.

The Guidelines cover:

  • the use of AI systems at different stages of journalistic production, from the decision to use AI systems and their identification and acquisition, to their incorporation into professional and organisational practice. The Guidelines point out that the decision to implement journalistic AI systems in the newsroom is a strategic choice with important consequences for internal processes and workflows.

  • the external dimension of using AI in newsrooms, i.e. its effects on audiences and society.

  • the respective responsibilities of technology providers and platforms, as well as member States and regulatory authorities.

Focus on the role of regulatory authorities 

- (6.3.) Independent NRAs (or news media self-regulatory bodies) should be encouraged to help develop guidelines and standards for the responsible use and development of journalistic AI (e.g. the legal status of training data, best practices for fair data extraction, attribution and labelling of synthetic content, transparency and human oversight, etc.).
- (6.4.) Independent NRAs, news media self-regulatory bodies or standard setting bodies should be encouraged to help news organisations develop procurement guidelines, making available standard clauses for the responsible procurement of journalistic AI systems. (Annex 1 of the Guidelines provides a Procurement checklist) 
- (6.5.) Independent and accountable regulators may play a role in creating the conditions for a critical review of the fairness of commercial relationships and contractual agreements between news organisations, platforms and technology providers (e.g. possible imbalances in negotiating power for smaller/local news organisations).

Additional information:

- The work on the Guidelines was led by Richard Fletcher, from the Reuters Institute for the Study of Journalism (UK), and Natali Helberger, from the Institute for Information Law of the University of Amsterdam (NL).
- The Guidelines were developed in parallel with the work on the Council of Europe’s future framework convention on AI.
- A new Committee of Experts (MSI-AI) will commence its work in 2024, focusing on the impacts of generative AI on freedom of expression.

(* A nicer publication format will be available in the new year)


Source: Council of Europe, Media Division

Further EPRA background: The 9th meeting of the EPRA "AI & Regulators Roundtable", held on 29 September 2023, focused on AI-based tools and journalism, with a presentation by Prof. Sophie Lecheler from the University of Vienna.