
Ofcom reports on the use of AI in content moderation

posted on 05 August, 2019

Artificial Intelligence: an imperfect but indispensable tool for moderation

On behalf of Ofcom, Cambridge Consultants recently released a report entitled “Use of AI in content moderation”, examining the role and capabilities of automated approaches such as artificial intelligence (AI) and machine learning in online content moderation and the monitoring of harmful content.

The falling cost of computational power and the growing availability of data are encouraging industry to invest in and develop AI technologies, and algorithms are likely to become a key component of the audiovisual and broadcast media industries. Against this background, the report identifies the specific challenges that online media stakeholders face when deploying AI content moderation systems and suggests ways to enhance the capabilities of AI technologies in response.

The report points out the following key findings:

Regarding the use of AI for the moderation itself:

  • AI is key: moderation requires cultural awareness and a contextual understanding of the relevant community ‘standards’. In this regard, human input remains necessary. However, the sheer scale of content to moderate and the negative psychological impact already observed among platforms’ human moderators make AI tools an essential component of content moderation systems. First, AI tools can perform pre-moderation and reduce the volume of content requiring human intervention. They can then assist humans in post-moderation (translating text, prioritising content for review, etc.) [cf. figure below; a minimal workflow sketch follows the figure caption]. Moderation should therefore be a workflow combining automated systems and human moderators, and the availability and use of AI and automated systems should be encouraged for platforms of all sizes.
  • Audit and transparency: AI suffers from a lack of public trust, in particular because unconscious human or technological bias (such as the choice of algorithm parameters or the selection of data used for training and validation) may lead to prejudicial and unfair decisions. Moreover, as mentioned above, different definitions or legal rules may apply to harmful content depending on the context or the country, so a one-size-fits-all moderation system is likely to create inconsistencies. In response, the report recommends, on the one hand, that audits and training be conducted regularly to analyse the impact of the algorithms and adapt them where necessary (a simple audit sketch also follows below), and, on the other hand, that stakeholders provide transparency about their AI systems so that users and the public can fully understand the decision-making process.
  • Datasets must be shared: user-generated content is increasingly difficult to analyse, owing to the wide variety of formats and the complexity of content combining several elements (video, audio, text, emoticons, etc.), which must be interpreted as a whole to determine whether or not it is harmful. To build a better understanding of users’ behaviour and to keep up to date with “evolving categories of harmful content”, the sharing of datasets between platforms and moderation service providers should be encouraged. Sharing data will also help stakeholders gain better cultural awareness and contextual understanding. Moreover, from the data collected, AI can generate additional training data with which to test the algorithms.

Three ways in which AI improves the effectiveness of content moderation - Source: Cambridge Consultants
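As a concrete illustration of the workflow described above, the following Python sketch shows how automated pre-moderation might triage content: confidently harmful items are removed, confidently benign items are approved, and the uncertain remainder is queued for human review in priority order. The `harm_score` stub, the blocklist and the threshold values are all hypothetical placeholders for illustration, not anything specified in the report.

```python
import heapq

# Hypothetical harm-probability model: a trivial keyword stub stands in
# for a trained classifier, for illustration only.
BLOCKLIST = {"badword1", "badword2"}  # placeholder terms

def harm_score(text: str) -> float:
    words = set(text.lower().split())
    if words & BLOCKLIST:
        return 0.99   # confidently harmful
    if len(words) < 3:
        return 0.01   # confidently benign (toy heuristic)
    return 0.50       # uncertain: needs a human

REMOVE_THRESHOLD = 0.95   # assumed operating point for auto-removal
APPROVE_THRESHOLD = 0.05  # assumed operating point for auto-approval

def triage(posts):
    """Split posts into auto-removed, auto-approved, and a human review
    queue ordered by descending harm score (prioritisation)."""
    removed, approved, review = [], [], []
    for post in posts:
        score = harm_score(post)
        if score >= REMOVE_THRESHOLD:
            removed.append(post)
        elif score <= APPROVE_THRESHOLD:
            approved.append(post)
        else:
            # Negate the score so the riskiest items pop first.
            heapq.heappush(review, (-score, post))
    return removed, approved, review

removed, approved, review = triage(
    ["hi there", "badword1 rant", "some borderline message here"]
)
print(len(removed), len(approved), len(review))  # 1 1 1
```

Only the middle band of uncertain content reaches human moderators, which is how pre-moderation reduces both the review workload and moderators’ exposure to harmful material.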
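The report’s call for regular audits can likewise be made concrete. A minimal sketch, assuming audit samples labelled with a hypothetical group attribute (e.g. dialect) and ground truth: it measures how often benign content from each group is wrongly flagged, and a large gap between groups is exactly the kind of bias an audit should surface.

```python
from collections import defaultdict

def false_positive_rates(samples):
    """samples: iterable of (group, model_flagged, truly_harmful) triples.
    Returns, per group, the share of benign content wrongly flagged."""
    flagged = defaultdict(int)
    benign = defaultdict(int)
    for group, model_flagged, truly_harmful in samples:
        if not truly_harmful:   # only benign items can be false positives
            benign[group] += 1
            if model_flagged:
                flagged[group] += 1
    return {g: flagged[g] / n for g, n in benign.items() if n}

# Toy audit set: (group, model said harmful?, actually harmful?)
audit = [
    ("dialect_a", True,  False),
    ("dialect_a", False, False),
    ("dialect_b", False, False),
    ("dialect_b", False, False),
]
print(false_positive_rates(audit))  # {'dialect_a': 0.5, 'dialect_b': 0.0}
```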

Regarding the use of AI to limit the “online inhibition effect”:

Anonymity, asynchronous communication and the empathy deficit of the online world can all explain the lack of restraint felt by users and the rise in harmful online content. AI can be used to promote socially positive engagement and thereby influence online behaviour. For instance, AI tools can provide nudging methods such as prompting users to think again before publishing, or suggesting a less harmful alternative wording (a minimal sketch follows below). Platforms may also add a chatbot for direct interaction with users, or inform users about the content they are about to see.
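A minimal sketch of such a nudge, assuming a toy hostility lexicon and a hypothetical `confirm` callback standing in for the platform’s user prompt; none of these names or thresholds come from the report.

```python
HOSTILE_WORDS = {"stupid", "hate"}  # toy lexicon for illustration only

def hostility_score(text: str) -> float:
    words = text.lower().split()
    return sum(w in HOSTILE_WORDS for w in words) / max(len(words), 1)

def suggest_alternative(text: str) -> str:
    # Placeholder rewrite; a real system might use a generative model
    # to propose less hostile phrasing.
    softeners = {"stupid": "unhelpful", "hate": "dislike"}
    return " ".join(softeners.get(w.lower(), w) for w in text.split())

NUDGE_THRESHOLD = 0.2  # assumed operating point for showing the prompt

def submit_post(text: str, confirm) -> bool:
    """Nudge flow: score the draft; above the threshold, show the user a
    prompt and a softer alternative, and publish only on confirmation."""
    if hostility_score(text) < NUDGE_THRESHOLD:
        return True  # low risk: publish without interruption
    alternative = suggest_alternative(text)
    return confirm(
        f"Your post may come across as hurtful.\n"
        f"Suggested rewording: '{alternative}'\n"
        f"Publish anyway?"
    )

# Example: auto-decline the prompt to simulate a user who reconsiders.
published = submit_post("you are stupid and I hate this", confirm=lambda msg: False)
print(published)  # False: the post was held back
```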

For a better understanding of AI concepts, the report includes an annex summarising and further explaining key AI technologies.

Source: Ofcom Website


Additional EPRA background: AI will be the topic of a plenary session at the upcoming EPRA meeting in October in Athens. See the draft agenda for more details.
