WHEN: 8 July 2020, 14:30 – 17:30
WHERE: Online via Zoom
ORGANIZED BY: Office of the OSCE Representative on Freedom of the Media
LIVESTREAM RECORDING: https://youtu.be/vpEh36kgMf0
MORE INFO: From the Organization for Security and Co-operation in Europe (OSCE)
In the online information ecosystem, AI is often deployed to detect, evaluate and moderate content at scale, with the aim of identifying and filtering out illegal and potentially harmful content online. At the same time, AI is used to rank, promote, demote and monetize the massive amounts of content available. The monetization-driven business models of internet intermediaries rely on the collection and processing of vast amounts of data about internet users, and often amplify sensational content. Because most of these processes and AI-powered tools lack transparency, accountability and effective remedies, their increasing use accelerates existing challenges to freedom of expression, access to information and media pluralism.
The RFoM event on “The rise of artificial intelligence and how it will reshape the future of free speech” will provide a platform to discuss the role of AI in shaping and arbitrating online information, and its impact on freedom of expression. The event will focus on possible ways forward to safeguard free speech when deploying automation and AI. An RFoM Strategy Paper with preliminary recommendations, which will provide the foundation for these discussions, will be published ahead of the event.
The event will bring together a broad audience, including experts from participating States, civil society, academia, the tech industry and other international stakeholders, to enable interactive discussions.
First session: The use of AI in content moderation: challenges and risks for freedom of expression
Following opening remarks by the OSCE Representative on Freedom of the Media, Harlem Désir, the first session will focus on the use of AI in content moderation and the challenges this poses to freedom of expression. While the general need for regulatory frameworks for moderating speech will be discussed, a focus will be put on how AI accelerates existing challenges to free speech. The session seeks to address both the roles and responsibilities of internet intermediaries in tackling illegal and potentially harmful content, such as security threats or ‘hate speech’, and the particular role of States in ensuring responsible content moderation without harming democratic discourse. The discussions will also outline how the impact of AI on free speech has become even more visible during the COVID-19 pandemic.
Second session: How does the use of AI to rank, monetize and curate content impact free speech?
[Including CDT’s Emma Llanso]
The second session will focus on how AI is deployed to sort, prioritize and curate content, and the impact such technologies have on plurality and diversity online. The session will also address how content curation is closely linked to the monetization of content by some private actors, and the impact that such business models, as well as AI-powered State surveillance, have on access to information and freedom of expression. Particular attention will be paid to the safeguards required to guarantee freedom of expression and media freedom, and to ensure democratic discourse.
The event will be followed by a consultation phase and expert meetings to develop concrete policy recommendations on safeguarding free speech when AI is deployed.
For any questions regarding substantive issues, please contact Julia Haas, [email protected].