EU Tech Policy Brief: May 2021 Recap
This is the May 2021 recap issue of the Centre for Democracy & Technology Europe’s monthly Tech Policy Brief. It highlights some of the most pressing technology and internet policy issues under debate in Europe, the U.S., and internationally, and gives CDT’s perspective on them. Our aim is to help shape policies that advance our rights in a digital world. You can subscribe here.
Op-ed: European Plans to Regulate Internet Will Have Major Impacts on Civic Space at Home and Abroad
In her latest op-ed, CDT Europe Director Iverna McGowan explains what the implications of the Digital Services Act (DSA) will be for online civic space. The DSA presents a unique opportunity to tackle opaque algorithms and recommender systems of online platforms, phenomena that can disproportionately impact vulnerable and at-risk groups and those who work to protect them.
However, the current draft risks giving repressive governments the opportunity to digitalise “Ministries of Truth” by empowering them to suppress speech, silence civil society, and undermine its crucial work. The effects could be especially dramatic in states where civic space and the rule of law are already under pressure: How would a government agency in Poland treat the online content of LGBTQIA+ activists? How would the online speech of those standing up for refugee rights be handled in Hungary? The stakes are high, and a robust participatory process before a final draft is adopted will be vital to ensure that courts are empowered by ministries of “Justice” rather than “Truth”.
The piece was first published on Open Global Rights, and is available in English, Spanish, French, and German.
CDT Europe Responds to the Council of Europe’s CAHAI Consultation on a Legal Framework on AI
CDT Europe was grateful for the invitation to submit a response to the Council of Europe’s Ad hoc Committee on Artificial Intelligence (CAHAI) consultation process. The consultation was intended to examine the feasibility and potential elements of a legal framework for the development, design, and application of AI, based on the Council of Europe’s standards for human rights, democracy, and the rule of law. The consultation survey covered a broad range of potential applications of AI, and raised many complex policy questions. The survey’s structure, however, did not allow for the nuance and in-depth analysis that addressing these topics will ultimately require. Accordingly, CDT Europe used this opportunity largely to share some of our overarching concerns about ways in which AI can result in discrimination and exploitative uses. Key points of the submission focussed on:
- legal mechanisms and binding instruments;
- risk-based approaches;
- auditing and human rights impact assessments;
- discriminatory impact of AI;
- AI used in content moderation; and
- biometric identification, including facial recognition.
CDT Publishes New Report on Capabilities and Limits of Automated Multimedia Content Analysis
A new CDT study finds that state-of-the-art machine learning techniques for analysing user-generated content have key limitations that create human rights risks when used to evaluate people’s multimedia content. The report focuses on two main categories of tools: matching models, which are generally well-suited for analysing images, audio, and video, particularly where the same content is circulated repeatedly; and predictive models, which can be well-suited to analysing content for which ample and comprehensive training data is available.
Even in these scenarios, these tools have many limitations, and their effects are magnified in more challenging settings. Any applications of automated multimedia content analysis tools should consider at least the following five limitations:
- Robustness: State-of-the-art automated analysis tools that perform well in controlled settings struggle to analyze new, previously unseen types of multimedia.
- Data Quality: Decisions based on automated content analysis risk amplifying biases present in the real world.
- Lack of Context: Automated tools perform poorly when tasked with decisions requiring appreciation of context.
- Measurability: Generalized claims of accuracy typically do not represent the actual multitude of metrics for model performance.
- Explainability: It is difficult to understand the steps automated tools take in reaching conclusions, and there is no “one-size-fits-all” approach to explainability.
CDT Comments on Facebook Oversight Board’s Trump Decision
The Facebook Oversight Board affirmed Facebook’s January 7, 2021, decision to restrict the ability of then-President Donald Trump to post content on his Facebook page and Instagram account. CDT welcomed the ruling’s recognition that high-profile political figures inciting violence on social media represent a legitimate danger to the public.
However, the ruling also raised a number of important questions, and put the ultimate decision of whether Trump should be on the Facebook platform squarely back on Facebook’s shoulders. The Board gave Facebook six months to decide whether to permanently disable Trump’s account, or instead issue a time-bound account suspension. It also recommended a number of improvements and clarifications to Facebook’s existing policies and enforcement practices.
Key remaining questions include: How does Facebook treat high-profile individuals, and what does this mean for other users? Will the Board ever tell Facebook that its policies violate substantive human rights standards? And what does this mean for Trump’s account – and for other world leaders? Read CDT’s full comments here.
UN Special Rapporteur on Free Expression Releases New Report on Disinformation, Referencing CDT’s Studies
The United Nations’ Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, Irene Khan, published a new report, “Disinformation and freedom of opinion and expression.” The report examines the threats posed by disinformation to human rights, democratic institutions, and development processes.
While acknowledging the complexities and challenges posed by disinformation in the digital age, the Special Rapporteur finds that the responses by states and companies have been problematic, inadequate, and detrimental to human rights. The report calls for multidimensional and multistakeholder responses that are well-grounded in the international human rights framework, and urges companies to review their business models. It also pushes states to recalibrate their responses to disinformation, and encourages enhancing the role of free, independent, and diverse media, investing in media and digital literacy, empowering individuals, and rebuilding public trust.
CDT was delighted to see our work referenced in the report several times, notably our latest online disinformation study, “Facts and their Discontents: A Research Agenda for Online Disinformation, Race, and Gender.” CDT also previously provided comments to the Special Rapporteur in preparation for the report.