

I* Newsletter: Dangers of DNS mismanagement, new AI ethics standard, bugs in our pockets

“I*: Navigating Internet Governance and Standards” was a monthly newsletter distributed by the Center for Democracy & Technology (CDT), and compiled by the Public Interest Technology Group (PITG), a group of expert technologists who work across a complex landscape of internet standards development (“I*”) organizations that convene in the public interest.

The newsletter highlighted emerging internet infrastructure issues that affect privacy, free expression, and more, clearly explaining their technical underpinnings.

# Facebook outage shows danger of mismanaging DNS: On the morning of October 4, Facebook and its Instagram, WhatsApp, Messenger, and Oculus services became inaccessible worldwide and remained unusable for more than five hours. The impact on the 3.5 billion people and businesses that use these social media, messaging, livestreaming, e-commerce, virtual reality, and other services was staggering.

Ephraim Kenyanito of ARTICLE 19 writes, “The Facebook, WhatsApp, and Instagram outage shows the dangers of over-consolidation of internet services. In a single day, a DNS error put global communications infrastructure and the users that depend on it at risk. Such outages are a danger of misconfiguring homegrown and in-house DNS management. This incident should encourage true decentralization of internet infrastructure so that users have meaningful and robust access to information.”

# A new standard on AI ethics: According to a press release last month from the Institute of Electrical and Electronics Engineers (IEEE), the IEEE 7000™-2021 – Standard Model Process for Addressing Ethical Concerns During System Design “provides a clear methodology to analyze human and social values relevant for an ethical system engineering effort.”

The standard itself is heavily focused on procedures and management, while the policy field offers creators of algorithmic systems little concrete guidance to follow. Law worldwide presently places virtually no constraints on algorithmic systems, so without regulation, transparency, and accountability, it is unclear whether this standard will have its intended impact.

Scholar Niels ten Oever points out, “While the standard reflects good intention, its future efficacy is especially dubious because its guidance is weaker than both the existing ISO 26000 social responsibility standard and the United Nations Guiding Principles on Business and Human Rights.” Any attempt to mitigate AI harms through standards requires a multistakeholder and democratic approach to governance that has teeth.

# Chrome adding support for querying HTTPS DNS records by default: When a web browser is trying to connect to a website, it first needs to look up the IP address of the name typed into the address bar. To do this, it queries a DNS resolver for an “A” or “AAAA” Domain Name System record — basically, a mapping of a domain name to its IP address. An “A” record for google.com, for instance, might point to 142.250.217.78 (the exact address depends on where you are).
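The lookup described above can be reproduced with Python's standard library. This is a minimal sketch using `socket.getaddrinfo`, which asks the operating system's configured DNS resolver — it illustrates the concept, not Chrome's actual resolver code.

```python
import socket

def resolve_a_records(hostname):
    """Look up the IPv4 addresses (A records) a hostname resolves to."""
    # getaddrinfo consults the system's DNS resolver;
    # AF_INET restricts answers to IPv4 (A records).
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # for IPv4, sockaddr is an (ip, port) pair.
    return sorted({info[4][0] for info in infos})

# "localhost" resolves via the local hosts file, so no network is needed.
print(resolve_a_records("localhost"))
```

Passing `socket.AF_INET6` instead would return AAAA (IPv6) answers.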

Google Chrome is shipping support for querying HTTPS DNS records in addition to its current support for traditional A/AAAA records. With this change, when a user queries for a website, Chrome will also look for HTTPS records. If an HTTPS record exists for the queried website, Chrome will automatically use HTTPS instead of HTTP to make the user’s initial connection and send future traffic, which provides better privacy for the user. Google joins Apple in supporting queries for HTTPS records.
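To make the new query type concrete, the sketch below builds the wire-format DNS question a client could send when asking for an HTTPS record (type 65, defined in RFC 9460). It only constructs the packet bytes and does not send them; the transaction ID and domain name are arbitrary examples.

```python
import struct

# DNS record type 65 is "HTTPS" (RFC 9460); A is 1, AAAA is 28.
TYPE_HTTPS = 65

def build_dns_query(name, qtype, txid=0x1234):
    """Build a minimal wire-format DNS query for the given name and type."""
    # Header: ID, flags (0x0100 = recursion desired), 1 question, 0 answers,
    # 0 authority records, 0 additional records.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split(".")) + b"\x00"
    # Question: QNAME, QTYPE, QCLASS (1 = IN).
    return header + qname + struct.pack(">HH", qtype, 1)

packet = build_dns_query("example.com", TYPE_HTTPS)
```

A browser that receives an HTTPS record in response knows, before making any HTTP request, that the site supports HTTPS and can connect securely from the start.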

# Should tech companies below the content layer be moderating content?: Content moderation by internet infrastructure providers is returning as a topic of public debate. In recent years, multiple companies providing business-to-business (b2b) services to platforms and social media have made decisions that directly impact what content is available online. Recent examples range from Cloudflare’s 2019 choice to remove 8chan, to the recent decision of registrar GoDaddy to cease its services to a website that allowed individuals to target abortion providers. In a Techdirt piece, experts highlighted the political nature of infrastructure and the existing work of civil society to steer it towards the public interest.

# Bugs in our pockets: client-side scanning and trusting our devices: Responding to arguments that the spread of cryptography has hindered access to evidence and intelligence, some in industry and government now advocate a new technology to access targeted data: client-side scanning (CSS), which enables on-device data analysis. If targeted information is detected, its existence and potentially its source would be revealed to the agencies. Otherwise, little or no information would leave the client device. A new report argues that CSS neither guarantees efficacious crime prevention nor prevents surveillance, but instead creates serious security and privacy risks for all society — including the chilling effects produced by mass surveillance — while providing limited and problematic assistance for law enforcement.
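The core mechanism of client-side scanning can be sketched in a few lines. This is a deliberately simplified illustration: real CSS proposals match perceptual hashes of known images (which tolerate small changes to the content), not cryptographic hashes of raw bytes, and the target list here is hypothetical.

```python
import hashlib

# Hypothetical list of target digests held on the device. Real proposals
# use perceptual hashes of known material, not SHA-256 of raw bytes.
TARGET_DIGESTS = {hashlib.sha256(b"known-target-content").hexdigest()}

def scan_on_device(content: bytes) -> bool:
    """Sketch of client-side scanning: hash content locally and report
    only whether it matches an entry on the target list."""
    return hashlib.sha256(content).hexdigest() in TARGET_DIGESTS

# In principle, only this match/no-match signal leaves the device —
# but the report's point is that the scanner and its list can be
# repurposed to search for anything.
print(scan_on_device(b"known-target-content"))
```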

The academic paper comes at a critical time. It supports the points made by more than 90 civil society organizations, who requested that Apple halt its controversial proposal to detect child sexual abuse material on iPhones. Apple’s proposal included installing surveillance software on existing iPhones, iPads, and other Apple products, which would conduct on-device scanning. Institutions and organizations working in the public interest appear to have reached a consensus: client- or device-side content moderation techniques are dangerous for privacy.

The Center for Democracy & Technology’s Chief Technology Officer, Mallory Knodel, says, “When we sign up for a service or purchase a device, we might not realize that our devices can be irrevocably altered. Service providers, internet infrastructure companies, and now operating system developers are increasingly blurring the lines between servers and endpoints like user devices, with implications for what we can reasonably consent to, and the way we conceptualize our trust in our devices.”

# Bounce tracking and how it’s affecting privacy online: Privacy-preserving tools developed at the World Wide Web Consortium (W3C), such as third-party cookie blocking, aim to ensure that websites can’t collect personal information about users without their consent. But now, with “bounce tracking,” third parties can circumvent the cookie blocking built into many browsers and view users’ web traffic despite their attempts to keep it private.

Current implementations of anti-tracking tools generally still allow sites to store their own cookies in order to remember repeat or authenticated visitors, but ask sites to set time limits on the storage of those cookies. Sharing third-party cookies, which originate from parts of the web other than the site a user is visiting, is restricted so as to limit tracking and exploitation of user activity data. With bounce tracking, however, website A can follow a user’s move to website B: when the user clicks the link, the request is first redirected to an intermediary site that has been set up to track the user. Because the browser momentarily treats that tracking site as a first party, it accepts a first-party cookie carrying a unique identifier; the tracker records the user’s browsing information and then forwards them to the intended destination, website B.

The practice is currently difficult to stop because the scheme leverages first-party cookies, which browsers generally do not block; the transfer of the user’s browsing data to a third party is thus disguised as ordinary first-party activity.
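The redirect flow can be sketched as a small simulation. Everything here is a hypothetical stand-in: `tracker.example` and `site-b.example` are placeholder domains, and the cookie-jar dictionary stands in for the browser's real cookie store.

```python
import secrets

def bounce_hop(destination, cookie_jar, tracker="tracker.example"):
    """Simulate one bounce-tracking redirect: the user's click is routed
    through the tracker's domain before reaching the real destination."""
    # The tracker is momentarily the first party, so the browser accepts
    # its cookie; an existing cookie identifies a returning user.
    user_id = cookie_jar.setdefault(tracker, secrets.token_hex(8))
    # The tracker links the stable identifier to the destination,
    # then redirects the user onward.
    visit_log = (user_id, destination)
    return destination, visit_log

cookie_jar = {}  # domain -> cookie value, as the browser would store it
final_url, logged = bounce_hop("https://site-b.example/", cookie_jar)
# The user lands on site B as intended, but the tracker now holds a
# first-party cookie that will re-identify them on every future hop.
```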

Discussions on tracking prevention controls, and how to solve the problem of bounce tracking, are ongoing at the W3C.

# Civil society advocates encourage EU to pay better attention to technology standards: In a new paper, public interest technologists Amelia Andersdotter and Lukasz Olejnik leverage their experiences in the technical community to make a case for how Europe might better approach technical standards. They argue that it should develop a consistent and long-term strategy that is harmonized across the continent and the array of internet standards development organizations; strengthen the adoption of voluntary technology standards; and assert its values — including human rights — in standards fora.

Europe could, for instance, adapt “technical solutions to nominal, but abstract, legal requirements on communication technology providers in the field of lawful interception.” Legislators would not specifically regulate “any technical details, but through the participation of appropriate law enforcement authorities in standardisation processes… the technology [would still be] shaped by invocations of laws on the book.”

Co-author Amelia Andersdotter says, “EU technology policy is full of high-level rhetoric on European values, and low-level means of enforcement like prescriptive laws with high fines — but an entire middle layer that would translate between the two is missing. The EU doesn’t trust its citizens and NGOs to make this translation, or allow itself to benefit from developments in industry that have shown their merits. We believe the EU could address the gap between the intention and reality of its tech policy, and have created a roadmap for how it can go about that.”

# This month’s side note: While looking for possible reasons for the Facebook outage on October 4, Daniel Kahn Gillmor of the ACLU noted, “Without this failure I would never have noticed that their IPv6 authoritative nameserver addresses have 4 octets in them that spell out :face:b00c: in hex.” Well, now we know.