

Privacy Professionals Reflect on the GDPR One Year Later

It has been roughly one year since the European Union's General Data Protection Regulation (GDPR) took effect, and privacy professionals around the world are still grappling with its impact. Recently, the CDT Europe team participated in two events geared toward privacy professionals that examined what has been accomplished under the GDPR and what gaps remain. Here are some of the highlights from those events.

“State of the art” solutions in “privacy by design”

The first event was a workshop held in Rome, hosted by the European Data Protection Supervisor (EDPS) through the Internet Privacy Engineering Network (IPEN) and focused on “state of the art” solutions in privacy by design.

The principle of privacy by design, enshrined in Article 25 of the GDPR as “data protection by design and by default,” has challenged businesses and developers. On one hand, the principle is intended to safeguard data subjects, because it requires data controllers to choose the “better” standard rather than the cheaper one. On the other hand, businesses are struggling with several questions: Who decides what the “state of the art” is? Who is responsible for that choice? Not all data controllers are tech giants, or even tech-savvy; most are non-tech small- and medium-sized enterprises that often rely on external service providers for their information technology (IT) systems. Larger companies can depend on their data protection officers (DPOs) to choose trustworthy and professional IT service providers. As implementation of the GDPR continues, it is clear that the EDPS will need to look for solutions that fit smaller firms as well.

Anonymisation and pseudonymisation 

Privacy by design solutions often include the use of anonymisation and pseudonymisation techniques. In some cases, however, they are not the right answer. Where the methods adopted are not secure enough, an entire dataset can be easily de-anonymised; that is what happened with the Netflix Prize dataset released in 2006, which researchers later re-identified by cross-referencing it with public IMDb ratings. On the other hand, if a dataset is anonymised too aggressively, the resulting data may be useless. It is therefore necessary to strike the right balance between the degree of anonymisation and the residual utility of the data, as highlighted by Maurizio Naldi, Associate Professor of Computer Science at the University of Rome Tor Vergata. The choice of technique should not follow a top-down approach, however. “Data controllers have to engage stakeholders, citizens, people,” said Professor Mantelero of the Politecnico di Torino.
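The distinction matters in practice: pseudonymised data remains personal data under the GDPR, because whoever holds the key can re-link it, while quasi-identifiers left in the clear can still enable re-identification, as the Netflix case showed. A minimal sketch in Python, with hypothetical field names and a placeholder key, illustrates keyed pseudonymisation of a direct identifier:

```python
import hmac
import hashlib

# Placeholder key for illustration only; in practice it would be stored
# separately from the dataset (e.g., in a key management service),
# since anyone holding it can re-link pseudonyms to identities.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike a plain hash, a keyed hash resists dictionary attacks on
    low-entropy inputs such as email addresses, but the output is
    still pseudonymous, not anonymous: the key holder can re-link it.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical record with one direct identifier and two quasi-identifiers.
record = {"email": "alice@example.com", "age": 34, "postcode": "00185"}
record["email"] = pseudonymise(record["email"])

# The email is now a stable pseudonym, but age and postcode remain in
# the clear, and in combination they may still permit re-identification.
print(record)
```

Stripping or coarsening those remaining fields raises the level of anonymisation, at the cost of the residual utility Naldi described.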

The GDPR is a principles-based regulation and does not provide a magic to-do list. Regulators, DPOs, and privacy professionals therefore have to develop good practices and guidelines as the tech industry and its business models evolve.

Some companies have even started to see the GDPR as a selling point rather than a weakness. Some, for example, decentralise ad matching by performing it on users’ devices: they still sell advertising, but the matching information is never shared with third parties, and the user sees only twenty ads per day. In exchange, these companies compensate users for their attention. The idea of rewarding users in exchange for consenting to be targeted by ads needs further analysis, however.
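For readers who want the architecture made concrete, here is a minimal sketch of what on-device ad matching could look like. The catalogue format, scoring, and daily cap are illustrative assumptions rather than any particular company’s implementation; the point is that the interest profile never leaves the device.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    ad_id: str
    keywords: set

# Hypothetical catalogue, downloaded in bulk so the ad network cannot
# infer interests from which individual ads were requested.
CATALOGUE = [
    Ad("a1", {"travel", "hotels"}),
    Ad("a2", {"cycling", "outdoors"}),
    Ad("a3", {"cooking", "recipes"}),
]

DAILY_AD_CAP = 20  # the per-day limit mentioned above

def match_ads(profile: set, catalogue: list) -> list:
    """Rank ads by keyword overlap with the on-device interest profile."""
    scored = [(len(ad.keywords & profile), ad) for ad in catalogue]
    return [ad for score, ad in sorted(scored, key=lambda s: -s[0])
            if score > 0][:DAILY_AD_CAP]

# The interest profile is built locally (e.g., from browsing history)
# and never transmitted; only ad impressions would be reported back.
profile = {"cycling", "travel"}
for ad in match_ads(profile, CATALOGUE):
    print("show:", ad.ad_id)
```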

Rules and the data market

Consent was one of the topics discussed at another panel, on “rules and the data market,” moderated by CDT. The event was hosted by the privacy lawyer Rocco Panetta, IAPP Country Leader for Italy, as part of Legal Community Week in Milan. According to Panetta, consent to data processing should not imply that data is an asset that data subjects own and can give away. The GDPR requires that consent be freely given, and offering remuneration in exchange for it could undermine that freedom. On the other hand, one could argue that this kind of business model highlights how valuable data is to advertisers, and could help raise users’ awareness that most of the web services they use “for free” are not free at all.

What emerged from the panel was that concerns about some emerging technologies, such as artificial intelligence and facial recognition, could slow consumer adoption if they are not addressed. CDT polled the audience of privacy professionals to gauge their confidence in facial recognition technology and found that using it to assign social scores, as is happening in China, generally received a negative response. By contrast, when the same technology is used to unlock a device, with the assurance that the biometric data is stored on the device and never registered with or transmitted to an external server, the audience was far less reluctant to accept it.

There was general consensus that many of these technologies can be invasive and deceptive. Location data, for example, is easily put to malicious use, and ad controls are generally too complicated or too well hidden for users to manage, which underscores the need for more privacy-by-design technologies.

As with many principles-based laws, there is quite a bit left to interpret and analyse before we can reach a conclusion on the efficacy of the GDPR. It was positive to see that more and more companies view privacy and privacy by design as business differentiators, but without question there remain gaps in providing strong privacy protections to consumers. CDT will continue to advocate for stronger privacy protections globally, and is leading the way on comprehensive privacy legislation in the United States, with the goal of establishing clear uses of data that should always be off-limits. The GDPR has clearly been a positive first step, but there is more work to do.