Last month, Annette Bernhardt, Lisa Kresge, and Reem Suleiman of the UC Berkeley Labor Center released an important report, Data and Algorithms at Work: The Case for Worker Technology Rights. The report’s publication is timely, with the Massachusetts legislature currently considering comprehensive information privacy legislation that includes strong worker protections, and most of the California Consumer Privacy Act’s (CCPA) employee protections set to take effect in 2023. The widespread expectation is that both companies and worker advocacy groups will propose workplace-specific legislation that would supersede the CCPA’s provisions, which were drafted primarily with consumers rather than workers in mind.
At the core of the report is a policy framework that lays out a combination of broad principles and specific proposed rules that could serve as the backbone of general legislation on electronic privacy in the workplace. At a high level, the framework operates by creating a general expectation that workers retain a base level of privacy in their data when they enter the workplace, and that employers must demonstrate a specific need before they can intrude upon a worker’s privacy by collecting worker data or engaging in electronic surveillance.
Even when an employer can show that data collection or surveillance is justified, employers must ensure that the data collection or surveillance is no broader than is necessary to achieve their legitimate objectives. Moreover, an employer must avoid collecting data or monitoring workers in ways that would harm workers, even if the collection or monitoring would otherwise be justified.
Data and Algorithms at Work is an invaluable addition to the growing ecosystem of scholarship and policy work in the worker privacy space. This blog post provides a brief overview of the principles and themes that run through the report’s policy framework.
Framework for Workplace Technology Rights
The framework, if adopted, would take a fundamentally different approach to workplace technology and privacy rights by flipping the set of expectations that currently exists around the use of technology in the workplace. Under present US law, nearly all the discretion and agency over the use of workplace technologies lies with the employer. A worker’s expectations of privacy are largely left at the workplace door; employers can collect their data and monitor their activities in the workplace without allowing workers to have access to those data, much less input into how those data are used. Likewise, companies generally can automate key employment decisions and delegate traditional managerial functions to algorithmic systems, so long as they do not run afoul of general laws such as those regarding employment discrimination, wages, hours, and working conditions.
Under the report’s framework, this paradigm would be reversed. Workers would have agency over their own data, including a right to access, correct, and download data that employers collect about them. Workers and labor organizations would have the right to organize and bargain around the use of data-driven technologies in the workplace and, to that end, unions would have access to “the information needed to fully understand the nature, scope, and effects of data-driven technologies used by the employer.” And employers would have to ensure that human decision-making remains at the core of key employment decisions and that workers understand and have an opportunity to challenge those decisions.
The framework would make this reversal concrete by imposing three sets of requirements for employers wishing to engage in data collection, monitoring, or other practices involving processing of worker data.
Requirement 1: Worker data collection and electronic monitoring can only be done when necessary to achieve a specific, legitimate purpose
The framework would limit both data collection and electronic monitoring (two workplace practices particularly antithetical to worker agency over workplace technologies) to an "as-needed" basis. That is, employers would be able to engage in those practices only when needed to achieve certain specified purposes, such as:
- Collecting data when it is “necessary and essential for workers to do their jobs”
- Engaging in electronic monitoring if it is "strictly necessary" to:
  - Enable core business tasks;
  - Protect the safety of workers; or
  - Comply with legal obligations
Even if an allowable purpose exists, employers would be able to engage in collection or monitoring only to the extent actually necessary to fulfill that purpose. Thus, for example, the framework states that "monitoring should affect the smallest number of workers possible, should collect the least amount of data necessary, and should be the least invasive means for accomplishing its purpose."
Requirement 2: Notice and disclosure
The framework would require employers to provide workers with notice of all data collection and electronic surveillance used in the workplace. If a consequential employment decision is informed by data collection or electronic surveillance, the employer would also need to provide “full documentation” regarding the basis for the decision and give workers an opportunity to challenge the decision.
These requirements are unqualified: workers would retain the right to be informed about, and to challenge, new workplace technologies regardless of the strength of an employer's need to deploy those technologies.
Requirement 3: Refraining from harmful practices
Even if an employer satisfied all the requirements listed above, it still would have to refrain from deploying workplace technologies in ways that would harm workers. The report lists a number of specific types of harm:
- Work intensification and health and safety harms (Note: CDT released its own report this summer on the health and safety harms of “bossware,” a term describing particularly intrusive worker surveillance and algorithmic management systems)
- Deskilling and job loss
- Lower wages and less economic mobility
- Contingent work
- Suppression of the right to organize
- Loss of privacy
- Loss of autonomy and dignity
- Threatening workers’ health or safety
- Making “irrelevant or unfair predictions about workers”
The framework also proposes that employers be required to conduct impact assessments of new workplace technologies to “evaluate the full range of potential harm to workers,” including discrimination and threats to workers’ health, safety, privacy, and economic position. If a practice is identified as harmful, the employer would either have to mitigate the harm or stop the use of the system.
This proposed framework is both simple and powerful. The framework’s general approach could easily be applied to the regulation of a wide variety of workplace practices. Indeed, the requirements listed above are reminiscent of some existing workplace legal standards, such as the elements of disparate impact discrimination under US law and the European GDPR’s approach to workplace data collection.
The framework would need greater specificity before it could serve as the basis for actual legislation or regulation. But, like the Civil Rights Principles for Hiring Assessment Technologies, the principles it articulates can help inform future civil society efforts to shape workplace technology policy.