Workplace Technology: Recent Policy News & Publications from Across the U.S.

NYC final rules on automated hiring/promotion tools make a flawed law even weaker

When New York City advanced a bill in 2021 governing bias audits for automated employment decision tools (AEDTs), advocates called on the City Council to strengthen the bill’s bias audit requirements and ensure that they applied to bias against any protected class under the city’s anti-discrimination laws. Instead, the city passed a narrower law that, in effect, simply affirmed that employers’ existing equal employment opportunity reporting requirements also apply when certain automated tools substantially assist in making employment decisions.

Nearly a year later, the NYC Department of Consumer and Worker Protection (DCWP) proposed implementing rules that further weakened the law's enforcement. CDT raised several concerns with DCWP's initial proposal: the proposed rules would not cover targeted recruitment ads; they would make it easy for employers to misrepresent the weight they assign to automated tools that significantly affect employment decisions; and candidates would not receive timely notice with the information necessary to determine whether to seek the disability-related accommodations they may be entitled to under the ADA.

DCWP’s revised proposal made some improvements, such as requiring that independent auditors have had no employment relationship with, or financial interest in, an employer or employment agency that uses an AEDT; that bias audits consider AEDTs’ impacts on intersecting categories (such as on Black female workers); and that audits disclose the sources of data used by AEDTs and the data used in the audits themselves. However, we found that the revised rules restricted the types of covered tools even more, disregarding tools that modify conclusions and focusing only on tools that overrule conclusions altogether.

In other words, if an employer can claim that they don’t rely exclusively on the tool to make an employment decision, the tool likely won’t be covered under the rules. The revised rules also allow an employer to rely on bias audits that use data from other users of the same AEDT, leaving it unclear how the impacts of the employer’s own use of the AEDT would be examined. And the revised rules still didn’t provide for timely notice to affected job seekers.

DCWP’s final rules leave these concerns unresolved. The rules will go into effect on July 5, 2023.

California business lobby seeks to block enforcement of privacy protection laws

The California Chamber of Commerce filed a lawsuit seeking to delay the California Privacy Protection Agency (CPPA) from enforcing the California Privacy Rights Act (CPRA), which the CPPA was set to enforce starting on July 1, 2023. The lawsuit essentially argues that because the CPPA missed the statutory deadline for finalizing its regulations, enforcement should be delayed to give businesses more time to prepare for compliance. As detailed in a CDT analysis from late last year, the CPRA is the most significant statute in U.S. history regarding workers’ data privacy rights. CDT will continue tracking the law’s implementation if and when enforcement of the law commences.

AI Now report highlights need for workplace tech policy to focus on structural changes

AI Now released its 2023 annual report on the AI landscape, titled Confronting Tech Power. A central theme of the report is that many proposed regulatory and accountability approaches to AI, which frequently center on disclosure and auditing requirements, fail to address the key harms associated with AI. The authors instead argue for greater use of bright-line rules that proscribe certain applications and uses of AI.

The report cites algorithmic management in the workplace as an area where such bright-line rules are most needed, given the manner in which companies use automated management systems to widen existing power imbalances in the workplace and labor market. Given those imbalances, policy approaches that depend solely on disclosure and auditing are inadequate; “[t]elling workers that their employer has used algorithmic targeting to systematically lower their wages is no substitute for enacting rules to ensure wages are set fairly and at amounts workers can live on in the first place.” The report also highlights worker organizing efforts as “a core mode of tech accountability” and thus urges policy frameworks that “provide for collective, not just individual, rights.” 

AI ethics experts publish detailed overview and critique of automated workplace surveillance

In March, AI ethics experts Merve Hickok and Nestor Maslej published an article that provides an overview of the international policy landscape for automated workplace surveillance and productivity monitoring systems. The article is impressive in scope; in addition to reviewing the existing tools and policy frameworks, it notes historical predecessors to modern automated surveillance (such as Taylorism) and discusses the ways in which automated surveillance undermines workers’ rights and dignity.

The article highlights the ways in which AI-powered surveillance tools widen existing power imbalances in the workplace. Like the AI Now report, the authors favor drawing bright lines where algorithmic systems violate fundamental rights and dignity, arguing that such tools “should not be legitimized by principles or through the use of risk management systems. They should not be used in the first place.” The article closes with a proposed policy “roadmap,” including a number of specific regulatory actions as well as a call for unions to “build better internal capacity” to organize and hold companies accountable for harmful surveillance practices.

Data & Society publishes two reports on workplace technology

Data & Society also published two reports in recent months exploring workplace technology themes. Both point to how workers’ privacy interests can come into tension with other interests of workers and consumers. 

The first report, At the Digital Doorstep, released in the fall of 2022, examines the phenomenon of e-commerce customers using networked doorbell cameras to monitor delivery drivers. Such customers increasingly engage in “boss behavior,” having “personalized expectations of service that range from polite requests to uncomfortable demands unrelated to the delivery worker’s primary job.” Even where customers view their expectations as mere preferences, “drivers—whose jobs rely on maintaining a high standing—understand them as instructions that have to be followed.”

The other report, Essentially Unprotected, examines essential workers’ experiences with health data and surveillance during the COVID-19 pandemic. The report was launched with a live event where the authors presented their key findings. One notable takeaway from the report is how workplace privacy can sometimes be in tension with workers’ other interests—in this case, the strong interest in knowing who in a workplace might have been exposed to a deadly disease that they might pass on to their coworkers. 

The report noted the difficulties that both workers and employers have had in navigating these issues during the pandemic. It also found that while most employers did not use the pandemic as an excuse to expand intrusive worker surveillance programs, Amazon was a notable exception, introducing a variety of tools ostensibly motivated by health and social-distancing goals that deepened the extensive surveillance to which Amazon already subjected its workers. This resulted in a “blurring of boundaries between health data and productivity data.”