

Technology as Policy: Hidden Rules and How to Reveal Them

Jenny L. Davis, Professor of Sociology, Vanderbilt University

Disclaimer: The views expressed by CDT’s Non-Resident Fellows are their own and do not necessarily reflect the policy, position, or views of CDT.

I know we’ve moved on and a new contest looms, but for a moment, revisit the June ’24 Trump-Biden debate. Put aside the mistruths and political fallout; that story is stale and already overtold. Recall instead the anomaly of the night: a remarkable adherence to rules and procedures. When Trump and Biden debated in 2020, they interrupted each other 96 times. In 2024, nobody interjected and everyone waited their turn. This disciplined display is likely the new standard. Yet its cause can’t be traced to evolutions of conscience or political culture. Rather, credit goes to a simple audio adjustment.

In a full implementation of a tactic first trialed in the previous US election cycle, the candidates’ microphones were functional only during their designated speaking periods. This meant just one speaker was audible at a time. In place of candidate compliance or moderator enforcement, regulation was outsourced to a mute button. The same was true when Trump and Harris took the stage.

This mute-button mandate illustrates a basic insight from scholars of technology in society: technology itself is policy. Technology-as-policy refers to the way technologies, through their design, shape social behaviors and outcomes. Written rules and regulations codify what can, should, must, and cannot be done. Technologies do the same through their material form: preventing, confining, persuading, and compelling.

The Transparency Problem

Framing technology as policy highlights the power of technical systems over social life. But the framing also surfaces a transparency problem: as policy instruments, technologies are imprecise and inscrutable.

Imagine a scenario in which policies were unwritten and implicit, regulating people and organizations through subtle mechanisms of control. Imagine this control diffusing throughout societal spheres, affecting economic relations, political processes, domestic mundanities, and intimate interactions. Imagine codes cast upon publics who had, at best, a vague sense of those impositions, which remained tacit and thereby difficult to pinpoint, let alone contest. Technologies inflict this kind of quiet control every day, structuring our lives without stating what rules are in play, whose values they reflect, or how they configure social practices.

Clearing the Fog: An ‘Affordance’ Approach

Over nearly a decade, I’ve been constructing and applying a framework to render technical systems more transparent and understandable. That work centers on the concept of ‘affordance’, or how technologies enable, constrain, and set parameters of possibility. I wrote a conceptual paper and a book on the topic, and most recently applied an affordance lens to machine learning (ML) technologies. Across sectors, but especially in those that are hardest to penetrate due to technical and social opacities, specifying affordances is a political project of elucidation.

My book, How Artifacts Afford: The Power and Politics of Everyday Things, presents the “mechanisms and conditions framework of affordances.” This framework operationalizes the relationship between technical features and social effects. The mechanisms of affordance are represented by a simple series of descriptors: request, demand, encourage, discourage, refuse, and allow. These indicate varying degrees of force exerted by a technology. For example, muted microphones on a debate stage demand adherence to turn-taking, whereas always-on mics allow interruption. These mechanisms are not absolute, but conditioned by social variables. For instance, a booming voice that carries without amplification diminishes the demand to a request, unless rule violations are harshly penalized (e.g., disqualification), in which case the demand holds regardless of vocal capacities. (This information is also shared in an explainer video.)
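For readers who want the framework in more concrete terms, the mechanisms and conditions can be sketched as a small data model. This is a minimal illustration only: the six values come straight from the book’s vocabulary, but the effective_mechanism function and its parameters are hypothetical names invented here, not part of the framework itself.

```python
from enum import Enum

class Mechanism(Enum):
    """The six mechanisms of affordance, varying in the force they exert."""
    REQUEST = "request"
    DEMAND = "demand"
    ENCOURAGE = "encourage"
    DISCOURAGE = "discourage"
    REFUSE = "refuse"
    ALLOW = "allow"

def effective_mechanism(base: Mechanism, *, booming_voice: bool = False,
                        harsh_penalty: bool = False) -> Mechanism:
    """Conditions modulate mechanisms: a voice that carries without
    amplification softens a muted mic's demand for turn-taking into a
    request, unless violations are harshly penalized (e.g., disqualification),
    in which case the demand holds. Hypothetical conditions for illustration."""
    if base is Mechanism.DEMAND and booming_voice and not harsh_penalty:
        return Mechanism.REQUEST
    return base

# The debate-stage example from the text:
assert effective_mechanism(Mechanism.DEMAND, booming_voice=True) is Mechanism.REQUEST
assert effective_mechanism(Mechanism.DEMAND, booming_voice=True,
                           harsh_penalty=True) is Mechanism.DEMAND
```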

Warehouse Work: A Case Example

My most recent work applies the mechanisms and conditions framework to machine learning (ML) technologies and the systems of which they are a part. This work considers what kinds of policies ML technologies enact by design. Elucidating ML is an urgent matter, as these technologies are increasingly integrated into personal and public life, yet remain murky in their operations due to impenetrable math and proprietary protections.

Against this opacity, the mechanisms and conditions framework interrogates and articulates how ML applications reflect and affect socio-cultural patterns, for whom, and to what effect. Let’s run through an example of data-driven warehouse work, borrowed from my 2023 paper on affordances for ML. This example is based on Amazon’s fulfillment centers, which act as anchor points for today’s on-demand consumer economy.

As documented by journalists and researchers, Amazon warehouse work is fast-paced, tightly controlled, closely observed, and data-driven. Machine learning underpins nearly all workplace procedures. Data from physical sensors, digital monitors, GPS trackers, product scanners, customer ratings, and management evaluations, among many other sources, feed through ML algorithms that set the pace of work, break that work into granular pieces, dictate workers’ movements, schedule workers’ shifts, and hire or fire workers according to productivity metrics paired with fluctuations in supply and demand.
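To make that pipeline concrete, here is a deliberately simplified sketch of how worker telemetry might flow into a rate-setting model. Every class name, signal, and threshold below is invented for illustration; the real systems are proprietary and far more elaborate.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class WorkerTelemetry:
    """Hypothetical per-shift signals of the kind described above."""
    scans_per_hour: list[float]  # product-scanner throughput
    idle_seconds: float          # "time off task" inferred from sensors and GPS
    customer_rating: float       # downstream service score

def set_target_rate(fleet: list[WorkerTelemetry], demand_multiplier: float) -> float:
    """Derive a demanded pace of work from fleet-wide throughput and
    consumer demand. Whatever the model internals, the output functions
    as policy: a rate every worker must meet."""
    fleet_rate = mean(mean(w.scans_per_hour) for w in fleet)
    return fleet_rate * demand_multiplier

def flag_for_termination_review(w: WorkerTelemetry, target_rate: float) -> bool:
    """Productivity metrics paired with demand fluctuations decide who stays.
    The 1800-second idle threshold is an invented example."""
    return mean(w.scans_per_hour) < target_rate or w.idle_seconds > 1800
```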

The mechanisms and conditions framework clarifies how ML sets workplace policies. For example, GPS tracking encourages worker surveillance while feeding into data-hungry ML models. Those models in turn demand compliance with automated directives. These directives deskill warehouse work and thus allow management to replace any worker who does not fit corporate priorities. Data-driven rate-setting requests a standard speed of physical movement, discouraging (or refusing) bodies that are ill, disabled, aging, or tired. Management, too, becomes data-dependent and disposable, affixed to systems that demand attention to metrics, discourage professional discretion, and request commitment to corporate goals, even when those goals conflict with their own and their colleagues’ interests.
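Reusing the hypothetical Mechanism enum from the earlier sketch, the paragraph above condenses into a feature-by-feature annotation, the kind of explicit techno-policy table the framework is meant to surface:

```python
# Assumes the Mechanism enum from the earlier sketch is in scope.
# Each warehouse feature paired with the mechanism it exerts and on whom or what.
warehouse_policies = {
    "gps_tracking":          (Mechanism.ENCOURAGE,  "worker surveillance"),
    "automated_directives":  (Mechanism.DEMAND,     "compliance"),
    "deskilled_tasks":       (Mechanism.ALLOW,      "replacement of any worker"),
    "rate_setting":          (Mechanism.REQUEST,    "a standard speed of movement"),
    "rate_setting_at_edges": (Mechanism.DISCOURAGE, "ill, disabled, aging, or tired bodies"),
    "management_dashboards": (Mechanism.DEMAND,     "attention to metrics"),
}

for feature, (mechanism, target) in warehouse_policies.items():
    print(f"{feature}: {mechanism.value}s {target}")
```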

The policies of Amazon warehouse work are technologically instilled, with ML models processing data into decree. Yet the shape of things can change. Just as the mechanisms and conditions framework reveals what is, it can also suggest alternatives. Drawing from US workers’ union movements, we might reimagine warehouse conditions in which only products, but never people, are tagged and tracked, demanding the monitoring of goods while discouraging infrastructures of surveillance; we might recalibrate work rates, allowing bodily diversity while encouraging employee wellness; and we might envisage scheduling algorithms that incorporate worker needs, setting shifts in advance while offering live swaps between workers who need personal time and those who want paid hours, requesting a balance of stability and flexibility for whole human beings with competing obligations.
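The scheduling alternative lends itself to a sketch as well. Below is a minimal, hypothetical matcher that pairs workers who want a shift covered with workers who want extra paid hours; a real system would add advance notice, hour caps, and safety rules.

```python
from dataclasses import dataclass

@dataclass
class SwapRequest:
    worker: str
    shift: str         # e.g., "Sat night"
    wants_cover: bool  # True: needs personal time; False: wants paid hours

def match_swaps(requests: list[SwapRequest]) -> list[tuple[str, str, str]]:
    """Pair shift-givers with shift-takers as (shift, from_worker, to_worker).
    Scheduling here requests balance rather than demanding compliance."""
    givers = [r for r in requests if r.wants_cover]
    takers = [r for r in requests if not r.wants_cover]
    return [(g.shift, g.worker, t.worker) for g, t in zip(givers, takers)]

# Example: one worker needs time off, another wants the hours.
swaps = match_swaps([
    SwapRequest("Ana", "Sat night", wants_cover=True),
    SwapRequest("Ben", "Sat night", wants_cover=False),
])
print(swaps)  # [('Sat night', 'Ana', 'Ben')]
```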

Beyond the Warehouse Walls

Algorithmic governance of warehouse work is but one case among many; think dating apps, résumé sorting, insurance approvals, and domestic bots. Across social spheres, the mechanisms and conditions framework bolsters transparency, surfacing techno-policies so they can be scrutinized, challenged, reimagined, and remade. It is a general-purpose framework, ready for application across diverse domains.