

New WGA Labor Agreement Gives Hollywood Writers Important Protections in the Era of AI

To follow up on my previous post: my #15 molar is gone (long story), and so is the first of the two Hollywood strikes that drew the attention of entertainment industry watchers and workers’ advocates alike this summer.

The Writers Guild of America (WGA) announced that it has reached an agreement on the terms of a new contract with the major Hollywood production studios. That contract contains what may be the first major union/management agreement regulating the role of artificial intelligence (AI) across a particular industry.

The type of AI at issue during the writers’ strike is generative artificial intelligence (the contract shortens this to “GAI,” so I will use that abbreviation here), which the proposed WGA contract describes as “a subset of artificial intelligence that learns patterns from data and produces content, including written material, based on those patterns, and may employ algorithmic methods (e.g., ChatGPT, Llama, MidJourney, Dall-E).” 

Hollywood writers were worried that studios would use GAI to drive down wages by depriving them of creative assignments and writing credits. The proposed agreement—which has been submitted to WGA members for ratification—greatly circumscribes the role that GAI can play in the writing process.

Here are the key features of the agreement as it relates to AI, as laid out in the official WGA summary, along with my comments (WGA summary in the bulleted items, my comments in the regular paragraphs that follow):

  • AI can’t write or rewrite literary material, and AI-generated material will not be considered source material under the [agreement], meaning that AI-generated material can’t be used to undermine a writer’s credit or separated rights.

So, for example, even if a studio or an individual screenwriter uses GAI to generate an idea for a new film, credit for coming up with that story cannot go to the AI system or whoever developed or deployed it. The story’s genesis, as far as the contract is concerned, will instead start no earlier than the first time a human has creative contact with it. This is important for writers because receiving credit only for writing a screenplay usually entails a lower level of compensation and legal rights than receiving credit for both the story and the screenplay.

  • A writer can choose to use AI when performing writing services if the company consents and provided that the writer follows applicable company policies, but the company can’t require the writer to use AI software (e.g., ChatGPT) when performing writing services. 
  • The Company must disclose to the writer if any materials given to the writer have been generated by AI or incorporate AI-generated material.

As described in Eryk Salvaggio’s commentary on the agreement in Tech Policy Press, this gives writers the bulk of authority (who decides whether/when AI is used) and agency (who decides how AI is used) in the screenwriting process.

If writers choose to use GAI, they must follow studio policies. But while the contract says that a company cannot require the use of AI or dictate how a writer uses it, it is not clear how a studio could enforce a policy prohibiting writers from using AI, given how easily available textual GAI tools have become.

  • The WGA reserves the right to assert that exploitation of writers’ material to train AI is prohibited by [the agreement] or other law.

The agreement punted for another day the issue of whether studios (or others) can use writers’ existing written materials to train GAI systems. Since production studios are increasingly part of sprawling media conglomerates that also have music and print publishing rights—and given the proliferation of GAI tools that train on Internet-accessible information—one could easily imagine this issue arising at some point in the not-too-distant future. 

However, the WGA may have chosen not to prioritize potential downstream opportunities and threats to writers’ rights because the contract secured the writers against the more imminent threat: that studios would use AI to directly undercut writers’ credits and compensation. Moreover, as the agreement states, “the legal landscape around the use of GAI is uncertain and rapidly developing,” and both the writers and studios may not have wanted the contract to delve into an area with significant legal uncertainty.

The issues at stake in the remaining Hollywood strike—the SAG-AFTRA strike involving actors and other performance artists—differ somewhat from those covered by the writers’ strike. If media reports are accurate, SAG-AFTRA’s concerns center more on unauthorized use (and reuse) of actors’ likenesses, voices, and so forth—using technology to repurpose something that already exists, in a sense, rather than using it to create something new. Impressionistically, actors start with a stronger hand on their AI issues than the writers did on theirs, since commercial use (or reuse) of a person’s likeness is generally unlawful absent specific permission from that person.

There is no such clear, preexisting legal prohibition against, say, a studio using AI to write a screenplay. Against that legal backdrop, the WGA agreement includes an impressive amount of protection for screenwriters in the face of a new wave of automation—and it will be interesting to see whether SAG-AFTRA is able to secure equal or greater protection for its members.

As commentators have noted, the WGA agreement could serve as a model for future labor agreements involving GAI and automation more generally. At the very least, the resolution of the WGA strike undercuts the veneer of inevitability that often seems to surround discussions of workplace automation, and shows that collective bargaining can be an effective means for workers to advance their interests when automation threatens their livelihoods.