CDT Generative AI Usage Policy

Generative AI aggregates human knowledge and makes it accessible in unprecedented ways. If used carefully and strategically, it could be a powerful tool, allowing us to work more efficiently, be more creative, and reach new audiences with our work. If used carelessly, though, it could degrade the quality of our work, disempower our staff, and lead us to inadvertently plagiarize others. In order to gain the benefits of generative AI while avoiding its pitfalls, we have created this Generative AI Usage Policy to provide guidance to our staff, external vendors, and the public about how our organization will and will not use generative AI systems.

Generally speaking, we are open to and interested in our staff experimenting with generative AI. We want to do our work as well as possible, and generative AI may help us do that. In addition, experimentation can improve our understanding of the technology and thereby strengthen our advocacy about AI. However, we also want to safeguard against the temptation to use generative AI tools to produce low-effort, low-quality work, and to ensure that we use this technology in a way that is consistent with our values as an organization. Furthermore, we remain dedicated to being a human-first organization. We want to ensure that we use generative AI to support our staff in doing their jobs, not to replace them; accordingly, we will never require staff to use generative AI to complete their work.

In this policy, we distinguish between original works, in which we express new content and ideas, and derivative works, in which we summarize our own original works for different contexts or audiences. With original works, we more strictly limit our use of generative AI, since the risks posed by inadvertent plagiarism, factual misstatements, and bland prose are higher. With derivative works, we are more open to teams using unaltered or minimally altered (though never unreviewed) outputs from generative AI systems, so long as the input primarily comes from original CDT work. This distinction reflects what we see as limitations in current technology. As generative AI evolves, we will continue to update this policy to reflect its capabilities and potential use cases.

In producing original works (e.g., reports, amicus briefs, public statements), we may use generative AI as:

  • A brainstorming tool, to find inspiration, generate starter ideas, or overcome fear of the blank page.
  • A research tool, to find new sources and summarize information. The onus remains on the author to ensure the accuracy of these outputs.
  • An editor, to improve or condense the phrasing of drafted text.
  • A super-charged thesaurus, to find the right word or phrase to express an idea or concept.
  • A rote worker, to complete routine tasks such as formatting citations or adhering to style guidelines.
  • A tool to create information visualizations. Any AI-generated visualization will always be explicitly credited as such.

In producing derivative works (e.g., social media posts, newsletters, grant reports), we may also use generative AI to:

  • Create short descriptions or summaries of our original work for outputs such as campaign summaries and solicitation materials.
  • Suggest report titles, article headlines, or text for social media content describing our work.
  • Create slideshows and other presentation materials for our work.

CDT staff may also explore using generative AI for CDT’s internal processes, such as meeting notes summarization or internal document organization.

Any uses of generative AI to produce external-facing CDT outputs beyond those mentioned above will be considered on a case-by-case basis. At a minimum, such uses must be discussed with and approved in advance by a manager, and outputs should be appropriately labeled as AI-generated. This includes AI-generated illustrations, which should not substitute for the work of a human illustrator except in rare cases.

Externally or internally, we will adhere to certain universal practices in all of our uses of generative AI:

  • CDT staff are fully responsible for the quality and accuracy of their work product. We expect staff to thoroughly review all AI-generated content they use in any outputs, including verifying factual statements and citations.
  • Contractors who work with CDT must comply with this policy. In addition, we will ask contractors to document and disclose to CDT staff how they used generative AI in their work so we can ensure their compliance with our policy.
  • We will not input private, sensitive, or confidential information into any generative AI system absent specific guarantees regarding the system’s privacy and data security practices that have been reviewed and approved by management. This includes any data collected through human subjects research (surveys, interviews, etc.) or information provided under NDA. As a rule of thumb, any information that would violate confidentiality obligations or cause embarrassment or reputational harm to CDT if it were publicly released should not be entered into a generative AI system.

This Generative AI Usage Policy v1.0 was published on February 21, 2024. If this policy is updated, links to previous versions will be available on this page. A permanent link to this version of the policy is available: https://perma.cc/BEQ6-8RL8.