The 2024 election cycle saw artificial intelligence deployed by political campaigns for the first time. While candidates largely avoided major mishaps, the technology was used with little guidance or restraint. Now, the National Democratic Training Committee (NDTC) is rolling out the first official playbook making the case that Democratic campaigns can use AI responsibly ahead of the midterms.
In a new online training, the committee has laid out a plan for Democratic candidates to leverage AI to create social content, write voter outreach messages, and research their districts and opponents. Since NDTC’s founding in 2016, the organization says, it has trained more than 120,000 Democrats seeking political office. The group offers virtual lessons and in-person bootcamps training would-be Democratic politicians on everything from ballot registration and fundraising to data management and field organizing. It is aiming the AI course largely at smaller campaigns with fewer resources, seeking to empower what could be five-person teams to work with the “efficiency of a 15 person team.”
“AI and responsible AI adoption is a competitive necessity. It’s not a luxury,” says Donald Riddle, senior instructional designer at NDTC. “It’s something that we need our learners to understand and feel comfortable implementing so that they can have that competitive edge and push progressive change and push that needle left while using these tools effectively and responsibly.”
The three-part training includes an explanation of how AI works, but the meat of the course revolves around possible AI use cases for campaigns. Specifically, it encourages candidates to use AI to draft text for a variety of platforms and purposes, including social media, emails, speeches, phonebanking scripts, and internal training materials, so long as humans review the content before it is published.
The training also points out ways Democrats shouldn’t use AI, and discourages candidates from using AI to deepfake their opponents, impersonate real people, or create images and videos that could “deceive voters by misrepresenting events, individuals, or reality.”
“This undermines democratic discourse and voter trust,” the training reads.
It also advises candidates against replacing human artists and graphic designers with AI to “maintain creative integrity” and support working creatives.
The final section of the course encourages candidates to disclose AI use when content features AI-generated voices, when it comes off as “deeply personal,” or when AI is used to develop complex policy positions. “When AI significantly contributes to policy development, transparency builds trust,” it reads.
For Hany Farid, a generative AI expert and UC Berkeley professor of electrical engineering, these disclosures are the most important part of the training.
“You need to have transparency when something is not real or when something has been wholly AI generated,” Farid says. “But the reason for that is not just that we disclose what is not real, but it’s also so that we trust what is real.”
When using AI for video, the NDTC suggests that campaigns use tools like Descript or Opus Clip to craft scripts and quickly edit content for social media, stripping video clips of long pauses and awkward moments.