Some big firms with the power to harness generative AI tools at scale in 2024 are treading lightly for fear of breaching the AI companies’ rules on political usage and facing bans as a result.
Speaking at a panel hosted by The George Washington University’s School of Media and Public Affairs in partnership with C&E on March 26, GOP consultant Alisa Brady described her firm’s use of tools like Meta’s Llama 2 as throwing “spaghetti at the wall.”
“We’ve had a lot of success because we’ve thrown some spaghetti at the wall and it has stuck,” said Brady, a managing director at Republican firm Targeted Victory. “And then we’ve iterated from there.”
Despite the experimental nature of the firm’s engagement with generative AI tools for tasks like creating content for fundraising emails or ads, it’s still following the platform companies’ usage policies.
“We’re extremely aware of what the landscape [is] — developer by developer, model family by model family, platform by platform,” she said. “Making sure that we are where we need to be for this election cycle so we aren’t creating any undue risk for our clients or ourselves.”
Other shops on the right have also been using AI tools internally to help iterate versions of content.
“Basically to learn the voice, and so that way we can generate that volume,” Courtney Weaver, an EVP at GOP shop IMGE, said of her shop’s AI experimentation.
“I think with AI being able to give us that edge of volume and quickness will prove to be really interesting in terms of win or loss — how did that impact overall fundraising input or output, or just the amount of creative that we’re getting out?” she added at C&E’s Reed Awards & Conference in Charleston, SC on March 21.
But there’s another reason why Targeted Victory’s team doesn’t want to run afoul of one of the generative AI providers: they’re all talking to each other.
“They’re reporting cross platforms,” Brady said during the panel. “I didn’t want to get blacklisted on OpenAI’s platform and then they report me to Meta and I can’t use Llama 2, or something like that.”
In fact, it’s that cross-platform reporting that Crystal Patterson of Washington Media Group believes will help curtail nefarious activity from disinformation campaigners this cycle.
“I do think the social media companies are best positioned to see what’s coming,” said Patterson, who spent nearly eight years on Facebook’s DC policy team.
Moreover, if generative AI is used out-of-bounds this cycle, it will likely be by a small actor in the space without as much to lose, predicted Patterson, pointing to the AI-generated voice of President Biden used in a New Hampshire robocall in February that was traced back to a New Orleans-based magician.
“I’m sure we’ll see some crazy stuff,” she said.