Since the first accessible artificial intelligence chatbots hit the public six months ago, machine learning has taken over the national conversation with the relentless pace of a Terminator chasing Sarah Connor. Will generative AI prove to be transformative technology, a flash in the pan, or the first step on a road to dystopia and doom? We’ll find out soon enough.
Naturally, political professionals are already applying generative AI tools like ChatGPT to their work. Some firms are building specialized platforms to enable them to employ AI at scale, while others are incorporating it into staff’s personal work processes as another source of text or ideas. But as AI infiltrates the political consulting and lobbying worlds in the months to come, will customers pay for the privilege of having someone else use a chatbot to crank out press releases on their behalf?
The Political Promise of Artificial Intelligence
Back in April, we ran through a bunch of potential uses for AI in politics, both for electoral campaigns and for political advocacy. Chatbots can already crank out text for fundraising emails, press releases and even work plans for anyone who sets up an account and types in the right query. Other AI tools have been helping target our ads and identify voters with specific demographic or behavioral characteristics for some time.
I’ve learned recently about more edge-case approaches, such as feeding survey data into a bot to enable it to build virtual voter models. These would then allow us, at least in theory, to ask the AI to “get in character” before it reviews content or answers polling questions. If AI could take the first pass at evaluating a few hundred potential Facebook posts before they go live as ads, for example, campaigns and advocates could save time and money spent on their early rounds of A/B testing.
But as we also discussed weeks ago, the current generation of public AI tools is far from perfect. From persistent “hallucinations” to garbage-in/garbage-out problems with the ways the platforms are “trained,” chatbots can fool people who use them unwarily.
At least for now, we need to check an AI’s work against other sources — and against political common sense. Polling an AI’s “mental model” of a voter niche may turn out to predict actual human behavior pretty well, but it may give answers wrong enough to screw up your messaging. The best part? You may not know the difference without something else to compare the results to.
More Barriers to Political AI
Political AI faces a few more challenges in its attempt to dominate the business of politics. Publicly available chatbots like ChatGPT and Google’s Bard hog the spotlight, but they’ve typically been trained on data sets including vast amounts of copyrighted material, potentially ensnaring the platforms in litigation right when you need them. They may be off-limits for their own reasons, too. ChatGPT owner OpenAI recently updated its Terms of Service to prohibit political campaign activity, at least in bulk. As a result, the company slapped down DC political vendor FiscalNote for including ChatGPT in its advertising and marketing.
As Authentic Campaigns founder Mike Nellis reminded me, AI projects built on public platforms will always run the risk that the rules could change faster than you can say “dead business model.” Remember when Facebook decided that no new political ads could run in the week before an election, or Google informed us that we couldn’t use voter-file targeting? Something similar could happen if you build a tech stack on a machine-learning foundation you don’t control.
Embracing the Machine, With Care
At Quiller.ai, Nellis is building content-generation tools for Democratic campaigns and organizations based on machine-learning technology licensed from a provider and set up uniquely for the project. His team has fed the system curated content such as fundraising emails, text messages and social media posts to prime it to produce text that reflects how the left talks, works, and communicates with donors and voters. They’ll also include safeguards for information like internal polls and memos that users put into the system when they’re asking it to create content tailored for their political races.
Henri Makembe of Do Big Things suggests that such private AI environments may become as standard a political business tool as cloud storage. Perhaps relatively soon, “just like you have a private cloud, you’ll have a private AI” that incorporates your own privacy and security safeguards, custom tools and data sets. But for now, staff at Makembe’s shop are employing AI for brainstorming and idea-generation, not for client-facing material, at least until the firm develops policies to protect clients’ data, ensure transparency and address other potential pitfalls.
For example, what happens if your chatbot-derived text in a client’s speech or on their website turns out to have been ripped directly from someone else’s work? Plagiarism has derailed more than one political campaign, and consultants should work out policies and processes — in advance — to make sure an AI doesn’t force you to write a client’s concession speech, too.
But Will They Pay for It?
Consulting and tech are both businesses, of course, and ultimately someone has to pay for all of the fancy robots. Naturally, political campaigns and interest groups could theoretically freeze the hired guns out of the game entirely if they tried hard enough. Why pay for a press package when AI can crank one out for free?
As Makembe notes, “no tool is free,” since everything we do has an opportunity cost at the very least. For a start, people have to learn how to employ chatbots correctly to produce content that’s actually usable in a political setting. Nellis says that, on its own, “a chatbot doesn’t know what a good fundraising email is.” We have to prompt the programs with contextual information and content samples, and figuring out how to do that takes time and a certain amount of comfort with technology. In practice, the barriers to adoption are higher than they may appear at a glance.
Of course, someone will eventually publish template queries designed to produce reasonably effective fundraising emails or peer-to-peer texts, which any campaign could in theory copy and adapt to their own circumstances. But many down-ballot campaigns struggle with basic tech tasks like setting up Gmail, and I doubt that all of them will be rushing to converse with Bard right this minute.
Facebook again provides a good analogy, Makembe tells me. Anyone can set up and run Facebook ads once they’ve verified their identity, including people who run for office or work on political campaigns. Still, campaigns and organizations pay consultants to create Facebook ads for them all the time. Staff may not have the expertise or the time to run a successful digital advertising program, and consultants also bring the benefits that come from having solved similar problems for other clients. So once again, AI isn’t putting us out of work. Not just yet.
Colin Delany is founder and editor of the award-winning website Epolitics.com, author of the new 2023 edition of “How to Use the Internet to Change the World – and Win Elections,” a veteran of more than twenty-seven years in digital politics and a perpetual skeptic. See something interesting? Send him a pitch at cpd@epolitics.com.