Artificial intelligence is coming for your next campaign, ready or not. At the least, people who want to sell AI to your campaign are coming, and it behooves us to be ready to ask the right questions.
Let’s take a realistic tour of the ways machine learning is likely to change how political and advocacy campaigns do business, and how it (probably) won’t.
Predicting is Hard
First, a caveat. We’re at such an early stage of so-called artificial intelligence that most predictions are likely to miss the mark. When researchers started networking computers together in the late ’60s, I doubt any of them anticipated the streaming wars, social-media bullying or the ubiquity of online porn. Similarly, AI-based tools will surely open up possibilities that we can now see vaguely at best, and their knock-on effects will have knock-on effects of their own.
Still, as I wrote back in January, I suspect that we’ll generally use machine learning to help us execute tasks better, not to replace human workers entirely. AI can help us do more in the same amount of time, whether that’s analyzing an MRI image, composing a sonnet or making sense of marketing data. AI may cost some people their jobs, but I doubt many of them will be in politics. Our roles will instead change to fit our new productive capacity.
Crunching Data
Actually, you may be using machine learning today without realizing it, since digital advertising platforms, data providers and other campaign technologists already employ it behind the scenes. AI can help identify niche audiences and demographic groups more precisely, for example, letting us reach more of the right voters cost-effectively. It’s also being used to track disinformation, process polling data and sort through ad-performance numbers. Even policy analysis: I recently spoke with a firm whose AI platform was trained on policy language and could produce summaries of new pieces of legislation.
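For a taste of what’s happening under the hood, here’s a toy sketch of audience clustering in Python. The voter-file columns are made up, and real targeting platforms use far richer data, but the basic move is the same: let the algorithm group similar voters, then decide which groups are worth reaching.

```python
# Toy audience segmentation: the flavor of machine learning that ad
# platforms run behind the scenes. Column names are hypothetical
# stand-ins for a real voter file.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

voters = pd.read_csv("voter_file.csv")
features = voters[["age", "turnout_score", "avg_donation"]]

# Scale the features so no single column dominates the distance math.
X = StandardScaler().fit_transform(features)

# Group voters into five clusters; each is a candidate niche audience.
voters["segment"] = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Inspect each segment's average profile before deciding whom to target.
print(voters.groupby("segment")[["age", "turnout_score", "avg_donation"]].mean())
```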
Down-ballot campaigns can get in on the fun, too. Field organizers might ask a chatbot to turn a voter spreadsheet into individual walk lists for volunteers, for instance, just one of many document-management tasks an AI might do in a flash. Once you’re playing with chatbots, though, data is just the start.
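In practice, a chatbot usually answers a request like that by writing and running a short script. As a sketch, assuming a hypothetical spreadsheet layout and a made-up volunteer roster, the generated code might look something like this:

```python
# A minimal walk-list splitter: one voter spreadsheet in, one CSV per
# volunteer out. The file and column names are hypothetical.
import numpy as np
import pandas as pd

voters = pd.read_csv("turf_voters.csv")
voters = voters.sort_values(["street", "house_number"])  # walkable order

# Hand each volunteer a contiguous chunk of the sorted list, so each
# walk list covers a compact stretch of streets.
volunteers = ["alice", "bob", "carmen"]
for name, chunk in zip(volunteers, np.array_split(voters, len(volunteers))):
    chunk.to_csv(f"walklist_{name}.csv", index=False)
```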
Whipping Up Text
Back in January, I asked pioneering generative AI platform ChatGPT to produce fundraising emails and Facebook messages, getting results that were first-draft quality at best. Other writers, including some here in C&E, tried similar approaches and got similar results. AI rewards experimentation, though, and we recently learned from the New York Times that the Democratic National Committee is including chatbot-generated messages in its email testing program, often seeing them outperform human-created variants.
I’ll note that email fundraising rewards novelty. “Hey” was once near-revolutionary, and it’s possible that AI could eventually run into the same diminishing returns that human email-writers do. To get good results, I assume DNC staff or consultants worked out ways to “train” the AI, perhaps by setting up precise queries, giving it rules or feeding it successful variants to analyze. In any case, more content to feed the beast! And by letting the Times know what they’re doing, the DNC has given every Democratic campaign permission to try it themselves.
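I can only guess at the specifics, but one likely ingredient is few-shot prompting: pack the house rules and a handful of past winners into the request itself, so the model imitates what already works. A hypothetical sketch:

```python
# A guess at what "training" a fundraising chatbot might look like:
# a system prompt carrying house rules plus past high-performers as
# examples. The rules and example emails are invented for illustration.
WINNING_EXAMPLES = [
    "Subject: hey\n\nQuick favor before the midnight deadline...",
    "Subject: we're short\n\nI'll keep this brief...",
]

def build_messages(ask: str) -> list[dict]:
    examples = "\n---\n".join(WINNING_EXAMPLES)
    return [
        {
            "role": "system",
            "content": (
                "You write fundraising emails. Keep them under 150 words, "
                "with one clear ask and a plain-spoken subject line.\n\n"
                "Emails that tested well in the past:\n" + examples
            ),
        },
        {"role": "user", "content": ask},
    ]

messages = build_messages("Draft an end-of-quarter deadline email.")
```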
Even without fancy optimization, chatbots can help a human start the creative process by priming the pump with first drafts or out-of-left-field suggestions. They can also serve as a force-multiplier. Someone preparing a raft of tweets around an event or a report release, for example, might ask ChatGPT to generate drafts aimed at several different audiences in an instant.
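As a sketch of that force-multiplier at work, here’s what the multi-audience version might look like using the official OpenAI Python client. The model name and audience list are my own assumptions, and you’d need an OPENAI_API_KEY in your environment:

```python
# One event, several audiences, one draft tweet apiece. Assumes the
# official OpenAI Python client; the audiences and model are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
audiences = ["local reporters", "longtime donors", "first-time volunteers"]

for audience in audiences:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model will do
        messages=[{
            "role": "user",
            "content": f"Draft a tweet announcing our housing report, "
                       f"written for {audience}. Under 280 characters.",
        }],
    )
    print(f"--- {audience} ---\n{resp.choices[0].message.content}\n")
```

Every one of those drafts still needs a human pass, of course, which brings us to the catch.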
But “drafts” is the operative word. As many have pointed out, tools like ChatGPT are great at putting words together in ways that humans find meaningful, but they aren’t tethered to the actual physical world. They can make things up, including historical “facts” or the details of a person’s bio. The Times piece notes that DNC political staff review and edit every message, and anyone using AI-generated text or images in the political world had better do the same. You don’t want to make the news for sending out a press release from an alternate universe.
Failure is Always an Option
“Hallucinating” is only one of the ways AI can let us down. Chatbots only know what they’ve been trained on, and data sets based on human writing, video and imagery contain all the biases inherent in the people who created the raw material. “Racist” AIs have been with us for years, along with self-driving cars that can’t see dark skin. “Garbage In, Garbage Out”? Not exactly a new concept in computer science.
Other flaws in an AI’s perspective may be harder to spot than dead pedestrians. Chatbots and other AI platforms present a classic “black box” problem: we put in a request at one end and get a result at the other, but we have no idea what happens in between. AIs give answers, but they don’t give us a direct way to check their work. That brilliant, counter-intuitive observation a machine just whispered to you? It could bear rich fruit, or it could be dead wrong. Nervous yet?
Note that we’ve barely mentioned disinformation! I’ve long suggested that fake video could one day start a war, though those recent Trump arrest “photos” didn’t exactly take the world by storm. As the systems learn from our requests and get better at producing what we want, though, we can expect imaginary stories to have a better chance to fool us into believing something — or buying something.
Whatever your intentions, you still need to be ready to verify an AI’s output before you trust it. If you’re going to ask an AI to analyze data, perform the same kinds of sanity checks as someone asking it to write a press release, so you can be sure the results are not just in the right universe but also in the right ballpark. For instance, create checkpoints that compare the AI’s output with information you have from other sources; if the pieces you can verify hold up, you can have more confidence in the rest of its work.
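Here’s a minimal sketch of one such checkpoint in Python, with hypothetical numbers and column names; the idea is simply to recompute a figure you can verify yourself and flag any disagreement:

```python
# One concrete checkpoint: before trusting an AI's summary of your
# ad numbers, recompute a figure you can check yourself and compare.
# The file, column name and totals here are all hypothetical.
import pandas as pd

ads = pd.read_csv("ad_performance.csv")  # your source of truth
ai_reported_total = 48_213               # clicks as claimed in the AI's summary

actual_total = ads["clicks"].sum()
drift = abs(actual_total - ai_reported_total) / actual_total

# Allow 1% drift: close enough to trust, anything more gets flagged.
if drift > 0.01:
    raise ValueError(
        f"AI summary claims {ai_reported_total} clicks; the data says {actual_total}"
    )
```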
Sprinkle a Little AI on That, Will You?
Who wants to build up your confidence in AI? The people who want to sell you AI. My inbox regularly features pitches from companies I’ve never heard of, offering to sell me AI-filtered sales leads and voter data. Others promise to bring massive web traffic through AI-driven content marketing. Some of them may even be legitimate!
Jokes aside, plenty of products in the political space will employ machine learning to deliver better results for campaigns and advocacy organizations, in all the ways described above and more. But some marketers will always be tempted to slap an AI label on an otherwise conventional product, turn on the hype machine and crank up the price.
Until we create an AI filter to catch inflated sales language, unfortunately, we’ll have to spot the hucksters on our own. Perhaps this piece will help, and please note that I’ve got my eye on political AI for the long term. I’d love to hear how you’re putting it to work, at least while the robots still let us roam free.
Colin Delany is founder and editor of the award-winning website Epolitics.com, author of “How to Use the Internet to Change the World – and Win Elections,” a veteran of more than twenty-five years in digital politics and a perpetual skeptic. See something interesting? Send him a pitch at cpd@epolitics.com.