The Power of AI is New, But It Raises Age-Old Ethical Questions

Like every contributor to the Campaigns & Elections 45 Special Edition, I have been in politics for a long time. My first campaign was the 1977 reelection of Frank Logue, mayor of my hometown of New Haven, Conn. I was 12 years old. I was tasked, primarily, with stuffing envelopes, fetching lunch from Clark’s Dairy and hand-delivering press releases on my bicycle.
Between then and now, I persuaded the Maricopa County Democratic Party in Arizona to adopt email, promoted the 1-800 number for Jerry Brown’s 1992 presidential campaign, put the first member of Congress on the internet (Rep. Sam Coppersmith, D-Ariz., on a gopher server) and coordinated the first AOL chat with a federal elected official.
Some of these tools are still with us. Others are irrelevant. Some of the cutting-edge skills my students at the School of Media and Public Affairs at the George Washington University are learning in classes and at their internships will be with them throughout their careers. Others will be quaint within the decade.
The latest addition to this list of new tools is generative artificial intelligence. With generative AI, campaigns can produce material faster and spread their messages more widely than ever before. One result is that tools like ChatGPT are focusing attention on the ethics of their use in political communication and campaigns.
It can be tempting to believe that AI changes everything, and so everything must change for AI. But the nefarious uses of AI aren’t nefarious because they use AI. They’re nefarious because they’re nefarious. AI-generated fake images or news are wrong because they’re fake – not because they were generated by AI.
The speed and scope of AI is like nothing we’ve seen before. But the basic questions about what is okay and not okay to do are as old as politics itself.
The world’s first political consultants were arguably Greek sophists like Protagoras, Phaedrus and Gorgias in the fifth century BCE. Plato spent a lot of time berating them for relying on emotions to persuade and for teaching people to make things sound good regardless of whether they were true.
The list of fake images, made-up news and shady uses of technology to advance political causes has been growing ever since. The number of critics has grown at the same pace, from the Roman philosopher Quintilian complaining about “hack advocates” around 95 CE, to George Orwell complaining about political language that “gives the appearance of solidity to pure wind” in 1946, right down to the present.
One area in which AI is potentially different is that the person making up and spreading the nonsense is largely removed from the conversation. The painter Albert Bierstadt faked some parts of his landscape paintings that helped shape U.S. policy in the American West in the 1800s — deepfakes in oil. But at least there was a person doing the painting. Ben Franklin’s fake news about the British designed to sway European opinion toward the colonies at the end of the American Revolution was at least written by Franklin. Someone did something — even if it was less than noble.
AI risks taking the people out of the process. The work of AI is not done by the person using the tool. The work is done by the person who made the tool. Campaign professionals give AI directions and AI generates results based on what its programmers wrote.
Responsible campaign professionals will use AI the same way they use consultants. AI is one more way for campaigns to efficiently generate ideas and promote campaign messages. Responsible consultants will ensure that the final speech, ad, mail and so on is the product of an actual person. Irresponsible consultants will drop Venmo quarters into AI gumball machines and pitch whatever comes out to voters.
When the Greek philosopher Aristotle said that “man is a political animal,” he wasn’t talking about yard signs and robocalls. He meant that being an active part of a community was critical to human flourishing. This makes politics the most human thing one can do. That AI removes people from politics, or at least puts them at arm’s — or keyboard’s — length, raises a set of ethical questions we have barely begun to consider.
Peter Loge is the director of the Project on Ethics in Political Communication at the George Washington University. Prior to GWU he spent more than three decades working in politics. This piece appears in the commemorative 45th Anniversary print edition of Campaigns & Elections magazine.