As the industry awaits proper guidance for where the compliance boundaries lie when it comes to generative AI — the FEC, for instance, is actively mulling new AI regulations — practitioners themselves are going to have to decide what’s kosher when it comes to these tools.
With that in mind, here’s what three top creative practitioners had to say about navigating a road without ethical markers:
Emily Karrs, creative director at IMGE: I think one of the reasons why you talk about this so much in the political sphere is because we are in the trust business. That is fundamentally what we’re doing when we’re running a campaign. We are asking people for their trust and what is the number one thing that people will say that’s a negative hit on a candidate? It’s that they’re fake. So anytime you’re using something artificial in a trust business, there’s going to be some tension there.
One thing that I have noticed as I’ve researched a number of different AI tools is [their terms of service] have a lot of things that are specifically prohibited, and one of them is mass market communications. The baseline that I come back to is, ‘Is it going to increase the trust of the voters or is it going to decrease it if we let them in on the joke?’ If they know that this is an AI-generated image of the candidate riding a velociraptor with an American flag, then they love it. [But what about a campaign posting a video of the] candidate speaking Spanish as though they actually know how to speak Spanish? Is this increasing or decreasing trust? From there, that’s how we talk about the ethics.
Hillary Lehr, CEO of Quiller: I think for anything visual or audio that is produced, having the consent of the candidate is important. We don’t recommend anybody build a program around ChatGPT for content generation for political use cases. One, because you don’t want it to get shut down in the middle of crunch time.
And secondly, because the technology is not trained specifically on your use case: your voice, your party, your talking points. And the privacy permissions are not what they should be for people to be uploading a lot of content about their clients or candidates. We saw this as an opportunity: we shouldn’t be relying on the large generative AI platforms for creating email fundraising content or other types of email content.
But as operatives in this space with a large election coming up, we also should be taking advantage of these new technologies if we’re able to build them in a way that meets our values and needs. And so that was really the opportunity that opened up for Quiller that we’ve really jumped on.
But in trying to do that, we’ve also been working at engaging with the community to understand what those rules of the road are and what privacy protections and use cases and training modules — to remove bias, for example — that need to be in place for these tools to actually help and support campaigns and teams.
Why has this really exploded when we’ve been creating copy for candidates and altering images for a long time? I wonder if perhaps we’re balking at the scale and realism that’s potentially there with the new technologies. That’s been a much more time-intensive process in the past.
But even if we draw an ethical line, and together as campaign professionals across the aisle reach consensus on what proper disclosures and consent should be, it’s not necessarily going to be honored by all actors across the spectrum.
Bernadette Herrera, senior graphic designer at Trilogy Interactive: One thing that comes to mind, for example, is creating an AI voice or voiceprint for a candidate that puts their voice to something they wrote in the past that doesn’t have audio. I would consider that very unethical. It’s kind of like fabricating evidence and … probably illegal, but I don’t know. I think we should seek legal counsel.
I think that for the people who do this first, it’s going to be important to have that level of trust and buy-in and let [voters] in on the joke [of a fake video]. Otherwise, you’re going to get a news headline that says ‘Candidate X sent out a bunch of fake videos. People are angry.’ We want to make sure that the message that you were transmitting is the message that people end up hearing.