Q&A: AAPC’s Julie Sweet Weighs in on the Bills and Trends to Look for in 2025
With a new presidential administration in Washington and state lawmakers working their way through their legislative sessions, the political world is bracing for big changes to AI regulation, data privacy laws and campaign messaging in 2025.
Julie Sweet, the American Association of Political Consultants’ director of advocacy and industry relations, sat down with C&E to explain some of her big concerns and priorities this year.
This interview has been edited for length and clarity.
C&E: I just really want to start off with the big picture. There’s a new administration in DC. There are four active members of the FEC right now. States, including Texas, are in their legislative sessions. What are you looking for this year? What are you telling political professionals to look for this year?
Julie Sweet: The three main areas that I’m paying attention to – that I think are going to have a significant impact on strategy, tactics and compliance – are AI, data privacy and political messaging, which we generally talk about in terms of SMS texting and other calling tools.
There is a complex, to put it mildly, and uncertain regulatory environment. There’s not a ton of federal action that we see or expect to see, particularly around AI. There have been a few fits and starts around data privacy, in terms of national policy. And we do have the [Telephone Consumer Protection Act] when it comes to SMS.
But we’re seeing states kind of take the lead on the first two, and then expand on TCPA requirements…I think we have 19 states that have state rules around AI use and/or labeling. We have several states that have comprehensive data privacy rules and broker registration requirements. And what this does is make things really hard from a compliance perspective, because if you work in multiple states, then you’ve got to comply with a bunch of different variations in data privacy and/or AI requirements. What we’re looking for, really, is transparency and predictability, not only from regulators and legislators but also from the platforms. The platforms have wildly divergent policies, and so do the developers themselves.
We have a few examples that say no political use case whatsoever. OpenAI is a good example. What does that mean, and how does that get enforced and implemented? What gets flagged? What doesn’t get flagged? So, we’re really trying to get clarity and predictability across the states and the platforms. And this is crucial, not just from an operational compliance perspective, but also for strategy and tactics and how these tools can be used and deployed.
And then the third thing is: What are the ethical practices, and how do we balance those ethical standards with free speech and innovation? The AAPC strongly opposes the use of deceptive AI-generated content. But we have to be precise to prevent overreach that stifles legitimate campaign speech and responsible AI use. I think there is oftentimes a conflation between AI-generated content and deepfakes. And while you need AI for deepfakes – because ‘deep’ refers to deep learning – you can also make pretty compelling deceptive media with Photoshop and other digital tools. And so both regulators and, from the ethical perspective, we need to be really precise about what it is that is prohibited, and make sure that we don’t draw in everything else that might be a digital tool, which is some of what we’re seeing in the states.
C&E: I want to dig into the AI issue a little bit more. There’s a lot of legislation this year and I think Texas probably stands out as a big one…Are there any states in particular or any bills in particular that you’re looking at and talking to industry professionals about?
Sweet: You’ve got Nevada as another example, where the secretary of state is a big proponent of their [AI-generated content] labeling bill. It would also include putting the particular content on file, or on a record that is accessible to the public. I think we’re monitoring over a dozen bills now. We have 19 states that have laws already on the books, most of which passed in 2024, and just a handful – California, Texas, Minnesota, I think – had them beforehand.
But I think thematically what we look for are the ones that require disclaimers or some form of labeling. And they vary widely in the language that is required. So, some say ‘any use of synthetic media’, some say ‘manipulated by AI’, some say ‘generated by AI’. And what we know from the research…is that the language really does matter in terms of how people receive the information. And so we want legislators to be really thoughtful about the disclaimers, and also about the real estate they take up on the content. Right? If you require a certain size, then that takes up a significant percentage. Some of the audio disclaimer requirements run almost four seconds and would be appended to the stand-by-your-ad disclaimer. So that’s eight seconds of a 30-second ad, right? That’s problematic.
Also, we’re really concerned about the penalties. Private rights of action, and who can sue as a result, are something that we watch really closely, as well as fines. I think I’ll have to go back and find the state for you, but I think one state [says] that if you are found to have used a deepfake ad, you can no longer lobby the legislature and you get kicked off the ballot, which is a pretty steep penalty.
The other big thing that we pay attention to is how they define ‘deepfake’, how they define ‘synthetic media’ and how they define ‘digital tools’, because there are a number of these bills, and a number of laws currently on the books, that say ‘any digital tool’. And so does that include Photoshop? So we really try to look at the definitions. The goal here really is to kind of standardize. But part of the challenge is that so few of these bills have been tested through litigation and enforcement, so I think that creates a lot of uncertainty amongst practitioners. How are these going to be enforced? How aggressive are the [state attorneys general] going to be?
C&E: You touched on this a little bit earlier. The AAPC has been very clear that it doesn’t support the deceptive use of AI. It doesn’t support misinformation. But this is still a relatively new technology. Is there any concern some of these laws could stifle creativity and innovation and basic free speech rights? How do you see the industry balancing that right now?
Sweet: I think that AAPC members are responsible actors. There are so many ways to deploy AI. This kind of conflation of any AI content or AI tool with a deepfake requires a lot of public education, which is something the AAPC has dedicated itself to. But again, it goes back to that legislative and statutory language…Is it trying to get at the deepfake, or is it just trying to regulate the tool? And when you seek to regulate the tool, it’s kind of like trying to regulate the printing press, right? So that’s what we really want to look at. How do we get to a system of transparency and accountability for the content that is being created, [but one that] does not create this patchwork of compliance?
How is it that we can ensure that the good actors, the responsible actors, are empowered to use these tools and to innovate? I have said on a few different occasions: I think the challenge is, if the industry is not able to innovate because of these compliance challenges, and because of not understanding how the regulatory landscape is going to play out, that becomes a disincentive to adopt these tools. And if the folks who are not the responsible actors – the ones who are not going to label their content, the folks that are wanting to drive misinformation [or] disinformation – are using these tools, then the good actors are immediately put at a disadvantage. We won’t be as efficient. We won’t be able to be as persuasive, right? The content won’t be as compelling, and we won’t be able to personalize it in the same ways that the bad actors can. And remember, the courts have been pretty clear that you can’t regulate this content. I think that’s a question we’re going to see more and more: Are these restrictions on content? We need to empower the counterspeech and ensure that the counterspeech can be as effective as the initial misinformation speech.
C&E: I want to turn to social media here for a second, because I think one big development we saw this year was Meta changing some of its content moderation policies. Are there any other platforms that you’re watching or that the industry should be watching, where we could see some changes being made this year?
Sweet: I think it’s too early to tell. It is going to continue to shift and change as folks adapt to a new administration, a new Congress and a new federal regulatory environment. The Trump administration has made very clear that it doesn’t want any regulation or hindrance on AI tools. But I think what I’m most concerned about when it comes to Meta and other content policies is: How do those policies work themselves out in practice? And are they being explained with transparency and consistency and stability?
One of the challenges, for example, is when Facebook just decides to turn off paid advertising a week before the election, regardless of the actual consequences. Does the policy that they’re implementing actually serve the goal that they’re hoping to serve? Because again, if you turn off paid advertising a week out from the election, then that kind of leaves organic content in its own little universe, without the ability to respond in an effective way.
I think the thing that we know – the thing that we can know – is that it’s going to constantly change.