People gather to hear former president Donald Trump speak in September 2022 in Wilkes-Barre, Pennsylvania (AFP)

A fake image of Donald Trump's arrest. A dystopian video of a dark future in the event of Joe Biden's reelection. An audio deepfake of both men slinging insults. Fast-evolving AI technology could turbocharge misinformation in US political campaigns, observers say.

The 2024 presidential race is expected to be the first American election to see widespread use of advanced artificial intelligence tools that increasingly blur the line between fact and fiction.

Campaigns on both sides of the political divide are likely to harness this technology -- which is cheap, easily accessible and advancing far faster than regulators can respond -- for voter outreach and to churn out fundraising newsletters within seconds.

But technologists also warn of bad actors exploiting AI to sow chaos at a moment when the political climate is already hyperpolarized in the United States and many voters dispute verified facts including that Trump lost the 2020 election.

In a sobering preview of what may become widespread ahead of the 2024 race, fake images of Trump being hauled away by New York police officers -- created by an AI art generator -- went viral in March.

Last month, in response to Biden's announcement that he will run for reelection in 2024, the Republican National Committee almost instantly released a video of AI-generated images depicting a dystopian future if he wins.

It showed photo-realistic images of panic on Wall Street, China invading Taiwan, immigrants overwhelming border agents, and a military takeover of a crime-ridden San Francisco.

And earlier this year, a lifelike but utterly fake AI audio of Biden and Trump -- expected to square off next year in a rematch of the 2020 election -- hurling insults at each other made the rounds on TikTok.

"The impact of AI will reflect the values of those using it -- bad actors in particular have new tools to supercharge their efforts to fuel hate and suspicion, or to falsify images, sound, or video in an effort to bamboozle the press and public," Joe Rospars, founder of left-leaning political consultancy Blue State, told AFP.

"Combating those efforts will require vigilance by the media and tech companies, and by voters themselves," added Rospars.

The efficiency of AI is obvious, no matter a user's intentions.

When AFP directed ChatGPT to create a campaign newsletter in favor of Trump, feeding it the former president's false statements debunked by US fact-checkers, it produced -- within seconds -- a slick campaign document with those falsehoods.

When AFP further prompted the chatbot to make the newsletter "angrier," it regurgitated those falsehoods in a more apocalyptic tone.

"The current level of AI lies a lot," Dan Woods, the former chief technology officer for Biden's 2020 campaign, told AFP.

"If our foreign adversaries just need to convince an already hallucinating robot to spread misinformation, well then we should be prepared for a much bigger misinformation campaign than we saw in 2016."

At the same time, AI advancements will become a "game changing" tool for understanding voters, said Vance Reavie, chief executive of Junction AI.

"There is a large portion of the population who simply don't vote at all or vote irregularly," Reavie told AFP.

"With AI we can learn about what these potential voters care about and why at a very granular level, and from this we can understand how to engage them and what policies will motivate them."

Campaign staff previously spent hours writing speeches and prepping talking points, tweets and voter outreach questionnaires; AI can now do the same work in a fraction of the time.

"Creating content is time-consuming and costly, now imagine being able to do 10 times as much with no extra staff effort," said Reavie.

"There will also be a lot of generated content that will be false... It will be fast and easy to flood channels with content and difficult for the average person to know otherwise."

The ability of Americans to agree on objective truths will also be challenged, with a huge swath of the US population already deeply distrustful of establishment media.

"The concern is that as it becomes easier to manipulate media, it will become easier to deny reality," Hany Farid, a professor at the UC Berkeley School of Information, told AFP.

"If, for example, a presidential candidate says something inappropriate or illegal, he or she can simply claim the recording is fake. This is particularly dangerous."

Betsy Hoover, a partner at Higher Ground Labs, told AFP her company was developing an AI project called "Quiller" designed to write, send and evaluate the effectiveness of fundraising emails.

"Bad actors will use whatever tools at their disposal to accomplish their goals -- and AI is no exception," Hoover, the former digital director for Barack Obama's 2012 campaign, told AFP.

"But I don't think we can let this fear keep us from using AI to our advantage."