RTSRI24
Robots with portraits of then-Republican presidential nominee Donald Trump and Democratic presidential nominee Hillary Clinton are seen before the presidential debate at Washington University in St. Louis on Oct. 9, 2016. REUTERS

There has never been a better time to be a politician. But it’s an even better time to be a machine learning engineer working for a politician.

Throughout modern history, political candidates have had only a limited number of tools to take the temperature of the electorate. More often than not, they’ve had to rely on instinct rather than insight when running for office.

Now big data can be used to maximize the effectiveness of a campaign. The next level will be using artificial intelligence in election campaigns and political life.

Machine learning systems are based on statistical techniques that can automatically identify patterns in data. These systems can already predict which US congressional bills will pass by making algorithmic assessments of the text of the bill as well as other variables, such as how many sponsors it has and even the time of year it is being presented to Congress.
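To make that concrete, here is a minimal sketch of how such a prediction model could be assembled from bill text plus simple metadata. The training data, feature choices and model are illustrative assumptions, not the system used in the published research.

```python
# Hypothetical sketch: predicting bill passage from text plus simple metadata.
# The toy data, features and model below are illustrative, not the real system.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy data: bill text, number of sponsors, month introduced, and whether it passed.
bills = pd.DataFrame({
    "text": [
        "A bill to fund highway maintenance and repair",
        "A bill to rename a post office in Springfield",
        "A bill to overhaul the federal tax code",
        "A bill to designate a national honey bee day",
    ],
    "num_sponsors": [45, 3, 120, 8],
    "month": [2, 11, 6, 4],
    "passed": [1, 1, 0, 1],
})

# Combine TF-IDF features from the bill text with the numeric metadata.
features = ColumnTransformer([
    ("text", TfidfVectorizer(), "text"),
    ("meta", "passthrough", ["num_sponsors", "month"]),
])

model = Pipeline([
    ("features", features),
    ("clf", LogisticRegression(max_iter=1000)),
])

model.fit(bills.drop(columns="passed"), bills["passed"])

# Estimated probability that a new bill passes, given its text and metadata.
new_bill = pd.DataFrame({
    "text": ["A bill to fund rural broadband expansion"],
    "num_sponsors": [60],
    "month": [3],
})
print(model.predict_proba(new_bill)[0, 1])
```

In practice such models are trained on thousands of historical bills, but the basic recipe is the same: turn text and metadata into features, then fit a classifier that outputs a probability of passage.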

Machine intelligence is also now being carefully deployed in election campaigns to engage voters and help them be more informed about key political issues.

This of course raises ethical questions. There is evidence, for example, to suggest that AI-powered technologies were used to manipulate citizens in Donald Trump’s 2016 election campaign. Some even claim these tools were decisive in the outcome of the vote.

And it remains unclear what role AI played in campaigning ahead of the Brexit referendum in the UK.

Did you vote because of AI?

Artificial intelligence can be used to manipulate individual voters. During the 2016 US presidential election, the data science firm Cambridge Analytica rolled out an extensive advertising campaign to target persuadable voters based on their individual psychology.

This highly sophisticated micro-targeting operation relied on big data and machine learning to influence people’s emotions. Different voters received different messages based on predictions about their susceptibility to different arguments. The paranoid received ads with messages based around fear. People with a conservative predisposition received ads with arguments based on tradition and community.

This was enabled by the availability of real-time data on voters, from their behavior on social media to their consumption patterns and relationships. Their internet footprints were being used to build unique behavioral and psychographic profiles.
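The mechanics described above reduce to a routing step: score each voter on a handful of traits, then serve the message variant predicted to resonate with them. A toy sketch of that step might look like the following, where the trait scores, thresholds and ad texts are entirely invented for illustration.

```python
# Hypothetical sketch of trait-based message routing, not Cambridge Analytica's
# actual pipeline. Assumes each voter already has psychographic scores
# (e.g. estimated "anxiety" and "traditionalism") derived from online behavior.
from dataclasses import dataclass

@dataclass
class VoterProfile:
    voter_id: str
    anxiety: float          # 0.0 - 1.0, estimated from behavioral signals
    traditionalism: float   # 0.0 - 1.0, estimated from behavioral signals

# Message variants keyed to the trait they are meant to appeal to (placeholders).
AD_VARIANTS = {
    "fear": "Crime is rising in your neighborhood. Vote to restore order.",
    "tradition": "Protect the values your community was built on.",
    "default": "Make your voice heard on election day.",
}

def pick_ad(profile: VoterProfile, threshold: float = 0.6) -> str:
    """Choose the ad variant predicted to resonate most with this voter."""
    if profile.anxiety >= threshold and profile.anxiety >= profile.traditionalism:
        return AD_VARIANTS["fear"]
    if profile.traditionalism >= threshold:
        return AD_VARIANTS["tradition"]
    return AD_VARIANTS["default"]

voters = [
    VoterProfile("v001", anxiety=0.8, traditionalism=0.3),
    VoterProfile("v002", anxiety=0.2, traditionalism=0.7),
]
for voter in voters:
    print(voter.voter_id, "->", pick_ad(voter))
```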

The problem with this approach is not the technology itself but the covert nature of the campaigning and the insincerity of the political messages being sent out. A candidate with flexible campaign promises like Trump is particularly well-suited to this tactic. Every voter can be sent a tailored message that emphasizes a different side of a particular argument. Each voter gets a different Trump. The key is simply to find the right emotional triggers to spur each person into action.

Attack of the bots

We already know that AI can be used to manipulate public opinion. Massive swarms of political bots were used in the 2017 general election in the UK to spread misinformation and fake news on social media. The same happened during the US presidential election in 2016 and several other key political elections around the world.

These bots are autonomous accounts that are programmed to aggressively spread one-sided political messages to manufacture the illusion of public support. This is an increasingly widespread tactic that attempts to shape public discourse and distort political sentiment.

Typically disguised as ordinary human accounts, bots spread misinformation and contribute to an acrimonious political climate on sites like Twitter and Facebook. They can be used to highlight negative social media messages about a candidate to a demographic group more likely to vote for them, the idea being to discourage them from turning out on election day.

In the 2016 election, pro-Trump bots even infiltrated Twitter hashtags and Facebook pages used by Hillary Clinton supporters to spread automated content.

Bots were also deployed at a crucial point in the 2017 French presidential election, throwing out a deluge of leaked emails from candidate Emmanuel Macron’s campaign team on Facebook and Twitter. The information dump also contained what Macron says was false information about his financial dealings. The aim of #MacronLeaks was to build a narrative that Macron was a fraud and a hypocrite – a common tactic used by bots to push trending topics and dominate social feeds.

Using AI for good

It is easy to blame AI technology for the world’s wrongs (and for lost elections) but the underlying technology itself is not inherently harmful. The algorithmic tools that are used to mislead, misinform and confuse could equally be repurposed to support democracy.

AI can be used to run better campaigns in an ethical and legitimate way. We can, for example, program political bots to step in when people share articles that contain known misinformation. They could issue a warning that the information is suspect and explain why. This could help to debunk known falsehoods, like the infamous article that falsely claimed the pope had endorsed Trump.
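A minimal sketch of such a fact-checking bot, assuming a curated list of URLs that independent fact-checkers have already debunked, could look like the following. The database, the example article and the reply wording are placeholders for illustration.

```python
# Minimal sketch of a fact-check bot, assuming a curated list of URLs that
# independent fact-checkers have already debunked. The database and reply
# wording below are illustrative placeholders.
from typing import Optional
from urllib.parse import urlparse

# Hypothetical record of debunked articles and why each was flagged.
KNOWN_FALSE = {
    "example-news.com/pope-endorses-trump": "Fabricated: no such endorsement was ever made.",
}

def normalize(url: str) -> str:
    """Strip scheme, 'www.' and trailing slashes so URL variants still match."""
    parsed = urlparse(url)
    host = parsed.netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    return (host + parsed.path).rstrip("/")

def check_shared_link(url: str) -> Optional[str]:
    """Return a warning to post under the share if the link is a known falsehood."""
    reason = KNOWN_FALSE.get(normalize(url))
    if reason is None:
        return None
    return ("Heads up: independent fact-checkers have flagged this article as false. "
            + reason)

warning = check_shared_link("https://www.example-news.com/pope-endorses-trump/")
if warning:
    print(warning)
```

The hard part in reality is maintaining the list of debunked articles and matching paraphrased versions of the same claim, but the basic intervention is this simple: detect the share, then attach a correction.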

We can use AI to better listen to what people have to say and make sure their voices are being clearly heard by their elected representatives. Based on these insights, we can deploy micro-targeted campaigns that educate voters on a variety of political issues and help them make up their own minds.

People are often overwhelmed by political information in TV debates and newspapers. AI can help them discover the political positions of each candidate based on what they care about most. For example, if a person is interested in environment policy, an AI targeting tool could be used to help them find out what each party has to say about the environment. Crucially, personalized political ads must serve their voters and help them be more informed, rather than undermine their interests.
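As a rough illustration of how such an issue-matching tool could work, the sketch below retrieves the platform statements that mention the topics a voter cares about. The party names and platform summaries are invented placeholders, not real positions.

```python
# Toy sketch of issue-based matching. The party names and platform summaries
# below are invented placeholders, not real positions.
PLATFORMS = {
    "Party A": ("We will expand renewable energy and cut emissions. "
                "We will also fund public transit and rural broadband."),
    "Party B": ("We will lower taxes and reduce regulation. "
                "We will invest in border security and national defense."),
}

def relevant_positions(topic_keywords: set[str]) -> dict[str, list[str]]:
    """For each party, return the platform sentences that mention the voter's topic."""
    matches = {}
    for party, text in PLATFORMS.items():
        sentences = [s.strip() for s in text.split(".") if s.strip()]
        hits = [s for s in sentences
                if topic_keywords & set(s.lower().replace(",", "").split())]
        matches[party] = hits
    return matches

# A voter who cares most about environmental policy.
print(relevant_positions({"energy", "emissions", "environment", "climate"}))
```

A production tool would use semantic matching rather than keyword overlap, but the principle is the same: surface what each party says about the issues the voter actually cares about.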

The use of AI techniques in politics is not going away anytime soon. It is simply too valuable to politicians and their campaigns. However, they should commit to using AI ethically and judiciously to ensure that their attempts to sway voters do not end up undermining democracy.

Vyacheslav W Polonski, Researcher, University of Oxford

This article was originally published on The Conversation. Read the original article.
