Artificial intelligence (AI) is evolving rapidly, opening up new opportunities while also bringing with it the threat of misinformation. Its use in the political sphere raises serious concerns about the integrity and transparency of elections.
Deepfakes: the threat of disinformation
One of the most worrying threats is deepfakes – fake video and audio recordings created with AI. Deepfakes featuring Prime Minister Rishi Sunak were reported in the UK in 2024: more than 100 advertisements featuring his fabricated likeness appeared on social media, and around 40 per cent of voters could not distinguish them from genuine footage.
Another example of disinformation occurred in Bangladesh during its 2024 elections, where pro-government media used AI to create fake news stories accusing foreign diplomats of interfering in the vote. These stories were found to have been produced with HeyGen, an affordable AI tool costing as little as $24 per month, demonstrating how cheap and easy it has become to manufacture disinformation.
Voter data manipulation
Cambridge Analytica is another example of AI-driven data analytics being used to interfere in elections. In 2016, the company used these techniques to shape strategic communications during the US election campaign: it collected data on social media users, compiled psychological profiles of them, and delivered personalised advertising designed to influence the outcome of the election.
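To make that pipeline more concrete, here is a minimal, hypothetical sketch of how personality-trait scores could be mapped to tailored political messages. The UserProfile fields, the AD_VARIANTS table and the select_ad function are invented for illustration only and are not based on Cambridge Analytica's actual system; a real operation would infer trait scores from behavioural data with statistical models rather than hard-code them.

```python
# Toy illustration of psychographic ad targeting (hypothetical names and data).
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    # Simplified "Big Five" trait scores in [0, 1], assumed to be
    # inferred from social media activity.
    openness: float
    conscientiousness: float
    extraversion: float
    agreeableness: float
    neuroticism: float

# Hypothetical ad variants keyed to the dominant personality trait.
AD_VARIANTS = {
    "openness": "Change-focused message emphasising new ideas and reform",
    "conscientiousness": "Stability-focused message emphasising order and tradition",
    "extraversion": "Community-focused message emphasising rallies and events",
    "agreeableness": "Unity-focused message emphasising family and care",
    "neuroticism": "Security-focused message emphasising risk and protection",
}

def select_ad(profile: UserProfile) -> str:
    """Return the ad variant matching the user's strongest trait."""
    traits = {
        "openness": profile.openness,
        "conscientiousness": profile.conscientiousness,
        "extraversion": profile.extraversion,
        "agreeableness": profile.agreeableness,
        "neuroticism": profile.neuroticism,
    }
    dominant = max(traits, key=traits.get)
    return AD_VARIANTS[dominant]

if __name__ == "__main__":
    user = UserProfile("u123", openness=0.2, conscientiousness=0.4,
                       extraversion=0.3, agreeableness=0.35, neuroticism=0.8)
    # A high-neuroticism profile receives the security-focused message.
    print(select_ad(user))
```

Even this crude rule-based example shows why such targeting is troubling: once profiles exist, each voter can be shown a different message, making the campaign's claims hard to scrutinise publicly.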
These threats are becoming particularly pressing as 19 countries, including Canada, Germany and Norway, plan to hold elections in 2025. Coordinated international efforts are needed to develop and implement measures that prevent AI from being used to undermine democratic processes and to ensure the integrity of elections.