As Pakistan's February 8 elections approach, concerns are mounting over the use of deepfake technology: former Prime Minister Imran Khan used an AI-generated image and voice clone to address an online rally in December.
This alarming trend is not unique to Pakistan, as similar issues loom over upcoming elections in India, Indonesia, Bangladesh, and beyond.
Khan, currently imprisoned in a case under the official secrets act, used the AI-generated likeness to address an online rally that drew over 1.4 million views on YouTube. While Pakistan has drafted an AI law, critics, including digital rights activists, argue that it lacks sufficient safeguards against disinformation, especially for vulnerable communities such as women.
India is grappling with a proliferation of deepfakes, as a local election in Rajasthan and a national election due by May fuel demand for such content. The absence of guidelines for deepfakes raises concerns about their potential to sway voting behavior. Prime Minister Narendra Modi has himself acknowledged the threat posed by deepfake videos, and authorities have warned social media platforms that they risk losing their safe-harbor status.
In Indonesia, where over 200 million voters are set to cast their ballots on February 14, deepfakes featuring presidential candidates are circulating online, raising fears that they could sway voter perceptions and behavior. Bangladesh has also seen deepfake videos targeting female opposition politicians, and low levels of digital literacy make such misleading content difficult to debunk.
The surge in deepfake usage extends beyond South Asia, with generative AI tools producing synthetic media ahead of elections worldwide. Organizations such as Freedom House have warned that AI makes disinformation fast and cheap to produce. Major social media platforms, including Meta and Google, are attempting to address the issue through content disclosure and removal measures.
Still, the struggle to combat deepfakes persists, with at least 500,000 video and voice deepfakes shared globally in 2023. Platforms are struggling to keep pace with the evolving technology, raising questions about how effectively they can prevent the spread of AI-generated disinformation. And as countries like India, Indonesia, and Bangladesh pass laws to regulate online content, critics question whether platforms are responsive and proactive enough in handling these challenges during election cycles.
In India, where Modi is expected to secure a third term, the use of deepfake technology is likely to escalate during the upcoming general election. Some creators say they plan to watermark AI-generated content to flag its synthetic nature and reduce the risk of it being mistaken for authentic footage. The growing prevalence of deepfakes underscores the need for global attention and for standards and countermeasures to safeguard democratic processes.