(Reuters) — In a viral AI-generated video, an ecstatic Narendra Modi, clad in a trendy jacket and trousers, grooves to a Bollywood song on stage as the crowd cheers.
The Indian Prime Minister reshared the video on X, commenting, “Such creativity in peak poll season is truly a delight.”
Another AI video, featuring Modi’s rival Mamata Banerjee dancing in a saree-like outfit, overlays parts of her speech criticizing those who defected to Modi’s party. State police have launched an investigation, stating that the video could “affect law and order.”
These contrasting reactions to AI-created videos highlight growing concerns about the misuse of the technology, especially during India’s massive general elections. The ease of creating highly realistic AI videos can mislead even tech-savvy individuals. The risk is higher in India, where much of the population of 1.4 billion is less technologically literate and manipulated content can easily inflame sectarian tensions.
A January survey by the World Economic Forum found that misinformation poses a greater risk to India than infectious diseases or illicit economic activity over the next two years. New Delhi-based consultant Sagar Vishnoi, advising political parties on AI use, warned that misinformation can spread at unprecedented speeds with AI, posing severe risks, especially to the elderly who are often not tech-savvy.
The 2024 national election, spanning six weeks and ending on June 1, is the first in which AI has been used prominently. Initially, politicians used AI for personalized campaign videos and audio. But major cases of misuse emerged in April, including deepfakes of Bollywood actors criticizing Modi and fake clips involving his top aides, leading to nine arrests.
The Election Commission of India recently warned political parties against using AI to spread misinformation, outlining seven provisions of IT and other laws with penalties including up to three years in jail for offenses such as forgery and promoting enmity.
A senior national security official expressed concerns about fake news potentially leading to unrest, noting the difficulty in monitoring and countering the rapidly evolving AI environment. An election official admitted the challenge of fully monitoring social media content.
AI and deepfakes are being used increasingly in elections worldwide, including in the US, Pakistan, and Indonesia. The latest incidents in India underscore the difficulties faced by authorities.
For years, an Indian IT ministry panel has had the authority to block harmful content. During this election, officials are working to detect and remove problematic content. While Modi reacted lightheartedly to his AI dancing video, saying, “I also enjoyed seeing myself dance,” Kolkata police initiated an investigation against an X user for sharing the Banerjee video.
Kolkata cyber crime officer Dulal Saha Roy issued a notice on X demanding that the user delete the video or face strict penal action. The user, SoldierSaffron7, refused to comply, asserting they could not be traced.
Election officials noted their limited power to enforce content removal, often relying on social media platforms’ internal policies.
The viral Modi and Banerjee dancing videos, with 30 million and 1.1 million views respectively, were created using Viggle, a free website that generates videos from photographs and basic prompts. Another controversial Viggle video shows Banerjee in an altered scene from “The Dark Knight,” appearing to blow up a hospital, which has garnered 420,000 views.
West Bengal police believe this video violates Indian IT laws, but X has not acted on it, maintaining its stance of defending users’ voices. The user dismissed the police notice, believing no action could be taken against them.