New Delhi: Deepfake technology is going to be a serious menace to society and the antidote to Artificial Intelligence (AI) would be technology itself, the Delhi High Court observed on Wednesday.
The high court was hearing two petitions against the non-regulation of deepfake technology in the country.
Deepfake technology facilitates the creation of realistic videos, audio recordings and images that can manipulate and mislead viewers by superimposing the likeness of one person onto another and altering their words and actions, thereby presenting a false narrative or spreading misinformation.
“You (Central government) will have to start working on this. You also start thinking about this. It (deepfake) is going to be a serious menace in the society,” said a bench of Acting Chief Justice Manmohan and Justice Tushar Rao Gedela.
Justice Manmohan further said, “You also do some study. It is like what you are seeing and what you are hearing, you can’t believe it. That is something which shocks. What I see through my own eyes and what I have heard through my own ears, I don’t have to trust that, this is very very shocking.”
One plea has been filed by journalist Rajat Sharma against the non-regulation of deepfake technology in the country, seeking directions to block public access to applications and software that enable the creation of such content.
The other petition has been filed by Chaitanya Rohilla, a lawyer, against deepfakes and the unregulated use of artificial intelligence.
The court granted the petitioners two weeks to file an additional affidavit containing their suggestions and listed the matter for further hearing on October 24.
During the hearing, the bench observed that the government was agitated over the issue before the elections but things have since changed.
To this, Additional Solicitor General Chetan Sharma, appearing for the Centre, said it was certainly a malice and that “our body language might have changed but we are still agitated as much as we were then”.
The Centre’s counsel also said the authorities recognise that it is a problem which needs to be dealt with.
“We can employ counter-AI technology to annul what would otherwise be a very damaging situation. To deal with the issues, four things are needed – detection, prevention, a grievance support mechanism and raising awareness. No amount of laws or advisories will go a long distance,” Sharma contended.
To this, the bench responded that the antidote to AI would be technology itself.
“Understand the damage that will be done by this technology because you are the government. We as an institution would have certain limitations,” it said.
The high court had earlier asked the Central government, through the Ministry of Electronics and Information Technology, to file its response to the two petitions.
Rajat Sharma, the Chairman and Editor-in-Chief of Independent News Service Private Limited (INDIA TV), has said in the public interest litigation (PIL) that the proliferation of deepfake technology poses a significant threat to various aspects of society, including through misinformation and disinformation campaigns, and undermines the integrity of public discourse and the democratic process.
The PIL said the technology could potentially be used for fraud, identity theft and blackmail, and warned of harm to individual reputation, privacy and security, erosion of trust in the media and public institutions, and violation of intellectual property and privacy rights.
It said it is imperative for the government to establish regulatory frameworks to define and classify deepfakes and AI-generated content, and to prohibit the creation, distribution and dissemination of deepfakes for malicious purposes.
The plea said the Centre had stated its intent in November 2023 to formulate regulations for dealing with deepfakes and synthetic content, but nothing of the sort has seen the light of day so far.
The petitioner said in the plea that certain unscrupulous people were maintaining social media accounts and uploading fake videos featuring his image and AI-generated voice to sell or endorse various products, such as purported medications for diabetes and fat loss.
The PIL sought a direction to the Centre to identify and block public access to the applications, software, platforms and websites enabling the creation of deepfakes.
The plea also sought that the government be asked to issue a directive to all social media intermediaries to take down deepfakes immediately upon receipt of a complaint from the person concerned.