Deciphering ‘deepfakes’
Deepfake videos of political leaders and disinformation campaigns are on the rise, eroding public trust, undermining political leaders, and influencing voters to believe fake videos are real.

Shockingly, more than 95% of deepfake content involves pornography.

Credit: iStock Photo

N Manoharan and Anusha G Rao


The old adage “seeing is believing” is no longer accurate with the emergence of the ‘deepfake’ phenomenon across the globe. Otherwise called ‘digital replicas’, deepfakes are artificially generated or morphed content that uses artificial intelligence to replicate a person’s image and voice. While altering images with Photoshop is not new, the rise of generative technologies such as Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) has led to a surge in deepfakes since 2017.

India is among the top five countries most vulnerable to deepfakes. Recently, a slew of deepfake videos featuring Indian actresses, and even one of Prime Minister Modi doing garba, went viral. Deepfake videos are increasingly realistic, easy to create with readily available tools, and difficult to detect. Shockingly, more than 95 per cent of deepfake content involves pornography. Deepfake creators seem mindful of Buckminster Fuller’s observation: “Seeing-is-believing is a blind spot in man’s vision.”

The damage caused by deepfakes should not be underestimated. Even a short deepfake clip can, in very little time, cause immense harm to the privacy and reputation of individuals and to society at large. Celebrities are often targeted by deepfakes that humiliate them and exploit their reputations and identities. Ordinary people, especially women, are also victims of deepfakes that use their pictures and videos without their knowledge or consent.

While deepfake videos target individual autonomy and dignity, their wider implications for society undermine democratic institutions. Deepfake videos of political leaders and disinformation campaigns are on the rise, eroding public trust, undermining political leaders, and influencing voters to believe fake videos are real. Deepfakes also tend to exacerbate geopolitical tensions, as seen in the Russia-Ukraine war and the Israel-Hamas conflict.

To address this issue, several states in the US have passed laws dealing with pornographic deepfakes. Virginia amended its revenge-pornography statute to impose criminal penalties on the dissemination of deepfake pornography. The California Civil Code imposes civil liability on anyone who creates or distributes pornographic deepfakes. In October 2023, the White House released an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence to regulate the risks of AI and set standards for transparency; it mandates the watermarking of synthetic AI content. The EU is discussing a regulatory framework for AI that attempts to define deepfakes, provide risk classification, and establish transparency obligations. The proposed AI Act would adopt a risk-based approach, classify deepfakes as “limited risk,” and accordingly impose minimum transparency obligations to disclose that content is generated through automated means, subject to exceptions for legitimate purposes such as satire.

India currently lacks laws regulating emerging technologies or the creation and dissemination of deepfakes. Section 66D and Section 66E of the IT Act criminalise impersonation and the intentional publishing or transmission of objectionable and obscene content, respectively. Additionally, Rule 3(1)(b)(ii) of the IT Rules obligates social media intermediaries not to host, publish, or transmit obscene or pornographic information that is invasive of another person’s bodily privacy. Complaints received by platforms must be acted upon expeditiously, within 72 hours of reporting. Further, platforms must remove or disable access to such information within 36 hours of receiving a court order or being notified by the appropriate government authorities. The IT Ministry reiterated these obligations in an advisory issued to social media platforms and internet intermediaries on December 26, 2023. However, while online platforms are obliged to act on complaints and take down illegal content, specific liability, whether civil or criminal, still needs to be assigned to the creators and distributors of deepfakes. The underlying issue is that the legal system has not kept pace with technological advancement.

The Ministry of Electronics and Information Technology has identified a four-pronged approach to addressing deepfakes: detection, prevention, reporting, and awareness. However, combating the deepfake menace requires political, social, technological, and legal interventions. Proactive steps deserve more emphasis than reactive countermeasures: prevention over cure. All stakeholders must begin a dialogue on regulation that addresses systemic risks, transparency, and the rights of users. At the same time, given the transborder and transnational nature of the issue, cooperation at the global level is critical. As one of the global IT powers, India should take the lead in this regard.

(Manoharan is director, Centre for East Asian Studies, Christ (Deemed to be) University, and Anusha Rao is a Bengaluru-based lawyer)

(Published 02 January 2024, 02:18 IST)