The recent incident involving controversial responses from an AI platform about Prime Minister Narendra Modi highlights a critical juncture in AI governance. More concerning, the same platform provided well-moderated responses to similar queries about other global leaders. The announcement by the Minister of State for IT & Electronics, Rajeev Chandrasekhar, requiring AI platforms to obtain government permission to operate in India, reflects growing concern over the unchecked proliferation of AI technologies. The manner of the announcement, the later clarification notwithstanding, unsettled technology circles. The episode underscores a broader, global challenge: ensuring that AI operates within ethical and legal constraints while continuing to innovate, and it argues for a government approach that is strategic rather than reactionary.
The European Union has responded to similar challenges with the AI Act, the first comprehensive AI law globally. It focuses on high-risk AI applications, particularly in sectors such as education, healthcare, and policing, and mandates new standards for them. Specific uses of AI, such as creating facial recognition databases or deploying emotion recognition technology in workplaces and schools, are banned outright. The Act also calls for greater transparency in AI model development and holds organisations accountable for harm resulting from high-risk AI systems.
Japan is navigating its own course in this domain. Its Copyright Act allows the use of copyrighted works for information analysis without permission from creators, provided the use is limited to the minimum necessary and does not unreasonably harm creators’ interests. Yet unresolved questions remain about the scope of these exceptions and the definition of “unreasonable harm”, especially as they apply to AI.
Meanwhile, AI transparency has become a focal point in the United States, with states such as Pennsylvania introducing legislation to require transparency in the use of AI algorithms in insurance claim processing. The US is also seeing significant legal activity, including class action lawsuits against major insurers over their use of AI algorithms. At the federal level, the Biden administration has issued an Executive Order outlining actions to ensure AI’s safe, secure, and trustworthy development across sectors.
This global overview provides context for Rajeev Chandrasekhar’s recent comments. It highlights the need for a strategic, responsible approach to AI: developing mechanisms for accountability, addressing legal and ethical concerns, and balancing innovation with regulation. The Government of India is working on a comprehensive global regulatory framework for AI with a pro-growth, pro-jobs, and pro-safety stance, expected to be released in the June-July timeframe.
India’s stance on AI development will have profound implications, given the country’s role in fuelling digital innovation with its deep talent pool. Companies at the forefront of AI deployment must address immediate concerns and anticipate the long-term impact of their platforms on society. They must implement robust guardrails to ensure their output is safe for work, culturally sensitive, and politically neutral. It is equally critical for users to have at least a high-level understanding of AI’s capabilities and, more importantly, its limitations.
AI models today digest vast amounts of information, compressing it into what experts call “latent spaces”. Latent spaces are the engine rooms of AI, thriving on imprecision and flexibility. It is essential to understand that these models are not designed to function like traditional look-up databases; expecting them to deliver precise answers or politically correct opinions without specific guidelines misunderstands their capabilities. To address these challenges, many AI companies have instituted guardrails. These mechanisms operate outside the core AI system, keeping output generally safe for work by declining certain queries or withholding certain opinions.
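To make that architecture concrete, the sketch below shows, in Python, how a guardrail can sit outside the model itself. The `generate` function is a hypothetical placeholder for a call to the core AI system, and `violates_policy` is a crude keyword stand-in for the trained safety classifiers real platforms use; both are illustrative assumptions, not any vendor’s actual implementation.

```python
# A minimal sketch of an external guardrail layer, independent of the core model.
# Both the model call and the policy check below are hypothetical placeholders.

BLOCKED_TOPICS = {"violence", "self-harm"}  # illustrative denylist only
REFUSAL = "I can't help with that request."

def violates_policy(text: str) -> bool:
    """Crude stand-in for a safety classifier: flag text touching blocked topics."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def generate(prompt: str) -> str:
    """Placeholder for the core AI system (e.g. a call to an LLM API)."""
    return f"Model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    # Input guardrail: refuse disallowed queries before they reach the model.
    if violates_policy(prompt):
        return REFUSAL
    response = generate(prompt)
    # Output guardrail: screen the model's answer before returning it.
    if violates_policy(response):
        return REFUSAL
    return response

if __name__ == "__main__":
    print(guarded_generate("Explain how latent spaces work."))
```

Because the checks wrap the model rather than live inside it, a platform can tighten or update its policies without retraining the underlying system, which is precisely why guardrails have become the preferred first line of moderation.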
For policymakers in India and beyond, the task is to create an environment where AI can thrive without compromising ethical standards or societal values. This requires a nuanced approach: flexible regulations, public awareness initiatives, and a commitment to fostering innovation that benefits all sections of society. Responsible AI deployment is as much an art as it is a business necessity. Businesses should push the envelope of innovation while also taking the lead on ethical considerations, especially at a time when AI’s full impact is not yet adequately grasped.