Taming AI’s wild run

We need a global consensus on how to tackle risks and address ethical concerns
R M Ranganath
Representative image showing AI. Credit: iStock Photo

The United Nations has identified eleven frontier technologies as drivers of the Fourth Industrial Revolution (Industry 4.0): artificial intelligence (AI), the Internet of Things (IoT), blockchain, machine learning, automation, robotics, 3D printing, 5G/6G, drones, nanotechnology, and gene editing. However, the ethical and socioeconomic challenges posed by AI are significant: the same technologies that promise transformative benefits could upend the present social order and create machines that outperform human cognition. Such machines threaten human identity, presenting social, moral, and ethical dilemmas.


AI applications, particularly Generative AI (GAI), have captured the imagination of various sectors. AI-enabled language tools such as Duolingo and Skype are bridging social and cultural divides in workplaces, education, and daily activities.

Promoters of GAI envision transformative solutions to the needs of life and livelihood. Recognising this impact, the University Grants Commission (UGC) has recently invited educational institutions to offer AI-infused courses to prepare the workforce for the emerging needs of Industry 4.0 and beyond.

As AI technologies are poised to dominate most human activities in the foreseeable future, their deployment calls for well-informed decisions based on collective wisdom and social responsibility. For instance, the impact of AI on the conventionally skilled workforce is a growing social concern. AI-enabled voice-to-text and text-to-video platforms are rendering traditional secretarial skills obsolete. Carolyn Frantz, corporate secretary of Microsoft, has argued that AI will create twice as many new jobs as it eliminates. Such arguments ignore that retrenchment is a human problem in developing countries, where it swells an already overpopulated pool of the unemployed. Besides, training and re-skilling a conventional workforce for new-generation AI-related jobs is a huge task.

A recent survey of 2,778 AI researchers by Katja Grace et al. (2024) on the future of AI suggests that by 2028, AI will revolutionise the financial, educational, and entertainment sectors, and that there is a 50% chance that by 2047, AI-enabled machines will outperform humans across various sectors. Although the chances of AI replacing humans in all occupations are currently low, the gap may narrow rapidly, reshaping global employment.

Many researchers have warned about the misuse and abuse of AI, including the spread of false information (deepfake videos, messages, distress calls, and emails), authoritarian population control, inequality, and social unrest. According to Lisa Messeri and Molly Crockett (Nature, 2024), risks surfacing at the AI-society interface must be evaluated to address ethical concerns and algorithmic bias. Jon Truby (Sustainable Development, 2020) has cautioned users of Big Tech’s AI-powered financial decision-making software and algorithms against biased outcomes that may deprive underdeveloped sections or countries of public funds.

Research on the AI-brain interface at the Massachusetts Institute of Technology’s Center for Brains, Minds, and Machines (MIT-CBMM) tests human ethics to the hilt. Described as the ultimate frontier of science and technology, the project aims to build machines that simulate the human brain’s cognitive powers to produce intelligent behaviour. Similarly, Japan’s InBrain project envisions machines that could aid decision-making without any support from big data or databases. It sounds like fiction, but such superhuman machines are at the threshold of reality, threatening our cherished social order.

To address these existential challenges, a global consensus has emerged on the need for a stringent and secure policy framework based on evaluating the opportunities and threats of deploying AI technologies. Canada (the AI and Data Act), China (the Interim Measures for the Management of Generative AI Services), the US Congress (regulatory legislation, 2023), and the European Union (EU) are ready to implement restrictive mandates. Noteworthy is the EU’s proposal to address the risks of AI deployment: (1) prohibit certain types of AI systems that are manipulative, exploitative, or circumvent biometric safeguards; (2) lay down strict guidelines for mandatory compliance by service providers; (3) set stringent standards for high-risk systems; (4) create a database of high-risk systems; (5) establish a code of conduct for AI professionals; and (6) periodically review the impact of technologies after market deployment. However, political scientists are sceptical that even the most well-meaning laws could contain loopholes permitting AI for defence and national security purposes.

The central government has initiated measures to frame legislation to prevent bias and misuse in the deployment of AI technologies; a draft policy is expected by July 2024. These exercises could draw on global best practices to craft a holistic policy covering both market-driven AI innovation and safe technology platforms for implementing sustainability programmes.

(The writer is a former professor and registrar at Bangalore University)

(Published 02 April 2024, 04:40 IST)