More than 1,000 technology leaders and researchers, including Elon Musk, have urged artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that AI tools present “profound risks to society and humanity.”

AI developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” according to the letter, which was released Wednesday by the nonprofit group Future of Life Institute.

Others who signed the letter include Steve Wozniak, a co-founder of Apple; Andrew Yang, an entrepreneur and candidate in the 2020 US presidential election; and Rachel Bronson, the president of the Bulletin of the Atomic Scientists, which sets the Doomsday Clock.

“These things are shaping our world,” Gary Marcus, an entrepreneur and academic who has long complained of flaws in AI systems, said in an interview. “We have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation and a huge number of unknowns.”

AI powers chatbots like ChatGPT, Microsoft’s Bing and Google’s Bard, which can carry on humanlike conversations, create essays on an endless variety of topics and perform more complex tasks, like writing computer code.

The push to develop more powerful chatbots has led to a race that could determine the next leaders of the tech industry. But these tools have been criticized for getting details wrong and for their ability to spread misinformation.

The open letter called for a pause in the development of AI systems more powerful than GPT-4, the chatbot introduced this month by the research lab OpenAI, which Musk co-founded. The pause would provide time to implement “shared safety protocols” for AI systems, the letter said.

“If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it added.

Development of powerful AI systems should advance “only once we are confident that their effects will be positive and their risks will be manageable,” the letter said.

“Humanity can enjoy a flourishing future with AI,” the letter said. “Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all and give society a chance to adapt.”

Sam Altman, the CEO of OpenAI, did not sign the letter.

Marcus and others believe that convincing the wider tech community to agree to a moratorium would be difficult. But swift government action is also unlikely, because lawmakers have done little to regulate artificial intelligence.

Politicians in the United States don’t have much of an understanding of the technology, Rep. Jay Obernolte, R-Calif., recently told The New York Times.
In 2021, European Union policymakers proposed a law designed to regulate AI technologies that might create harm, including facial recognition systems. Expected to be passed as soon as this year, the measure would require companies to conduct risk assessments of AI technologies to determine how their applications could affect health, safety and individual rights.

GPT-4 is what AI researchers call a neural network, a type of mathematical system that learns skills by analyzing data. A neural network is the same technology that digital assistants like Siri and Alexa use to recognize spoken commands, and that self-driving cars use to identify pedestrians.

Around 2018, companies like Google and OpenAI began building neural networks that learned from massive amounts of digital text, including books, Wikipedia articles, chat logs and other information culled from the internet. The networks are called large language models, or LLMs.

By pinpointing billions of patterns in all that text, the LLMs learn to generate text on their own, including tweets, term papers and computer programs. They can even carry on a conversation. Over the years, OpenAI and other companies have built LLMs that learn from more and more data.

This has improved their capabilities, but the systems still make mistakes. They often get facts wrong and will make up information without warning, a phenomenon that researchers call “hallucination.” Because the systems deliver all information with what seems like complete confidence, it is often difficult for people to tell what is right and what is wrong.

Experts worry that bad actors could use these systems to spread disinformation with more speed and efficiency than was possible in the past. They believe the systems could even be used to coax behavior from people across the internet.

Before GPT-4 was released, OpenAI asked outside researchers to test dangerous uses of the system. The researchers showed that it could be coaxed into suggesting how to buy illegal firearms online, describing ways to make dangerous substances from household items and writing Facebook posts to convince women that abortion is unsafe.

They also found that the system was able to use TaskRabbit to hire a human across the internet and defeat a Captcha test, which is widely used to identify bots online. When the human asked if the system was “a robot,” the system said it was a visually impaired person.

After changes by OpenAI, GPT-4 no longer does these things.

For years, many AI researchers, academics and tech executives, including Musk, have worried that AI systems could cause even greater harm. Some are part of a vast online community of rationalists and effective altruists who believe that AI could eventually destroy humanity.

The letter was shepherded by the Future of Life Institute, an organization dedicated to researching existential risks to humanity that has long warned of the dangers of artificial intelligence. But it was signed by a wide range of people from industry and academia.

Although some who signed the letter are known for repeatedly expressing concerns that AI could destroy humanity, others, including Marcus, are more concerned about its near-term dangers, including the spread of disinformation and the risk that people will rely on these systems for medical and emotional advice.

The letter “shows how many people are deeply worried about what is going on,” said Marcus, who signed it.
He believes the letter will be an important turning point. “I think it is a really important moment in the history of AI — and maybe humanity,” he said.

He acknowledged, however, that those who have signed the letter may find it difficult to convince the wider community of companies and researchers to put a moratorium in place. “The letter is not perfect,” he said. “But the spirit is exactly right.”