The tone of congressional hearings featuring tech industry executives in recent years can best be described as antagonistic. Mark Zuckerberg, Jeff Bezos and other tech luminaries have all been dressed down on Capitol Hill by lawmakers upset with their companies.

But on Tuesday, Sam Altman, the CEO of the San Francisco startup OpenAI, testified before members of a Senate subcommittee and largely agreed with them on the need to regulate the increasingly powerful AI technology being created inside his company and others like Google and Microsoft.

In his first testimony before Congress, Altman implored lawmakers to regulate artificial intelligence as members of the committee displayed a budding understanding of the technology. The hearing underscored the deep unease felt by technologists and government over AI’s potential harms. But that unease did not extend to Altman, who had a friendly audience in the members of the subcommittee.

The appearance of Altman, a 38-year-old Stanford University dropout and tech entrepreneur, was his christening as the leading figure in AI. The boyish-looking Altman traded in his usual pullover sweater and jeans for a blue suit and tie for the three-hour hearing.

“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” he said. “We want to work with the government to prevent that from happening.”

Altman was joined at the hearing by Christina Montgomery, IBM’s chief privacy and trust officer, and Gary Marcus, a well-known professor and frequent critic of AI technology.

Altman said his company’s technology may destroy some jobs but also create new ones, and that it will be important for “government to figure out how we want to mitigate that.” Echoing an idea suggested by Marcus, he proposed the creation of an agency that would issue licenses for the development of large-scale AI models, set safety regulations and administer tests that AI models must pass before being released to the public.

“We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work,” Altman said.