What's next in Artificial Intelligence?

Of 175,072 AI patents filed between 2012 and 2022, more than half were filed in the past three years, according to Deutsche Bank, which further predicts sharp increases in 2024 and 2025 in companies using AI for human resources, marketing and sales.
International New York Times

Representative image depicting artificial intelligence.

Credit: iStock Photo

Mustafa Suleyman remembers the epochal moment he grasped artificial intelligence's potential. It was 2016— Paleolithic times by AI standards— and DeepMind, the company he had co-founded that was acquired by Google in 2014, had pitted its AI machine, AlphaGo, against a world champion of Go, the confoundingly difficult strategy game.


AlphaGo zipped through thousands of permutations, making fast work of the hapless human. Stunned, Suleyman realized the machine had "seemingly superhuman insights," he says in his book on AI, "The Coming Wave."

The result is no longer stunning— but the implications are. Little more than a year after OpenAI's ChatGPT software helped bring generative AI into the public consciousness, companies, investors and regulators are grappling with how to shape the very technology designed to outsmart them.

The exact risks of the technology are still being debated, and the companies that will lead it are yet to be determined. But one point of agreement: AI is transformative. "The level of innovation is very hard for people to imagine," said Vinod Khosla, founder of the Silicon Valley venture capital firm Khosla Ventures, which was one of the first investors in OpenAI. "Pick an area: books, movies, music, products, oncology. It just doesn't stop."

If 2023 was the year the world woke up to AI, 2024 might be the year in which its legal and technical limits will be tested, and perhaps breached. DealBook spoke with AI experts about the real-world effects of this shift and what to expect next year.

Judges and lawmakers will increasingly weigh in.

The flood of AI regulations in recent months is likely to come under scrutiny. That includes President Joe Biden's executive order in October, which, if Congress ratifies it, could compel companies to ensure that their AI systems cannot be used to make biological or nuclear weapons, to embed watermarks on AI-generated content and to disclose foreign clients to the government.

At the AI Safety Summit in Britain in November, 28 countries, including China— though not Russia— agreed to collaborate to prevent "catastrophic risks."

And in marathon negotiations in December, the European Union drafted one of the world's first comprehensive attempts to limit the use of artificial intelligence, which, among other provisions, restricts facial recognition and deepfakes and defines how businesses can use AI. The final text is due in early 2024, and the bloc's 27 member countries hope to approve it before European Parliament elections in June.

With that, Europe might effectively create global AI rules, requiring any company that does business in its market of 450 million people to comply. "It makes life tough for innovators," said Matt Clifford, who helped organize the AI summit in Britain. "They have to think about complying with a very long list of things people in Brussels are worried about."

There are plenty of concerns, including about AI's potential to replace large numbers of jobs and to reinforce existing racial biases.

Some fear overloading AI businesses with regulations.

Clifford believes existing fraud and consumer-protection laws make some portions of Europe's legislation, the AI Act, redundant. But the legislation's lead architect, Dragos Tudorache, said Europe "wasn't aiming to be global regulators" and that he had maintained close dialogue with members of the US Congress during the negotiations. "I am convinced we have to stay in sync as much as possible," he said.

Governments have good reason to address AI: Even simple tools can serve dark purposes. "The microphone enabled both the Nuremberg rallies and the Beatles," wrote Suleyman, who is now CEO of Inflection AI, a startup he co-founded in 2022 with Reid Hoffman, a co-founder of LinkedIn. He fears that AI could become "uncontained and uncontainable" once it outsmarts humans. "Homo technologicus could end up being threatened by its own creation," he says.

AI capabilities will soar.

It's hard to know when that tipping point might arrive. Jensen Huang, the co-founder and CEO of Nvidia, whose dominance of AI chips helped its share price more than triple in 2023, told the DealBook Summit in late November that "there's a whole bunch of things that we can't do yet."

Khosla believes the key AI breakthrough in 2024 will be "reasoning," allowing machines to produce far more accurate results, and that in 2025, "AI will win in reasoning against intelligent members of the community." AI machines will be steadily more capable of working through several logical steps and of performing probabilistic thinking, such as identifying a disease from specific data, Khosla said.

Exponential growth in computational power, which hugely increases the capability of AI machines, factors into those predictions. "In 2024, it will be between 10 and 100 times more than current-day models," Clifford said. "We don't actually know what kind of innovations that's going to result in."

One new tool could be generative audio that allows users to deliver speeches in, say, Biden's voice or to generate rap songs, opera or Ludwig van Beethoven's nonexistent 10th symphony. DeepMind and YouTube have partnered with musicians to create AI tools allowing artists to insert instruments, transform musical styles or compose a melody from scratch.

Billions in investments will be needed.

None of this will come cheap, and the question now is which companies will be able to build truly sustainable AI businesses. Of 175,072 AI patents filed between 2012 and 2022, more than half were filed in the past three years, according to Deutsche Bank. In 2024 and 2025, the bank predicts sharp increases in companies using AI for human resources, marketing, sales and product development. That is already happening: Legal firms, for example, have begun using AI-generated contracts, cutting out hours of work for lawyers. "The time is ripe for an explosion of AI innovation," it predicted last May.

As those innovations roll out, fundraising has ramped up. The French AI startup Mistral AI— considered a European contender to OpenAI— raised over $500 million in 2023. More than $200 million came from Silicon Valley venture capital giant Andreessen Horowitz in a funding round that valued Mistral, just seven months old, at $2 billion.

But that might not be enough to create a general-purpose AI system of the kind that powers ChatGPT and that Mistral has in mind. "It's becoming clear the vast sums of money you need to be competitive," Clifford said. "If you want to build a general-purpose model, it may be that the amount of capital needed is so great, it makes it very tricky for traditional venture capital."

The story could be different for AI tools that serve a specific purpose, a category that spawned hundreds of startups in 2023.

After a sharp downturn last year, AI venture funding is rising fast, with most invested in US companies. Khosla said that this year he had backed 30 AI startups, including in India, Japan, Britain and Spain, companies that he said "are not afraid of the Big Tech guy." He expects AI funding to continue rising through at least 2024.

"Every country wants to be in the game," he said, and added, "That will accelerate the money flow, and the number of startups will keep accelerating."

(Published 04 January 2024, 08:39 IST)