By Parmy Olson
When ChatGPT hit the market in November 2022, it sparked a battle for a new product category known as foundation models. These artificial intelligence systems, which generate text and images, cost tens of millions of dollars in computing power to build, and only a few companies have the resources and talent to create them.
In March, one of them, Inflection AI, was swallowed up by Microsoft Corp., and much more quickly than anyone expected. The startup had raised more than $1.5 billion from investors, yet 70 of its staff transferred to Microsoft, which is paying the company $650 million in licensing fees in a deal designed to make those investors whole.
Gavin Baker, a managing partner at investment firm Atreides Management LP, tweeted that certain types of foundation models were now “the fastest depreciating assets in history.” He’s right. These capital-intensive businesses have technology so novel it can take years to figure out a viable business model. And they’re so expensive to build that future rounds of funding are likely to come from deep-pocketed investors in the Middle East. Other firms that have boldly built their own large language models, such as Anthropic, Character.ai and Perplexity, may find themselves grappling with the same issues that beset Inflection. (OpenAI, the maker of ChatGPT, is an exception to the rule, since most of its $1.6 billion in 2023 revenue came from subscriptions.)
The consequences of that are good for legacy tech firms, and not so good for society. Alphabet Inc.’s Google, Amazon.com Inc., and Meta Platforms Inc. are almost certainly eyeing these AI startups and looking for ways to swallow up their talent without raising the ire of antitrust regulators. Microsoft, through its unusual hiring and licensing deal with Inflection, gave them one playbook to follow. Another is investment-based partnerships: Last week, Amazon completed its $4 billion investment in Anthropic, which obliges the maker of the Claude chatbot to use Amazon’s data centers and chips.
The result of continued tie-ups like this will be an even more disconcerting concentration of power among tech giants, who continue to make major design decisions about AI behind closed doors.
Fortunately, a countervailing force is gaining momentum: open-source AI. Don’t confuse this with OpenAI, which keeps the mechanics of ChatGPT secret. Rather, it refers to the companies and researchers going down a different route, making the inner workings of their AI models transparent to the public and free to use.
Earlier this month, for instance, Elon Musk said his AI company xAI would open-source its chatbot, known as Grok. Open-source AI isn’t lucrative, and the quality of such models still lags behind those from OpenAI and Google, but they are catching up. And perhaps it’s no surprise that many of the firms leading AI’s democratization are in France, the land of égalité and mathematical excellence.
French startups like Mistral (valued at $2 billion) and Hugging Face (valued at $4.5 billion) give away AI models for free. Although Mistral sells access to its most advanced language model through Microsoft, it has released other models by tweeting magnet links, the kind of file-sharing links typically used to distribute pirated content on torrent sites. Meanwhile, Hugging Face (headquartered in New York but founded by French entrepreneurs) provides links to various free, open-source AI models and tools on its website, which it calls “the hub.” The latter company is named after the “hugging face” emoji and makes money by charging its larger enterprise customers for access to computing power and customer support, a spokeswoman tells me, adding that it now has more than 10,000 paying customers.
The basic idea of open-source generative AI is this: If you’re a company that wants to integrate a chatbot into your product or internal systems, you don’t have to buy the technology from OpenAI, Microsoft, Google or Anthropic. You can just get it for free from Mistral and Hugging Face. Not only does that lower the barrier to entry for entrepreneurs, it allows them to inspect the inner workings of an AI system so they can assess its strengths and limitations, and even customize it as they see fit. That kind of flexibility ends up sparking more innovation, since entrepreneurs aren’t just consuming technology from Microsoft or Google, but tailoring it to their future customers.
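To make that concrete, here is a minimal sketch, assuming Python and the open-source transformers library, of what pulling a free model from the Hugging Face Hub and running it locally can look like. The model named below is one publicly released example, the prompt is purely illustrative, and a machine with enough memory for a seven-billion-parameter model is assumed.

# A sketch of the open-source route: download an openly released model from the
# Hugging Face Hub and run it locally with the free `transformers` library.
from transformers import pipeline

# The first call downloads the model weights; after that, everything runs on your
# own hardware, with no API key or per-token fee owed to a closed provider.
chatbot = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # one of Mistral's openly released models
)

prompt = "Summarize our refund policy in two sentences."
result = chatbot(prompt, max_new_tokens=100)
print(result[0]["generated_text"])

Because the weights sit on the user’s own machine, they can be inspected, fine-tuned on private data or swapped for another open model, which is precisely the flexibility the closed providers don’t offer.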
Some are skeptical that these efforts will make headway against the giants of Silicon Valley. “I’m pessimistic,” says one executive at a French tech company. “Open will go closed, just like the Internet did.” The Internet indeed began life as an open landscape that over time shifted toward more closed systems. Tech giants like Facebook, Amazon and Google built platforms that controlled large swaths of online content and kept people locked into their networks. But open-source systems are still fundamental to IT today. Most data centers, for instance, run on free, open-source Linux distributions such as Ubuntu; a recent report by the chip-design software firm Synopsys found that 76 per cent of the code scanned in codebases was open-source. That proportion could well be greater in AI.
“[Open-source AI] is really fundamental because it allows everyone to seize the technology, to diminish the fear of limited understanding or of not being qualified to use AI,” says Remi Cadene, head of robotics at Hugging Face in Paris, adding that when people can reproduce and understand AI models, they can improve them and even make them safer. Such models aren’t just a viable alternative to closed systems; they could end up being better in the long run.
Innovation only gets hindered when AI startups find themselves on an assembly line to Big Tech. Here’s hoping the open-source movement continues to thrive.