Four takeaways on the race to amass data for AI
International New York Times
Credit: Reuters Photo

Online data has long been a valuable commodity. For years, Meta and Google have used data to target their online advertising. Netflix and Spotify have used it to recommend more movies and music. Political candidates have turned to data to learn which groups of voters to train their sights on.


Over the last 18 months, it has become increasingly clear that digital data is also crucial in the development of artificial intelligence. Here’s what to know.

The more data, the better

The success of AI depends on data. That’s because AI models become more accurate and more humanlike with more data.

In the same way that a student learns by reading more books, essays and other information, large language models — the systems that are the basis of chatbots — also become more accurate and more powerful if they are fed more data.

Some large language models, such as OpenAI’s GPT-3, released in 2020, were trained on hundreds of billions of “tokens,” which are essentially words or pieces of words. More recent large language models were trained on more than 3 trillion tokens.
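To make the idea of a “token” concrete, here is a minimal sketch using OpenAI’s open-source tiktoken library; the sample sentence and the choice of encoding are illustrative assumptions, not details from this article.

```python
# Rough illustration of what "tokens" are, using the open-source tiktoken
# library (pip install tiktoken). A sentence is split into integer token IDs;
# short common words are usually one token each, while rarer words are broken
# into smaller pieces.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # an encoding used by recent OpenAI models

text = "Chatbots learn language from trillions of tokens."
token_ids = enc.encode(text)

print(len(token_ids), "tokens")
# Show the piece of text each token ID maps back to.
print([enc.decode([t]) for t in token_ids])
```

Counting at this granularity is why training sets are described in billions or trillions of tokens rather than in pages or books.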

Online data is a precious and finite resource

Tech companies are using up publicly available online data to develop their AI models, consuming it faster than new data is being produced. According to one prediction, high-quality digital data will be exhausted by 2026.

Tech companies are going to great lengths to obtain more data

In the race for more data, OpenAI, Google and Meta are turning to new tools, changing their terms of service and engaging in internal debates.

At OpenAI, researchers created a program in 2021 that converted the audio of YouTube videos into text and then fed the transcripts into one of its AI models, going against YouTube’s terms of service, people with knowledge of the matter said.

(The New York Times has sued OpenAI and Microsoft for using copyrighted news articles without permission for AI development. OpenAI and Microsoft have said they used news articles in transformative ways that did not violate copyright law.)

Google, which owns YouTube, also used YouTube data to develop its AI models, wading into a legal gray area of copyright, people with knowledge of the action said. And Google revised its privacy policy last year so it could use publicly available material to develop more of its AI products.

At Meta, executives and lawyers last year debated how to get more data for AI development and discussed buying a major publisher like Simon & Schuster. In private meetings, they weighed the possibility of putting copyrighted works into their AI model, even if it meant they would be sued later, according to recordings of the meetings, which were obtained by the Times.

One solution may be ‘synthetic’ data

OpenAI, Google and other companies are exploring using their AI to create more data. The result would be what is known as “synthetic” data. The idea is that AI models generate new text that can then be used to build better AI.

Synthetic data is risky because AI models can make errors. Relying on such data can compound those mistakes.
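As a purely illustrative sketch (a toy model of my own, not the article’s analysis), the following simulates how a small error rate could grow when each generation of a model is trained on text produced by the previous one; the 2% figure and the simple recurrence are arbitrary assumptions.

```python
# Toy illustration (assumed numbers, not from the article) of how mistakes can
# compound when models are repeatedly trained on their own synthetic output.

def next_generation_error(inherited: float, new_mistakes: float = 0.02) -> float:
    """Fraction of flawed text in the next generation's training data.

    Assumes inherited mistakes persist in the data, and each generation adds a
    small share of fresh mistakes to the portion it was not already wrong about.
    """
    return min(1.0, inherited + new_mistakes * (1.0 - inherited))

error = 0.02  # generation 1: trained mostly on human-written text
for generation in range(1, 6):
    print(f"generation {generation}: roughly {error:.0%} of the training text is flawed")
    error = next_generation_error(error)
```

Under these assumed numbers the share of flawed text roughly doubles and triples within a few generations, which is the compounding risk researchers worry about.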

(Published 08 April 2024, 11:39 IST)