Is the tech industry already in an artificial intelligence recession?

Demis Hassabis, one of the world’s most influential artificial intelligence experts, has a warning for the rest of the tech industry: Don’t expect chatbots to improve as quickly as they have in the past few years.

AI researchers have long relied on a fairly simple concept to improve their systems: The more data collected from the Internet they pumped into large language models — the technology behind chatbots — the better those systems performed.

But Hassabis, who oversees Google DeepMind, the company’s primary AI lab, says the method is running out of steam because tech companies are running out of data.

“Everyone in the industry is seeing diminishing returns,” Hassabis said in an interview with The New York Times this month as he prepared to accept the Nobel Prize for his work in AI.

Hassabis isn’t the only AI expert warning of a slowdown. Interviews with 20 executives and researchers revealed a widespread belief that the tech industry is running into a problem many thought was unthinkable just a few years ago: it has consumed most of the digital text available on the Internet.

That problem is beginning to surface even as billions of dollars are still being poured into AI development. On Tuesday, Databricks, an AI data company, said it was closing in on $10 billion in funding, the largest private funding round ever for a startup. And tech giants are signaling they have no plans to reduce their spending on the massive data centers that run AI systems.

Not everyone in the AI world is worried. Some, including Sam Altman, CEO of OpenAI, say progress will continue at its current pace, albeit with some twists on older techniques. Dario Amodei, CEO of the AI startup Anthropic, and Jensen Huang, CEO of Nvidia, are similarly bullish.

(The Times has filed a lawsuit against OpenAI, claiming copyright infringement of news content related to AI systems. OpenAI has denied the claims.)

The roots of the debate trace back to 2020, when Jared Kaplan, a theoretical physicist at Johns Hopkins University, published a research paper showing that large language models grew steadily more capable as they analyzed more data.

Researchers called Kaplan’s findings the “scaling laws.” Just as students learn more by reading more books, AI systems improved as they ingested ever larger amounts of digital text from the Internet, including news articles, chat logs and computer programs. Seeing the raw power of this phenomenon, companies such as OpenAI, Google and Meta raced to get their hands on as much Internet data as possible, ignoring corporate policies and debating whether they should skirt the law, according to an examination by The Times.

This was the modern equivalent of Moore’s Law, the often-quoted maxim coined in the 1960s by Intel co-founder Gordon Moore, who observed that the number of transistors on a silicon chip doubled every two years or so, steadily increasing the power of the world’s computers. Moore’s Law held for 40 years. But eventually it began to slow.

The problem is this: neither the scaling laws nor Moore’s Law is an immutable law of nature. They are simply smart observations. One held for decades; the other may have a much shorter shelf life. Google and Kaplan’s new employer, Anthropic, cannot simply throw more text at their AI systems, because there is little text left to throw at them.

“The past three or four years have had extraordinary returns as scaling laws go,” Hassabis said. “But we’re not making the same progress anymore.”

Hassabis said existing techniques would continue to improve AI in some ways, but he believed that entirely new ideas were needed to reach the goal pursued by Google and many others: a machine that can match the power of the human brain.

Ilya Sutskever, who was instrumental in pushing the industry to think big as a researcher at both Google and OpenAI before leaving OpenAI to create a new startup this past spring, made the same point in a speech last week. “We have achieved peak data, and there will be no more,” he said. “We have to deal with the data we have. There is only one Internet.”

Hassabis and others are exploring a different approach. They are developing ways for large language models to learn through their own trial and error. By working through various math problems, for example, a language model can learn which methods lead to correct answers and which do not. In essence, the model trains on data that it generates itself. Researchers call this “synthetic data.”

OpenAI recently released a new system called OpenAI o1 that was built this way. But the method works only in fields such as mathematics and computer programming, where there is a firm distinction between right and wrong.

Even in these areas, AI systems still make mistakes and make things up. That could hamper efforts to build AI “agents” that can write their own computer programs and take actions on behalf of Internet users, which experts see as one of AI’s most important skills.

Sorting through the vast expanse of human knowledge is even more difficult.

“These methods only work in areas where things are empirically true, like math and science,” said Dylan Patel, principal analyst at SemiAnalysis, a research firm that closely follows the growth of AI technologies. “The humanities and the arts, ethical and philosophical problems, are much more difficult.”

People like Altman say these new techniques will keep progress going. But if progress reaches a plateau, the implications could be far-reaching, even for Nvidia, which has become one of the world’s most valuable companies thanks to the AI boom.

During a call with analysts last month, Huang was asked how the company was helping customers work through a potential slowdown and what the impact might be on its business. He said evidence showed there were still gains to be made, and that companies were also testing new methods and techniques on AI chips.

“As a result of that, the demand on our infrastructure is really big,” Huang said.

Although he is confident about Nvidia’s prospects, some of the company’s biggest customers acknowledge that they must prepare for the possibility that AI will not advance as quickly as expected.

“We have to fight it. Is this thing real?” said Rachel Peterson, Meta’s vice president of data centers. “That’s a big question because all the dollars are being thrown at it across the board.”

This article originally appeared in The New York Times.
