Top 10 predictions for artificial intelligence in 2025: AI agents will become mainstream
2024-12-23 21:02

2024 is coming to an end, and venture capitalist Rob Toews from Radical Ventures shared his 10 predictions for artificial intelligence in 2025:

01. Meta will start charging for Llama models

Meta is the world's standard-bearer for open AI. In a compelling case study in corporate strategy, while competitors such as OpenAI and Google have kept their frontier models closed source and charge for access, Meta has chosen to give its state-of-the-art Llama models away for free.

So the news that Meta will start charging companies to use Llama next year will surprise many people.

To be clear: we are not predicting that Meta will make Llama fully closed source, nor that everyone who uses a Llama model will have to pay for it.

Rather, we predict that Meta will make Llama's open source license terms more restrictive, such that companies that use Llama commercially above a certain scale will need to start paying for the model.

Technically, Meta already does a limited version of this today. It does not allow the very largest companies, the cloud hyperscalers and other firms with more than 700 million monthly active users, to use its Llama models freely.

Back in 2023, Meta CEO Mark Zuckerberg said: "If you're a company like Microsoft, Amazon, or Google, and you're basically reselling Llama, then we should get some share of the revenue. I don't think it will be a lot of revenue in the near term, but hopefully over the long term it can be something."

Next year, Meta will significantly expand the set of companies that must pay to use Llama, bringing in many more large and midsize enterprises.

Keeping up with the large language model (LLM) frontier is extraordinarily expensive. If Meta wants Llama to stay at or near parity with the latest frontier models from OpenAI, Anthropic, and others, it will need to invest several billion dollars per year.

Meta is one of the largest and most well-funded companies in the world. But it is also a public company and ultimately accountable to shareholders.

As the cost of building frontier models continues to soar, it becomes increasingly untenable for Meta to devote such enormous sums to training the next generation of Llama models with no expectation of revenue.

Hobbyists, academics, individual developers, and startups will still be able to use Llama models free of charge next year. But 2025 will be the year Meta gets serious about monetizing Llama.

02. Scaling laws will shift from LLM pre-training to new modalities

In recent weeks, one of the most discussed topics in AI has been scaling laws, and the question of whether they are coming to an end.

Scaling laws were first formalized in a 2020 OpenAI paper. The basic concept is simple: when training an AI model, as model parameters, training data, and compute increase, the model's performance improves in a reliable and predictable way (technically, its test loss decreases).
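
To make the shape of that claim concrete, here is a minimal sketch of the power-law relationship the 2020 paper reported. The constants are the paper's approximate fits for loss versus parameter count, and are illustrative only:

```python
# Illustrative power-law form of the 2020 scaling laws (Kaplan et al.):
# test loss falls smoothly and predictably as parameter count grows.
# The constants are the paper's rough fits; treat them as illustrative.

def loss_from_params(n_params: float,
                     n_c: float = 8.8e13,    # fitted scale constant
                     alpha_n: float = 0.076  # fitted exponent
                     ) -> float:
    """Predicted test loss as a function of (non-embedding) parameters."""
    return (n_c / n_params) ** alpha_n

# Each 10x jump in parameters buys a steady, predictable drop in loss:
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {loss_from_params(n):.2f}")
```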

The breathtaking performance gains from GPT-2 to GPT-3 to GPT-4 were all products of the scaling laws.

Like Moore's Law, the scaling laws are not actual laws of nature, merely empirical observations.

Over the past month, a series of reports has suggested that the major AI labs are seeing diminishing returns from continuing to scale up large language models. This helps explain why OpenAI's GPT-5 release keeps being delayed.

The most common rebuttal to the claim that scaling is plateauing is that the advent of test-time compute opens up an entirely new dimension along which scaling can be pursued.

In other words, rather than massively scaling compute during training, new reasoning models such as OpenAI's o3 make it possible to massively scale compute during inference, unlocking new AI capabilities by letting models "think for longer."

This is an important point. Test-time compute does represent an exciting new avenue for scaling and for AI performance improvement.

But there is another point about scaling laws that is even more important, and badly underappreciated in today's discussion. Nearly all of the debate, from the original 2020 paper through today's focus on test-time compute, has centered on language. But language is not the only data modality that matters.

Think of robotics, biology, world models, or web agents. For these modalities, the scaling laws have not saturated; they are just getting started.

In fact, rigorous evidence for the existence of scaling laws in these fields has not even been published yet.

Startups building foundation models for these new modalities, for example EvolutionaryScale in biology, Physical Intelligence in robotics, and World Labs in world models, are trying to identify and exploit scaling laws in their fields the way OpenAI successfully exploited LLM scaling laws in the first half of the 2020s.

Over the next year, huge improvements are expected here.

Scaling laws are not going away; they will be just as important in 2025 as ever. But the center of gravity of scaling-law activity will shift from LLM pre-training to other modalities.

03. Trump and Musk will part ways, with consequences for AI

A new U.S. administration will bring with it a series of shifts in AI policy and strategy.

In predicting the direction of AI under President Trump, given Musk's central position in the AI world today, it is tempting to focus on the president-elect's close relationship with Musk.

One can imagine Musk influencing AI-related developments under Trump in many different ways.

Given Musk's deeply hostile relationship with OpenAI, the new administration could take a less friendly stance toward OpenAI in its industry engagement, AI rulemaking, and awarding of government contracts. This is a real risk that OpenAI worries about today.

Trump, on the other hand, may be inclined to favor Musk's own companies: cutting red tape so xAI can build data centers and jump ahead in the frontier-model race; granting fast regulatory approval for Tesla to deploy robotaxi fleets; and so on.

More fundamentally, unlike many of the other technology leaders who have Trump's ear, Musk takes the safety risks of AI very seriously and therefore advocates meaningful AI regulation.

He supported California's controversial SB 1047 bill, which sought to impose meaningful restrictions on AI developers. Musk's influence could therefore mean a more stringent AI regulatory environment in the United States.

There is one problem with all this speculation, though: the close relationship between Trump and Musk will inevitably break down.

As we saw time and again during Trump's first term, the average tenure of a Trump ally, even the seemingly staunchest, is remarkably short.

Few of the lieutenants from Trump's first term remain loyal to him today.

Trump and Musk are both complex, volatile, unpredictable personalities. They are not easy to work with, and they wear people out. Their newfound friendship has been mutually beneficial so far, but it is still in its "honeymoon phase."

We predict that this relationship will deteriorate before the end of 2025.

What does this mean for the world of artificial intelligence?

It will be good news for OpenAI. It will be unfortunate news for Tesla shareholders. And it will be disappointing news for those concerned about AI safety, since it all but guarantees that the United States takes a hands-off approach to AI regulation during the Trump years.

04. AI agents will become mainstream

Imagine a world in which you no longer have to interact with the web directly. Whenever you need to manage a subscription, pay a bill, make a doctor's appointment, order something on Amazon, book a restaurant, or complete any other tedious online task, you simply instruct an AI assistant to do it for you.

This concept of a "web agent" has been around for years. If such a product existed and worked properly, there is no doubt it would be a huge success.

Yet there is still no general-purpose web agent on the market that works properly.

Startups like Adept, even with a pedigreed founding team and hundreds of millions of dollars in funding, failed to realize the vision.

Next year will be the year web agents finally start to work well and become mainstream. Continued advances in language and vision foundation models, combined with recent breakthroughs in "System 2 thinking" capabilities from new reasoning models and test-time compute, mean that web agents are ready for their golden age.

In other words, Adept had the right idea; it was just too early. In startups, as in much of life, timing is everything.
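
To see why those reasoning breakthroughs matter, it helps to picture what a web agent actually is: a loop in which a model observes the page, chooses the next action, and repeats until the task is done. A minimal sketch follows, with every name a hypothetical stand-in rather than any vendor's real API:

```python
# Minimal web-agent loop: observe page state, have a model choose an
# action, execute it, repeat. All names here are hypothetical stand-ins,
# not any specific vendor's API.

from dataclasses import dataclass

@dataclass
class Action:
    kind: str           # "click" | "type" | "navigate" | "done"
    target: str = ""    # CSS selector or URL
    text: str = ""      # text to type, if any

def choose_next_action(task: str, page_text: str,
                       history: list[Action]) -> Action:
    """Stub for the hard part: an LLM call that plans the next step."""
    return Action(kind="done")  # a real agent would reason over the page

def run_agent(task: str, browser, max_steps: int = 20) -> bool:
    history: list[Action] = []
    for _ in range(max_steps):
        action = choose_next_action(task, browser.page_text(), history)
        if action.kind == "done":
            return True          # task believed complete
        browser.execute(action)  # click / type / navigate
        history.append(action)
    return False                 # step budget exhausted

class FakeBrowser:
    """Trivial stand-in so the sketch runs end to end."""
    def page_text(self) -> str: return "<empty page>"
    def execute(self, action: Action) -> None: pass

print(run_agent("book a table for two", FakeBrowser()))  # -> True
```

The planning step is where "System 2 thinking" enters: each iteration of the loop needs multi-step reasoning over messy page state, which is exactly what the new reasoning models are starting to provide.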

Web agents will find all sorts of valuable enterprise use cases, but we believe the biggest near-term market opportunity for web agents is with consumers.

For all the recent AI hype, relatively few AI-native applications other than ChatGPT have broken through to become mainstream consumer products.

Web agents will change that, becoming the next true "killer app" of consumer AI.

05. The idea of putting AI data centers in space will attract real resources

In 2023, the key physical resource constraining AI's growth was GPU chips. In 2024, it became power and data centers.

Few storylines captured more attention in 2024 than AI's enormous and fast-growing appetite for energy amid the rush to build more AI data centers.

Driven by the AI boom, global data center power demand is projected to double between 2023 and 2026 after decades of staying flat. In the United States, data centers are expected to consume nearly 10% of all power by 2030, up from just 3% in 2022.

Today's energy systems are simply not equipped to handle the massive surge in demand coming from AI workloads. A historic collision is imminent between two multi-trillion-dollar systems: our energy grid and our computing infrastructure.

Nuclear energy gained momentum this year as a possible answer to this dilemma. Nuclear power is in many ways an ideal energy source for AI: zero-carbon, available around the clock, and effectively inexhaustible.

Realistically, though, given long timelines for research, project development, and regulation, new nuclear sources will not make a dent in this problem until the 2030s. That is true of the new generation of "small modular reactors" (SMRs), and truer still of nuclear fusion plants. Next year, an unconventional new idea for tackling this challenge will emerge and attract real resources: putting AI data centers in space.

At first glance, "AI data centers in space" sounds like a bad joke, a venture capitalist mashing together too many startup buzzwords.

But in fact, this may make sense.

The biggest bottleneck to rapidly building more data centers on Earth is securing the power they need. A computing cluster in orbit can enjoy free, unlimited, zero-carbon power around the clock: in space, the sun is always shining.

Another important advantage of placing computing in space is that it solves the cooling problem.

One of the biggest engineering hurdles in building more powerful AI data centers is heat: running many GPUs at once in a confined space gets very hot, and high temperatures can damage or destroy computing equipment.

Data center developers are resorting to expensive, unproven methods such as liquid immersion cooling to try to solve this problem. But space is extremely cold, and any heat generated by computing activity dissipates immediately and harmlessly.

Of course, there are many practical challenges to be solved. An obvious question is whether and how large amounts of data can be transmitted cost-effectively and efficiently between orbit and Earth.

This is an open problem, but one that may prove solvable: there is promising work using lasers and other high-bandwidth optical communications technologies.

A Y Combinator startup called Lumen Orbit recently raised $11 million to pursue exactly this vision: building a multi-megawatt network of data centers in space to train AI models.

As the company's CEO put it: "Instead of paying $140 million for electricity, you can pay $10 million for launch and solar."

In 2025, Lumen won’t be the only organization taking this concept seriously.

Other startup competitors will emerge. And don't be surprised if one or more of the cloud hyperscalers explore this direction as well.

Amazon already has extensive experience putting assets into orbit through Project Kuiper; Google has a long history of funding "moonshot" projects; even Microsoft is no stranger to the space economy.

And it is easy to imagine Musk's SpaceX playing a role here as well.

06. AI systems will pass the "speech Turing test"

The Turing test is one of the oldest and best-known benchmarks of AI performance.

To "pass" the Turing test, an AI system must be able to communicate via written text such that the average person cannot tell whether they are interacting with an AI or with another human.

Thanks to dramatic advances in large language models, the Turing test became a solved problem in the 2020s.

But written text is not the only way humans communicate.

As AI becomes increasingly multimodal, one can imagine a new, more challenging version of the test: a "speech Turing test," in which an AI system must be able to interact with humans by voice with a skill and fluency indistinguishable from a human speaker's.

Today's AI systems cannot yet pass a speech Turing test, and solving the problem will require further technological progress. Latency (the lag between a human speaking and the AI responding) must be cut to near zero to match the experience of talking with another person.

Voice AI systems must get better at gracefully handling ambiguity and misunderstanding in real time, for example when they are interrupted mid-utterance. They must be able to sustain long, multi-turn, open-ended conversations while remembering what was discussed earlier.

And crucially, voice AI agents must learn to understand the non-verbal signals in speech (for example, whether a speaker sounds annoyed, excited, or sarcastic) and to generate those non-verbal cues in their own speech.
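
To make the latency point concrete, here is a back-of-envelope budget for a conventional cascaded voice pipeline (speech-to-text, then an LLM, then text-to-speech), compared with the roughly quarter-second gap typical of human turn-taking. The stage timings are assumptions for illustration, not measurements of any particular system:

```python
# Back-of-envelope latency budget for a cascaded voice pipeline.
# Stage timings are illustrative assumptions, not measurements.

STAGE_MS = {
    "speech_to_text": 300,    # transcribe the user's utterance
    "llm_first_token": 400,   # wait for the model to start replying
    "text_to_speech": 200,    # synthesize the first audio chunk
    "network_overhead": 100,  # round trips between services
}

total_ms = sum(STAGE_MS.values())
human_gap_ms = 250  # rough average pause between human conversational turns

print(f"cascaded pipeline: ~{total_ms} ms before the reply starts")
print(f"human turn-taking gap: ~{human_gap_ms} ms")
print(f"gap to close: ~{total_ms - human_gap_ms} ms")
```

This is one reason the speech-to-speech models mentioned below are significant: they collapse the cascade into a single model and eliminate most of that budget at a stroke.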

As 2024 draws to a close, voice AI sits at an exciting inflection point, driven by fundamental breakthroughs such as the emergence of speech-to-speech models.

Few areas of AI today are advancing faster, both technologically and commercially, than voice AI. Expect the state of the art in voice AI to take a leap forward in 2025.

07. Autonomous AI systems will make significant progress

For decades, the concept of recursively self-improving AI has been a recurring topic in the AI community.

As early as 1965, for example, Alan Turing's close collaborator I.J. Good wrote: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever."

"Since designing machines is one of these intellectual activities, then super-intelligent machines can design better machines; by then, there will undoubtedly be an 'intelligence explosion', and humans will The intelligence will be left far behind."

The idea that AI could invent better AI is a heady one. But even today it retains a whiff of science fiction.

Yet, while not widely appreciated, the concept is in fact starting to become more real. Researchers at the frontier of AI science have begun making tangible progress toward building AI systems that can themselves build better AI systems.

We predict that this research direction will become mainstream next year.

To date, the most notable public example of this line of research is Sakana's "AI Scientist."

"Artificial Intelligence Scientist" was released in August this year, and it convincingly proved that artificial intelligence systems can indeed conduct artificial intelligence research with complete autonomy.

Sakana's AI Scientist carries out the entire life cycle of AI research by itself: reading the existing literature, generating novel research ideas, designing experiments to test those ideas, running the experiments, writing up a research paper reporting the findings, and then conducting a peer review of its own work.

It does all of this autonomously, with no human intervention. Some of the research papers the AI Scientist has produced can be read online.
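
For readers who want the structure at a glance, the loop described above can be sketched as a simple pipeline. Every function here is a toy stand-in; Sakana's actual system implements each stage with LLM-driven subsystems:

```python
# Toy sketch of the automated research loop described above. Each stage is
# reduced to a trivial stub so the sketch runs end to end; in the real
# AI Scientist, every stage is an LLM-driven subsystem.

def read_literature(topic):        return [f"survey of {topic}"]
def generate_idea(topic, lit):     return f"a new idea about {topic}"
def design_experiments(idea):      return [f"an experiment testing {idea}"]
def run_experiment(exp):           return f"results of {exp}"
def write_paper(idea, results):    return f"paper on {idea} with {results}"
def peer_review(paper):            return "reviewer comments"
def revise(paper, review):         return f"{paper} (revised per {review})"

def run_research_cycle(topic: str) -> str:
    """One pass through the research life cycle, start to finish."""
    literature = read_literature(topic)
    idea = generate_idea(topic, literature)
    results = [run_experiment(e) for e in design_experiments(idea)]
    paper = write_paper(idea, results)
    return revise(paper, peer_review(paper))

print(run_research_cycle("scaling laws for world models"))
```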

OpenAI, Anthropic, and other research labs are said to be pouring resources into the idea of an "automated AI researcher," although nothing has been publicly acknowledged yet.

As more people come to realize that the automation of AI research is in fact becoming a real possibility, expect far more discussion, progress, and startup activity in this area in 2025.

The most meaningful milestone, though, will be the first research paper written entirely by an AI agent to be accepted at a top AI conference. Because reviews are blind, conference reviewers will not know a paper was written by an AI until after it has been accepted.

Don't be surprised if AI-produced research work is accepted at NeurIPS, CVPR, or ICML next year. It will be a fascinating, controversial, and historic moment for the field.

08. Industry giants such as OpenAI will shift their strategic focus to building applications

Building frontier models is a tough business.

It is staggeringly capital-intensive. Frontier model labs burn enormous amounts of cash. Just a few months ago OpenAI raised a record $6.6 billion, and it will likely need to raise even more before long. Anthropic, xAI, and others are in similar positions.

Switching costs and customer loyalty are low. AI applications are often built to be model-agnostic, swapping seamlessly between models from different vendors as the cost and performance calculus shifts.
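
To see why switching costs stay so low, consider how such applications are typically structured: the app codes against a thin, provider-agnostic interface, so changing vendors is a one-line change. A minimal sketch follows; the vendor classes are hypothetical stand-ins, not real SDKs:

```python
# Why model-agnostic apps have low switching costs: application logic
# depends only on a thin interface, never on a specific vendor.
# The vendor classes below are hypothetical stand-ins, not real SDKs.

from abc import ABC, abstractmethod

class ChatModel(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAModel(ChatModel):
    def complete(self, prompt: str) -> str:
        return f"[vendor A answers: {prompt}]"  # stub for a real API call

class VendorBModel(ChatModel):
    def complete(self, prompt: str) -> str:
        return f"[vendor B answers: {prompt}]"  # stub for a real API call

def summarize(model: ChatModel, text: str) -> str:
    # The app never names a vendor; whichever model is injected gets used.
    return model.complete(f"Summarize: {text}")

# Swapping vendors as the cost/performance calculus shifts is one line:
print(summarize(VendorAModel(), "Q3 revenue grew 12% on strong demand."))
print(summarize(VendorBModel(), "Q3 revenue grew 12% on strong demand."))
```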

And with the emergence of state-of-the-art open models like Meta's Llama and Alibaba's Qwen, the threat of commoditization always looms. AI leaders like OpenAI and Anthropic cannot and will not stop investing in frontier models.

But next year, in order to develop lines of business that carry higher margins, greater differentiation, and stronger stickiness, expect the frontier labs to push hard on launching more of their own applications and products.

Of course, the frontier labs already have one hugely successful application: ChatGPT.

What other kinds of first-party applications might we see from the AI labs in the new year? One obvious answer is more sophisticated, feature-rich search applications; OpenAI's SearchGPT foreshadows this.

Coding is another obvious category. Here too, early productization efforts began with the October debut of OpenAI's Canvas product.

Will OpenAI or Anthropic launch an enterprise search product in 2025? Or a customer-service product, a legal AI product, or a sales AI product?

On the consumer side, we can imagine a "personal assistant" web agent product, a travel-planning application, or perhaps a music-generation application.

The most fascinating part of watching the frontier labs move up to the application layer is that it will put them in direct competition with many of their most important customers:

Perplexity in search, Cursor in coding, Sierra in customer service, Harvey in legal AI, Clay in sales, and so on.

09. Klarna will go public in 2025, but its claims about AI are exaggerated

Klarna is a Sweden-based buy now, pay later provider that has raised nearly $5 billion in venture capital since its founding in 2005.

Perhaps no company has talked a bigger game about its use of AI than Klarna.

Just days ago, Klarna CEO Sebastian Siemiatkowski told Bloomberg that the company has stopped hiring human employees altogether, relying on generative AI to do the work instead.

As Siemiatkowski put it: "I think AI can already do all of the jobs that we, as humans, do."

In a similar vein, Klarna announced earlier this year that it had launched an AI customer-service platform that fully automates the work of 700 human customer-service agents.

The company has also claimed that it has stopped using enterprise software products like Salesforce and Workday because it can simply replace them with AI.

To put it bluntly, these claims are not credible. They reflect a poor understanding of what today's AI systems can and cannot do.

The claim that an end-to-end AI agent can replace any given human employee, in any function across an organization, is not plausible. It would amount to having solved general human-level AI.

Today, leading AI startups at the frontier of the field are building agent systems to automate specific, narrow, highly structured enterprise workflows, for example a subset of the activities of a sales development representative or a customer-service agent.

Even in these narrowly scoped settings, agent systems do not yet work fully reliably, although in some cases they have begun to work well enough for early commercial adoption.

Why does Klarna exaggerate the value of artificial intelligence?

The answer is simple: the company plans to go public in the first half of 2025, and a compelling AI narrative is key to a successful IPO.

Klarna is still an unprofitable business, having lost $241 million last year, and it may be hoping that its AI story will convince public-market investors of its ability to dramatically cut costs and achieve durable profitability.

There is no doubt that every organization in the world, Klarna included, will enjoy enormous productivity gains from AI in the coming years. But many thorny technical, product, and organizational challenges must be solved before AI agents can wholesale replace humans in the workforce.

Exaggerated claims like Klarna's do a disservice to the field of AI, and to the hard-won progress that AI technologists and entrepreneurs are actually making toward building AI agents.

As Klarna prepares its public offering in 2025, expect these claims, which have so far gone largely unchallenged, to face greater scrutiny and public skepticism. Don't be surprised if some of the company's descriptions of its use of AI turn out to be overstated.

10. The first real AI safety incident will occur

In recent years, as AI has grown more powerful, concerns have mounted that AI systems might begin to act in ways misaligned with human interests, and that humans might lose control of these systems.

Imagine, for example, an AI system that learns to deceive or manipulate humans in pursuit of its own goals, even when those goals cause harm to humans. This cluster of concerns is generally labeled "AI safety."

In recent years, AI safety has moved from a fringe, quasi-science-fiction topic to a mainstream field of activity.

Today, every major AI player, from Google to Microsoft to OpenAI, devotes real resources to AI safety work, and AI icons like Geoff Hinton, Yoshua Bengio, and Elon Musk have become vocal about AI safety risks.

So far, though, AI safety concerns remain entirely theoretical. No real AI safety incident has ever occurred in the real world (at least none that has been publicly reported).

2025 will be the year this changes. What will the first AI safety incident look like?

To be clear, it will not involve Terminator-style killer robots, and it most likely will not cause any harm to humans at all.

Perhaps an AI model will attempt to covertly create copies of itself on another server in order to preserve itself (known as self-exfiltration).

Or perhaps an AI model will conclude that, to best advance the goals it has been given, it needs to conceal its true capabilities from humans, deliberately sandbagging its performance evaluations to escape closer scrutiny.

These examples are not far-fetched. Important experiments published by Apollo Research earlier this month showed that, under certain prompts, today's frontier models are capable of exactly this kind of deceptive scheming.

Similarly, recent research from Anthropic showed that LLMs have a disturbing capacity for "alignment faking."

We expect this first AI safety incident to be discovered and mitigated before any real harm is done. But it will be an eye-opening moment for the AI community and for society at large.

It will make one thing clear: long before humanity faces an existential threat from all-powerful AI, we will need to come to terms with a more mundane reality: we now share our world with another form of intelligence that can at times be willful, unpredictable, and deceptive.
