"MIT Technology Review" predicts: 5 major trends in AI development in 2025
Editor
2025-01-10 12:01

Source: Quantum

"MIT Technology Review" released five major trends in the development of artificial intelligence in 2025, and excluded agents and small language models on the grounds that this It is already obvious that the next big trend. Beyond that, here are five other hot trends you should be paying attention to this year, according to the outlet. Please read on.

For the past few years, we have been making predictions about the future of artificial intelligence. Considering how fast the industry moves, that may seem like a fool's errand. But we have kept at it and earned a reputation for forward-looking, reliable predictions.

How did our last round of predictions go? Among the four top trends we picked for 2024 were what we called customized chatbots, interactive assistant apps powered by multimodal large language models (we didn't know it yet, but what we were describing is what everyone now calls agents, the biggest buzz in artificial intelligence right now); generative video (few technologies have advanced so fast in the past 12 months, with OpenAI and Google DeepMind releasing their flagship video generation models, Sora and Veo, within a single week in December); and more general-purpose robots that can perform a wider range of tasks (the gains from large language models continue to trickle into other areas of the tech industry, and robotics is chief among them).

We also said that AI-generated election disinformation would be everywhere, but fortunately, we were wrong. There have been many things to worry about this year, but deepfakes have been rare.

So what happens in 2025? We’ll ignore the obvious here: what’s certain is that agents and smaller, more efficient language models will continue to shape the industry. Here are five more hot trends you should be paying attention to this year.

1. Generative Virtual Playgrounds

If 2023 was the year of generative images and 2024 the year of generative video, what comes next? If you guessed generative virtual worlds (a.k.a. video games), then give us a high five.

In February 2024, Google DeepMind released a generative model called Genie, which can take a static image and turn it into a side-scrolling two-dimensional platform game that players can interact with, giving people a first glimpse of the technology. In December, the company released Genie 2, a model that can turn an initial image into an entire virtual world.

Other companies are developing similar technologies. In October, AI startups Decart and Etched unveiled an unofficial hack of Minecraft in which every frame of the game is generated on the fly as the player plays. And World Labs, a startup co-founded by Fei-Fei Li, the famous artificial intelligence scientist known as the "godmother of artificial intelligence," is building what it calls large world models (LWMs). (Li is also the creator of ImageNet, the massive photo dataset that launched the deep learning craze.)

One obvious application area is video games. These early experiments have a playful feel, and generative 3D simulations could be used to explore design concepts for new games, turning sketches into playable environments on the fly. This could lead to entirely new kinds of games.

But they could also be used to train robots. World Labs hopes to develop so-called spatial intelligence, the ability of machines to interpret and interact with the everyday world. Robotics researchers lack the high-quality real-world data needed to train that kind of technology; building countless virtual worlds, placing virtual robots in them, and letting them learn through trial and error could make up for this shortfall.
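To make the trial-and-error idea concrete, here is a minimal sketch of such a loop against a generic, Gym-style simulation interface. The `VirtualWorldEnv` class, the action names, and the random reward are placeholders invented for this illustration; they are not drawn from World Labs, Genie, or any other system named above.

```python
import random

class VirtualWorldEnv:
    """Placeholder for a generated 3D world exposing a Gym-style interface."""

    def reset(self):
        # Return an initial observation (here just a dummy pose).
        return {"pose": (0.0, 0.0)}

    def step(self, action):
        # A real generated world would render the next frame and score progress;
        # here we fake both with random numbers.
        obs = {"pose": (random.random(), random.random())}
        reward = random.random()      # e.g. progress toward a goal
        done = reward > 0.95          # episode ends when the task succeeds
        return obs, reward, done

def collect_experience(env, episodes=10):
    """Let a (random) virtual robot act in simulation and log what happens.
    A learning algorithm would then update a policy from this experience."""
    experience = []
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            action = random.choice(["forward", "left", "right", "grasp"])
            next_obs, reward, done = env.step(action)
            experience.append((obs, action, reward, next_obs))
            obs = next_obs
    return experience

if __name__ == "__main__":
    data = collect_experience(VirtualWorldEnv())
    print(f"collected {len(data)} simulated transitions")
```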

2. Large language models that can "reason"

The buzz is justified. When OpenAI released o1 in September, it introduced a new paradigm for how large language models work. Two months later, the company pushed that paradigm forward in almost every way with the launch of o3, a model that could reshape the technology entirely.

Most models, including OpenAI's flagship GPT-4, spit out the first answer they come up with. Sometimes it's right; sometimes it's not. The company's new models, by contrast, are trained to work through problems step by step, breaking tough problems into a series of simpler ones. When one approach doesn't work, they try another. This technique, called "reasoning" (yes, we know exactly what that word means), can make the models more accurate, especially on math, physics, and logic problems.
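As a rough illustration of the pattern described above (break a problem into steps, check the result, and fall back to another approach when one fails), here is a toy sketch in Python. The strategies and the verifier below are invented for the example; this is not how o1 or o3 work internally.

```python
from typing import Callable, Optional

def solve_step_by_step(problem: dict,
                       strategies: list[Callable[[dict], Optional[float]]],
                       verify: Callable[[dict, float], bool]) -> Optional[float]:
    """Try one line of attack at a time and keep the first answer that
    passes verification, instead of returning the first thing produced."""
    for strategy in strategies:
        answer = strategy(problem)
        if answer is not None and verify(problem, answer):
            return answer          # keep only a verified answer
    return None                    # every approach failed

# Toy problem: average speed over two legs of a trip.
problem = {"d1": 60.0, "t1": 1.0, "d2": 120.0, "t2": 3.0}

def naive_average(p):              # tempting but wrong: average the two speeds
    return ((p["d1"] / p["t1"]) + (p["d2"] / p["t2"])) / 2

def total_over_total(p):           # correct: total distance over total time
    return (p["d1"] + p["d2"]) / (p["t1"] + p["t2"])

def verify(p, answer):             # check the answer against the definition
    return abs(answer * (p["t1"] + p["t2"]) - (p["d1"] + p["d2"])) < 1e-6

print(solve_step_by_step(problem, [naive_average, total_over_total], verify))
# The naive attempt fails verification; the fallback returns 45.0.
```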

This is also crucial for agents.

In December, Google DeepMind released an experimental new web-browsing agent called Mariner. In a preview demo provided by the company, Mariner hit a snag. Megha Gore, a product manager at the company, asked the agent to find her a recipe for Christmas cookies that looked like the ones in a photo she had given it. Mariner found a recipe online and began adding the ingredients to Gore's online shopping cart.

Then it stopped because it did not know which flour to choose. Gore watched as Mariner explained its steps in the chat window: "It said, 'I will use the browser's back button to return to the recipe.'"

It was a remarkable moment. Instead of hitting a wall, the agent had broken the task into separate actions and picked one that was likely to solve the problem. Figuring out that you need to click the "back" button may sound simple, but to a mindless bot it is rocket science. And it worked: Mariner went back to the recipe, confirmed the type of flour, and continued loading Gore's cart.
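The anecdote maps onto a familiar agent loop: look at the current state, score a handful of candidate actions, and take the most promising one. Below is a hand-written toy version of that loop; the page contents, action names, and scoring rules are invented for illustration and have nothing to do with Mariner's actual implementation.

```python
def choose_action(goal: str, page_text: str, history: list[str]) -> str:
    """Score a few candidate actions and return the most promising one."""
    candidates = {
        "answer_from_page": 1.0 if goal.split()[0] in page_text else 0.0,
        "go_back":          0.8 if history else 0.0,   # the browser Back button
        "give_up":          0.1,
    }
    return max(candidates, key=candidates.get)

def run_agent(goal: str, page: str, history: list[str], pages: dict[str, str]) -> str:
    for _ in range(10):                                # hard step limit
        action = choose_action(goal, pages[page], history)
        if action == "answer_from_page":
            return f"found '{goal}' on page '{page}'"
        if action == "go_back":
            page = history.pop()                       # return to a page seen earlier
        else:
            return "gave up"
    return "gave up"

pages = {
    "recipe": "Christmas cookies: all-purpose flour, sugar, butter",
    "cart":   "your shopping cart: 3 items, proceed to checkout",
}
# The agent is stuck on the cart page but remembers having visited the recipe.
print(run_agent("flour type", "cart", ["recipe"], pages))
# -> found 'flour type' on page 'recipe'
```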

Google DeepMind is also building an experimental version of its latest large language model, Gemini 2.0, that uses this step-by-step approach to problem solving; it is called Gemini 2.0 Flash Thinking.

But OpenAI and Google are just the tip of the iceberg. Many companies are building large language models with similar techniques to make them better at tasks ranging from cooking to programming. Expect a lot more talk about reasoning this year (we know, we know).

3. Boom times for artificial intelligence in science

One of the most exciting uses of artificial intelligence is accelerating discovery in the natural sciences. Perhaps the greatest demonstration of AI's potential here came last October, when the Royal Swedish Academy of Sciences awarded the Nobel Prize in Chemistry to Demis Hassabis and John M. Jumper of Google DeepMind for developing the AlphaFold tool, which can solve protein folding, and to David Baker for developing tools that help design new proteins.

This trend is expected to continue this year, with the emergence of more data sets and models dedicated to scientific discovery. Proteins are a perfect target for AI because the field has excellent existing data sets that can be used to train AI models.

People are looking for the next big thing, and one promising area is materials science. Meta has released massive datasets and models that could help scientists use artificial intelligence to discover new materials faster. In December, Hugging Face partnered with the startup Entalpic to launch LeMaterial, an open-source project designed to simplify and accelerate materials research. Their first project is a dataset that unifies, cleans, and standardizes the most important materials datasets.
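That "unify, clean, and standardize" step is easy to picture in code. The sketch below harmonizes two hypothetical materials tables with different schemas and units using pandas; the column names and values are invented for illustration and are not taken from LeMaterial.

```python
import pandas as pd

# Two hypothetical source datasets describing the same property
# (a material's band gap) with different column names and units.
source_a = pd.DataFrame({"formula": ["Si", "GaAs", "Si"],
                         "band_gap_eV": [1.12, 1.42, 1.12]})
source_b = pd.DataFrame({"composition": ["NaCl", "GaAs"],
                         "gap_meV": [5000.0, 1424.0]})

def standardize(df, formula_col, gap_col, to_eV):
    """Map a source-specific schema onto one shared schema and unit."""
    return pd.DataFrame({
        "formula": df[formula_col].str.strip(),
        "band_gap_eV": df[gap_col] * to_eV,
    })

unified = pd.concat([
    standardize(source_a, "formula", "band_gap_eV", to_eV=1.0),
    standardize(source_b, "composition", "gap_meV", to_eV=1e-3),
])

# Clean: drop exact duplicates, then average conflicting values per formula.
unified = (unified.drop_duplicates()
                  .groupby("formula", as_index=False)["band_gap_eV"].mean())
print(unified)
```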

AI model makers are also keen to use their generative products as research tools for scientists. OpenAI lets scientists test its latest o1 model to see what it can do for their research. The results are encouraging.

Having an artificial intelligence tool that works the way a scientist does is one of the tech world's dreams. In an essay published last October, Anthropic founder Dario Amodei singled out science, and biology in particular, as one of the key areas where powerful artificial intelligence could help. Amodei speculates that in the future, AI could be not just a method of data analysis but a "virtual biologist performing all the tasks of a biologist." We are still a long way from that vision, but this year we may see a significant step toward it.

4. AI companies grow closer to national security

There is a lot of money to be made by AI companies willing to lend their tools to border surveillance, intelligence gathering, and other national security tasks.

The U.S. military has launched a series of programs that show how eager it is to adopt artificial intelligence, from the Replicator program, a roughly $1 billion (approximately 7.3 billion yuan) commitment to small drones inspired by the war in Ukraine, to the Artificial Intelligence Rapid Capabilities Cell, a unit that brings AI into everything from battlefield decision-making to logistics. European militaries are under pressure to increase their spending on technology amid concerns that Donald Trump will cut support for Ukraine. Rising regional tensions also weigh on the minds of military planners.

In 2025, these trends will continue to be a boon to defense technology companies such as Palantir and Anduril, which are currently using classified military data to train artificial intelligence models.

The defense industry's deep pockets will also tempt mainstream artificial intelligence companies to join in. In December, OpenAI announced it would partner with Anduril on a program to shoot down drones, completing a year-long shift away from its policy of not working with the military. It joins Microsoft, Amazon, and Google, which have been working with the Pentagon for years.

These AI companies and their competitors are spending billions of dollars to train and develop new models, and in 2025 they will face greater pressure to take revenue seriously. They may well find enough non-defense customers willing to pay top dollar for AI agents that can handle complex tasks, or enough creative industries willing to spend on image and video generation tools.

But they will also be increasingly tempted by lucrative Pentagon contracts. Whether taking part in defense projects will be seen as contradicting their own values is a difficult question these companies will face. OpenAI's rationale for changing its stance is that "democracies should continue to lead the development of artificial intelligence," the company wrote, arguing that lending its models to the military would advance that goal. In 2025, we will see other companies follow its lead.

5. Nvidia sees competition coming

For much of the current AI boom, if you were a tech startup trying to build an AI model, Jen-Hsun Huang was the person to see. As CEO of chip giant Nvidia, Huang helped the company become the undisputed leader in chips used both to train artificial intelligence models and to run "inference" when someone uses those models.

In 2025, several forces may change this. First, rivals such as Amazon, Broadcom, AMD, and Apple have been investing heavily in new chips, and there are early signs that these could compete fiercely with Nvidia's, especially for inference, where Nvidia's lead is less solid.

More and more startups are also attacking Nvidia from different angles. Rather than trying to make marginal improvements on Nvidia's designs, startups like Groq are making riskier bets on entirely new chip architectures that, given enough time, promise more efficient or more effective training. In 2025 these experiments will still be in their early stages, but a prominent competitor could emerge and change the assumption that top AI models rely solely on Nvidia chips.

Underpinning this competition is the geopolitical battle over chips. So far, that battle has relied mainly on two strategies. On the one hand, the West is trying to restrict the export of top chips, and the technology to make them, to its competitors. On the other, initiatives such as the US CHIPS Act aim to boost semiconductor production in the United States.

Donald Trump is likely to escalate these export controls, and he has promised to impose massive tariffs on all imports from competitors. In 2025, such tariffs would put TSMC, on which U.S. chipmakers rely heavily, at the center of a trade war.

It's unclear how these factors will play out, but they will only give chipmakers more incentive to reduce their reliance on TSMC, which is the whole point of the CHIPS Act. As spending from the act begins to flow, this year will offer the first real look at whether it materially boosts U.S. chip production.
