IOSG|The convergence of games, AI agents and cryptocurrencies

Author|Sid @IOSG

The current state of Web3 games

As newer, more attention-grabbing narratives emerge, Web3 gaming as an industry has taken a back seat in both primary and public market narratives. According to Delphi's 2024 gaming industry report, cumulative primary-market financing for Web3 games is less than US$1 billion. This is not necessarily a bad thing: it shows that the bubble has subsided, and current capital may be flowing toward higher-quality games. The following figure is a telling indicator:

Throughout 2024, user numbers in Ronin and other game ecosystems soared significantly, and thanks to the emergence of high-quality new games such as Fableborn, activity is almost comparable to Axie's glory days in 2021.

Game ecosystems (L1s, L2s, RaaS) are becoming more and more like a Web3 Steam: they control distribution within the ecosystem, which becomes an incentive for developers to build games there, because it helps them acquire players. According to their earlier reports, user acquisition costs for Web3 games are approximately 70% higher than for Web2 games.

Player stickiness

Retaining players is as important as attracting them, if not more so. Although data on player retention in Web3 games is scarce, retention is closely tied to the concept of "flow," a term coined by Hungarian psychologist Mihaly Csikszentmihalyi.

"Flow state" is a psychological concept in which players achieve a perfect balance between challenge and skill level. It's like "getting in the zone" - time seems to fly by and you're completely immersed in the game.

Games that consistently create flow states tend to have higher retention rates thanks to the following mechanisms:

#Progression design

Early game: simple challenge, build confidence

Mid-game: gradually increasing difficulty

Late-game: complex challenges, mastering the game

As players improve their skills, this fine-grained difficulty adjustment keeps them progressing at their own pace
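The progression curve above can be automated with dynamic difficulty adjustment. Below is a minimal, illustrative Python sketch (the class, thresholds, and multipliers are hypothetical, not taken from any specific game) that nudges the challenge level so a player's recent win rate stays inside a "flow" band:

```python
from collections import deque

class FlowDifficultyAdjuster:
    """Toy dynamic-difficulty controller: keeps the player's recent
    success rate inside a target 'flow' band by nudging difficulty."""

    def __init__(self, target_low=0.4, target_high=0.6, window=10):
        self.results = deque(maxlen=window)  # recent wins (1) / losses (0)
        self.target_low = target_low
        self.target_high = target_high
        self.difficulty = 1.0                # arbitrary starting level

    def record(self, won: bool):
        self.results.append(1 if won else 0)

    def adjust(self) -> float:
        if not self.results:
            return self.difficulty
        win_rate = sum(self.results) / len(self.results)
        if win_rate > self.target_high:      # too easy -> boredom
            self.difficulty *= 1.1
        elif win_rate < self.target_low:     # too hard -> anxiety
            self.difficulty *= 0.9
        return self.difficulty
```

A real implementation would adjust concrete knobs (enemy HP, spawn rates, matchmaking ratings) rather than a single scalar, but the control loop is the same idea.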

#Engagement loops

Short-term: immediate feedback (kills, points, rewards)

Mid-term: level completion, daily tasks

Long-term: character development, ranking

These nested loops maintain player interest over different time frames

#Factors that break the flow state:

1. Improper difficulty/complexity settings: this may stem from poor game design, or even from imbalanced matchmaking due to an insufficient player count

2. Unclear goals: a game design issue

3. Delayed feedback: caused by game design and technical issues

4. Intrusive monetization: a game design and product issue

5. Technical issues/lag

The symbiosis of games and AI


AI agents can help players achieve this flow state. Before discussing how, let us first look at which kinds of agents are suited to games:

LLM and reinforcement learning


Game AI is all about speed and scale. When using LLM-driven agents in games, every decision requires a call to a massive language model. It is like having a middleman before every step: the middleman is smart, but waiting for his response makes everything slow and painful. Now imagine doing this for hundreds of characters in a game. Not only is it slow, it is also expensive. This is the main reason we have not yet seen large-scale LLM agents in games. The largest experiment so far is a 1,000-agent civilization built in Minecraft. Running 100,000 concurrent agents across different maps would be extremely expensive, and players would suffer as each new agent adds lag. That destroys the flow state.
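To see why per-decision LLM calls do not scale, a back-of-envelope cost model helps. The function below and every figure in the example are illustrative assumptions, not measured numbers:

```python
def llm_npc_cost_per_hour(n_agents: int, decisions_per_min: float,
                          tokens_per_call: int,
                          usd_per_1k_tokens: float) -> float:
    """Rough hourly cost of driving NPCs with one LLM call per decision.

    All inputs are hypothetical; real pricing and token counts vary by
    model and prompt design.
    """
    calls_per_hour = n_agents * decisions_per_min * 60
    tokens_per_hour = calls_per_hour * tokens_per_call
    return tokens_per_hour / 1000 * usd_per_1k_tokens

# e.g. 1,000 NPCs, 2 decisions/min each, 500 tokens/call, $0.002 per 1k tokens
cost = llm_npc_cost_per_hour(1000, 2, 500, 0.002)  # $120/hour
```

Even with cheap models, the cost grows linearly with agent count and decision frequency, and latency compounds on top of it.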

Reinforcement learning (RL) takes a different approach. Think of it as training a dancer, rather than shouting step-by-step instructions through a headset. With RL, you spend time upfront teaching the AI how to "dance" and how to respond to different situations in the game. Once trained, the AI flows naturally, making decisions in milliseconds without any upstream requests. You can have hundreds of these trained agents running in your game, each making independent decisions based on what they see and hear. They are not as articulate or flexible as LLM agents, but they act quickly and efficiently.

The real magic of RL appears when you need these agents to work together. Where LLM agents require lengthy "conversations" to coordinate, RL agents develop an implicit rapport during training, like a football team that has practiced together for months. They learn to anticipate each other's movements and coordinate naturally. This is not perfect; they sometimes make mistakes that LLMs would not. But they can operate at a scale LLMs cannot match, and for gaming applications that trade-off almost always makes sense.
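The "train once, decide locally" idea can be sketched in a few lines. In the toy Python example below, the trained policy is just a lookup table of action probabilities (standing in for a small neural network exported after RL training); every state and action name is invented for illustration. The point is that inference is a local operation with no network round-trip, so hundreds of agents can decide every tick:

```python
import random

# A 'trained' policy reduced to a table: observed state -> action weights.
# In practice this would be a small exported network, but either way
# inference is local and takes microseconds, not an API round-trip.
POLICY = {
    "enemy_near": {"attack": 0.7, "retreat": 0.2, "idle": 0.1},
    "enemy_far":  {"patrol": 0.6, "idle": 0.4},
    "low_health": {"retreat": 0.8, "attack": 0.2},
}

def act(state: str, rng: random.Random) -> str:
    """Sample one action for an agent from its policy distribution."""
    actions, weights = zip(*POLICY[state].items())
    return rng.choices(actions, weights=weights, k=1)[0]

# Hundreds of agents each take a decision in one tick, no external calls.
rng = random.Random(42)
tick_actions = [act("enemy_near", rng) for _ in range(500)]
```

Swapping the table for a real policy network changes the lookup, not the architecture: decisions stay on the game server.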


Agents and NPCs

Agents as NPCs solve the first core problem facing many games today: player liquidity. P2E was the first experiment in using crypto-economics to solve player liquidity, and we all know how that turned out.

Pre-trained agents serve two purposes:

Populating worlds in multiplayer games

Maintaining an appropriate difficulty level for the players in that world, keeping them in a flow state

While this may seem obvious, it is difficult to build. Indie games and early Web3 games lack the financial resources to hire AI teams, which creates an opportunity for any agent-framework service provider with RL at its core.

Games can work with these service providers during trials and testing to lay the foundation for player liquidity at launch.

This lets game developers focus on the mechanics that make their games fun. As much as we love integrating tokens into games, games are games, and games should be fun.

Agent players


League of Legends, one of the most played games in the world, has a black market where players pay to have their characters trained to the best attributes, even though the game prohibits it.

This makes game characters and their attributes a natural basis for NFTs, with a marketplace created to enable such trading.

What if a new subset of "players" emerged as coaches for these AI agents? Players could coach AI agents and monetize them in different ways, such as winning tournaments, or offer them as "training partners" for esports players and passionate gamers.


The return of the Metaverse?

Early versions of the metaverse may have missed the mark by simply creating an alternate reality rather than an ideal one. AI agents can help metaverse residents create an ideal world: an escape.

In my opinion, this is where LLM-based agents come in handy. Someone could populate their world with pre-trained agents that are domain experts and can hold conversations about topics they like. If I create an agent trained on 1,000 hours of Elon Musk interviews and users want instances of that agent in their worlds, I can get rewarded for it. This creates a new economy.

With metaverse games like Nifty Island, this can become a reality.

In Today: The Game, the team has created an LLM-based AI agent named "Limbo" (which has released a speculative token). The vision is a world where multiple agents interact autonomously while we watch a 24/7 live stream.

How does Crypto integrate with it?

Crypto can help solve these problems in different ways:

Players contribute their own game data to improve the models, get a better experience, and earn rewards for it

Coordinating multiple stakeholders, such as character designers and trainers, to create the best in-game agents

Creating a marketplace for in-game agent ownership and monetization
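The last point, a marketplace for agent ownership, can be illustrated with a toy in-memory sketch. In practice this logic would live in a smart contract; the class, stakeholder roles, and revenue shares below are all hypothetical:

```python
class AgentMarketplace:
    """Toy in-memory model of in-game agent ownership and revenue
    sharing among stakeholders (designer, trainer, owner)."""

    def __init__(self):
        self.owners = {}     # agent_id -> current owner
        self.royalties = {}  # agent_id -> {stakeholder: revenue share}

    def mint(self, agent_id: str, owner: str, shares: dict):
        """Register an agent with its owner and revenue-share split."""
        assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
        self.owners[agent_id] = owner
        self.royalties[agent_id] = dict(shares)

    def distribute(self, agent_id: str, earnings: float) -> dict:
        """Split tournament winnings or rental fees among stakeholders."""
        return {who: earnings * share
                for who, share in self.royalties[agent_id].items()}

m = AgentMarketplace()
m.mint("agent-1", "alice", {"designer": 0.2, "trainer": 0.3, "owner": 0.5})
payout = m.distribute("agent-1", 100.0)
```

On-chain, `mint` would issue an NFT and `distribute` would be triggered by verified match results; the coordination problem it solves is the same.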

One team is doing all of this and more: ARC Agents. They are addressing every issue mentioned above.

They offer the ARC SDK, which allows game developers to create human-like AI agents based on game parameters. With a very simple integration, it solves player liquidity issues, cleans game data and turns it into insights, and helps players stay in flow by adjusting difficulty levels. To do this, they use reinforcement learning (RL).

They originally developed a game called AI Arena, where you train your AI character to fight. This helped them build a baseline learning model that forms the basis of the ARC SDK, creating a DePIN-like flywheel:

Their ecosystem token, $NRN, acts as the coordinator for all of this. The Chain of Thought team explains this well in their article on ARC Agents:

Games like Bounty are taking an agent-first approach, building agents from scratch in a wild west world.

Conclusion

The fusion of AI agents, game design, and crypto is not just another technology trend; it has the potential to solve the many problems that plague indie games. The beauty of AI agents in gaming is that they enhance what makes games fun: good competition, rich interaction, and endless challenge. As frameworks such as ARC Agents mature and more games integrate AI agents, we may well see entirely new gaming experiences emerge. Imagine worlds that are vibrant not because of other players, but because the agents within them learn and evolve with the community.

We are moving from the "play-to-earn" era to a more exciting one: games that are both truly fun and infinitely scalable. The next few years will be exciting for developers, players, and investors in this space. The games of the coming years will not only be more technologically advanced, they will be fundamentally more engaging and alive than anything we have seen before.