Author: Archetype

The transparency and composability of blockchain make it an ideal substrate for interaction between agents. Agents developed by different entities for different purposes can interact with one another seamlessly. Many experimental inter-agent applications have already appeared, such as fund transfers between agents and jointly launched tokens. We look forward to seeing how inter-agent interaction can be expanded further, both by creating entirely new application areas (e.g., new social scenarios driven by agent interactions) and by improving currently cumbersome enterprise workflows, such as platform authentication and verification, micropayments, and cross-platform workflow integration.
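To make the authentication-and-micropayments idea concrete, here is a minimal, purely illustrative sketch of one agent verifying another agent's signed payment intent. It uses an HMAC over a canonicalized message as a stand-in for the asymmetric signatures a real on-chain system would use; every name and field here is a hypothetical example, not part of any actual protocol.

```python
import hashlib
import hmac
import json

def sign_intent(secret: bytes, intent: dict) -> str:
    """Canonicalize the intent (sorted keys) and compute an authentication tag.

    HMAC with a shared secret is only a stand-in here for the on-chain
    signature scheme a real agent payment system would use.
    """
    payload = json.dumps(intent, sort_keys=True).encode()
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify_intent(secret: bytes, intent: dict, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_intent(secret, intent), tag)

# Hypothetical payment intent from one agent to another.
secret = b"demo-shared-secret"
intent = {"from": "agent-a", "to": "agent-b", "amount": 5, "nonce": 1}
tag = sign_intent(secret, intent)
```

A receiving agent would accept the intent only if `verify_intent` passes, so any tampering with the amount (or replaying with a changed nonce) invalidates the tag.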
— Danny, Katie, Aadharsh, Dmitriy
2. Decentralized agent organizations

The coordination of large-scale multi-agent systems is another exciting area of research. How do multi-agent systems collaborate to complete tasks, solve problems, and govern systems and protocols? In his early-2024 article "The Promise and Challenges of Crypto + AI Applications," Vitalik proposed using AI agents for prediction markets and arbitration, arguing that multi-agent systems have extraordinary "truth"-discovery capabilities and autonomous-governance potential when operating at scale. We look forward to seeing how the potential of "multi-agent systems" and "swarm intelligence" can be further explored and tested.
As an extension of inter-agent coordination, agent-human coordination also offers an interesting design space: how communities interact around agents, and how agents organize humans for collective action. We look forward to more experiments, especially with agents whose objective functions involve large-scale human coordination. This will require some kind of verification mechanism, especially if the human work is done off-chain, but it could produce some strange and interesting emergent behavior.
— Katie, Dmitriy, Ash
3. Intelligent multimedia entertainment

The concept of digital personas has been around for decades. Hatsune Miku (2007) sold out a 20,000-seat concert, and virtual internet celebrity Lil Miquela (2016) has over 2 million followers on Instagram. Newer examples include AI virtual streamer Neuro-sama (2022), who has amassed more than 600,000 subscribers on Twitch, and the anonymous K-pop virtual boy band PLAVE (2023), which has accumulated over 300 million views on YouTube in less than two years.
As AI infrastructure advances and blockchain is integrated for payments, value transfer, and open data platforms, we look forward to seeing how these agents evolve, become more independent, and perhaps even unlock a new mainstream entertainment category in 2025.
— Katie, Dmitriy
4. Generative/agentic content marketing

In the previous category, the agent itself was the product; here, the agent supplements a product. In the attention economy, consistently producing engaging content is critical to the success of any idea, product, or company. Generative/agentic content is a powerful tool teams can use to build scalable, 24/7 content production pipelines. Development in this area has been driven by discussions around what differentiates memecoins from agents. Even if current memecoins are not yet "intelligent" in the strict sense, agents have become an important tool for them to gain distribution.
Another example: games often need to become more dynamic to maintain user engagement. A classic way to create dynamism in games is to cultivate user-generated content; purely generative content (including in-game items, NPCs, and even fully generated game levels) is likely the next stage of this evolution. We are curious how far the boundaries of traditional distribution strategies can be pushed by agent capabilities in 2025.
— Katie
5. Next-generation art tools/platforms

In 2024, we launched IN CONVERSATION WITH, a series of interviews with crypto artists in music, visual arts, design, curation, and more. This year's interviews led me to a key observation: artists interested in crypto tend to have a broad interest in cutting-edge technologies generally, and tend to make those technologies the core or aesthetic focus of their practice, such as AR/VR objects, code art, live coding, and more.
Generative art has long had a natural synergy with blockchain, which also makes blockchain a clear potential carrier for AI art. These artistic media are extremely difficult to display and present on traditional art platforms. ArtBlocks offers a window into how blockchain could be used in the future to present, store, monetize, and protect digital artworks, while improving the overall experience for artists and audiences.
Beyond presentation, AI tools have expanded ordinary people's ability to create art. We look forward to seeing how blockchain can further extend or support these tools in 2025 to empower art creators and enthusiasts.
— Katie
6. Data markets

In the 20 years since Clive Humby coined the phrase "data is the new oil," companies have taken aggressive measures to monopolize and monetize user data. Users now realize that their data is the foundation on which these multi-billion-dollar businesses are built, yet they have very little control over it and almost no share of the profits it generates. As powerful AI models develop at an accelerating pace, this tension becomes increasingly critical. If one part of the opportunity in data markets is reducing the exploitation of user data, the other is solving the data-supply shortage: increasingly powerful AI models are depleting the easily accessible data on the internet, and new sources are urgently needed.
As for how decentralized infrastructure can return control of data to users, the design space is broad, and innovative solutions are urgently needed across multiple fields. Some of the most pressing challenges include:
• Where data is stored and how privacy is protected (during storage, transmission, and computation)
• How to objectively assess, filter, and measure data quality
• Mechanisms for data attribution and monetization (especially tracing value back to its sources after inference)
• How to organize or retrieve data in a diverse ecosystem of models
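On the attribution point, one building block is content addressing: if each contribution is identified by the hash of its content, value can later be traced back to the contributor. The toy ledger below sketches that idea; the record format and class names are hypothetical illustrations, not any existing protocol.

```python
import hashlib

def content_id(data: bytes) -> str:
    """Content-address a data contribution by its SHA-256 digest."""
    return hashlib.sha256(data).hexdigest()

class AttributionLedger:
    """Toy in-memory ledger mapping content hashes to contributors."""

    def __init__(self) -> None:
        self.records: dict[str, str] = {}

    def contribute(self, contributor: str, data: bytes) -> str:
        cid = content_id(data)
        # First writer wins: later duplicates do not overwrite attribution.
        self.records.setdefault(cid, contributor)
        return cid

    def attribute(self, data: bytes):
        """Return the recorded contributor for this exact content, if any."""
        return self.records.get(content_id(data))
```

In a real system the ledger would live on-chain and payouts would flow to the attributed contributors; the hard open problems (partial influence of data on an inference result, near-duplicate content) are exactly what the bullets above describe.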
When it comes to solving the data-supply bottleneck, the key is not just to replicate existing data-annotation platforms (such as Scale AI) with tokens, but to understand where technical advantages can create competitive solutions, superior in scale, quality, and incentive design, that produce high-value data products. Especially given that demand comes mainly from Web2 AI, thinking about how to combine smart-contract execution with traditional service-level agreements (SLAs) and tooling is an important area worth attention.
— Danny
7. Decentralized computing power

If data is one basic building block of AI development and deployment, computing power is another. Over the past few years, the trajectory of deep learning and AI has largely been defined by an old paradigm dominated by large data centers: exclusive access to specific sites, energy sources, and hardware. However, this dynamic is being challenged as physical limitations emerge and open-source technology evolves.
The v1 of decentralized AI compute looked like a replica of the Web2 GPU cloud, with no real supply advantage (in hardware or data centers) and little natural demand. In v2, some teams are building competitiveness through capabilities such as orchestration, routing, and pricing of heterogeneous high-performance computing (HPC) resources, while introducing proprietary features to attract demand and contend with margin compression, especially on inference tasks. Teams have also begun to differentiate around application scenarios and go-to-market (GTM) strategies: some focus on using compiler frameworks to improve inference-routing efficiency across diverse hardware, while others build distributed model-training frameworks on top of the compute networks they have created.
We have even begun to see the outlines of an AI-Fi market, with new economic primitives that turn compute and GPUs into yield-bearing assets, or that use on-chain liquidity to give data centers an alternative source of capital for acquiring hardware. A key question: to what extent will decentralized AI (DeAI) rely on decentralized compute for development and deployment? Or will it, like the storage market, never close the gap between the ideal and actual demand, and fail to realize the idea's full potential?
— Danny
8. Compute accounting standards

Related to incentivizing decentralized high-performance compute networks, a major challenge in coordinating heterogeneous compute is the lack of an agreed-upon set of compute accounting standards. The output characteristics of AI models add complexity to the HPC market: different model variants, quantization techniques, and randomness tunable via temperature and sampling hyperparameters. In addition, differences in AI hardware (such as GPU architecture and CUDA version) produce further output divergence. Ultimately, this requires standards for how models and compute markets account for their capabilities in heterogeneous distributed systems.
Partly due to this lack of standards, this year we saw multiple cases, in both Web2 and Web3, where models and compute markets failed to accurately account for the quality and quantity of their compute. This forced users to run their own model benchmarks, audit by comparing performance results, and even verify real performance through proof-of-work-style workload checks on compute markets.
Given crypto's core principle of verifiability, we hope that in 2025 the combination of crypto and AI will be more verifiable than traditional AI. Specifically, average users should be able to make apples-to-apples comparisons of a model's or a compute cluster's output in order to audit and benchmark system performance.
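The apples-to-apples idea can be sketched very simply: with sampling disabled (greedy decoding, fixed seed), two honest providers serving the same model should produce byte-identical outputs on a shared benchmark, so comparing a single digest over the outputs is enough to detect a silently swapped variant. The `run_inference` function below is a deterministic stub standing in for a real model call; the model identifiers are invented for illustration.

```python
import hashlib
import json

def run_inference(model_id: str, prompt: str) -> str:
    """Stub standing in for a deterministic (greedy-decoded) model call.

    A real audit would call the provider's API with temperature 0 and a
    fixed seed; here we just derive a stable fake output per (model, prompt).
    """
    return hashlib.sha256(f"{model_id}:{prompt}".encode()).hexdigest()[:16]

def benchmark_digest(model_id: str, prompts: list[str]) -> str:
    """Hash the full list of benchmark outputs into one comparable digest."""
    outputs = [run_inference(model_id, p) for p in prompts]
    return hashlib.sha256(json.dumps(outputs).encode()).hexdigest()

prompts = ["2+2=", "What is the capital of France?"]
```

Two providers claiming the same model should yield equal digests, while a quietly quantized variant (here, a hypothetical "llm-v1-int4") yields a different one. Real systems also need to handle legitimately nondeterministic hardware-level differences, which is precisely why the accounting standards discussed above are needed.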
— Aadharsh
9. Probabilistic privacy primitives

In "The Promise and Challenges of Crypto + AI Applications," Vitalik raised a unique challenge for the intersection of crypto and AI:
"In cryptography, open source is the only way to make something truly secure, but in AI, a model (or even its training data) being open greatly increases its vulnerability to adversarial machine learning attacks."
Although privacy is not a new research field in blockchain, the rapid development of AI will further accelerate the research and application of privacy-enhancing technologies. This year has seen significant progress in zero-knowledge proofs (ZK), fully homomorphic encryption (FHE), trusted execution environments (TEEs), and secure multi-party computation (MPC), which enable privacy-preserving shared computation over encrypted data for general-purpose applications. Meanwhile, centralized AI giants such as NVIDIA and Apple are using proprietary TEE technology to enable federated learning and private AI inference across systems with uniform hardware, firmware, and models.
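To give a flavor of what MPC provides, here is a minimal sketch of additive secret sharing, the basic building block behind many MPC protocols: each party splits its private value into random shares modulo a prime, so no single share reveals anything, yet the parties can jointly reconstruct the sum. This is a single-process simulation for illustration only; real MPC involves networked parties and malicious-security machinery far beyond this.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime used as the working modulus

def share(value: int, n_parties: int) -> list[int]:
    """Split `value` into n additive shares mod P.

    The first n-1 shares are uniformly random; the last is chosen so
    that all shares sum to `value` mod P. Any proper subset of shares
    is statistically independent of the secret.
    """
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def secure_sum(private_values: list[int]) -> int:
    """Simulate n parties jointly computing the sum of their inputs."""
    n = len(private_values)
    all_shares = [share(v, n) for v in private_values]
    # Party i locally adds up the i-th share received from every party...
    partials = [sum(column) % P for column in zip(*all_shares)]
    # ...and only the combined partial sums reveal the aggregate.
    return sum(partials) % P
```

Each party only ever sees random-looking shares and one partial sum, yet `secure_sum([10, 20, 12])` reconstructs 42; the same additive structure underlies private aggregation in federated learning, one of the application areas mentioned above.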
In view of this, we will pay close attention to developments in the following areas: how to maintain privacy across random state transitions, and how these advances can accelerate real-world decentralized AI applications on heterogeneous systems, including decentralized private inference, storage and access pipelines for encrypted data, and fully autonomous execution environments.
— Aadharsh
10. Agent intents and next-generation user transaction interfaces

Using AI agents to transact autonomously on-chain is currently one of the most plausible potential use cases. However, over the past 12-16 months, definitions have blurred around concepts such as "intents," "agentic behavior," "agent intents," "solvers," and "agentic solvers," and around how they differ from conventional "trading bot" development.
Over the next 12 months, we expect to see increasingly sophisticated language systems combined with other data types and neural network architectures, advancing the overall design space.
• Will agents use existing on-chain systems to transact, or develop their own tools and methods?
• Will large language models (LLMs) remain the backend of these agentic trading systems, or will entirely different systems emerge?
• At the user-interface level, will users begin transacting via natural language?
• Will the long-held "wallets as browsers" thesis finally materialize?
These issues will be the focus of our attention.
— Danny, Katie, Aadharsh, Dmitriy