Author: Haotian
A friend asked: is the continued slide of web3 AI Agent tokens such as #ai16z and $arc being caused by the recently popular MCP protocol? At first glance I was confused: WTF, what does one have to do with the other? But after thinking it through, I found there really is a logic here: the valuation and pricing logic of web3 AI Agents has changed, and the narrative direction and product roadmap urgently need adjustment. Here is my personal take:
1) MCP (Model Context Protocol) is an open-source standardized protocol designed to seamlessly connect all kinds of AI LLMs/Agents to all kinds of data sources and tools. It works like a plug-and-play "universal" USB port, replacing the bespoke end-to-end integrations of the past.
Simply put, AI applications used to sit in obvious data silos: for Agents/LLMs to interoperate, every pair needed its own API integration. The process was complicated, two-way interaction was lacking, and model access and permissions were usually quite limited.
MCP amounts to a unified framework that frees AI applications from those data silos and makes "dynamic" access to external data and tools possible, significantly reducing development complexity and improving integration efficiency, especially for automated task execution, real-time data queries, and cross-platform collaboration. At this point many people's minds jumped straight to: if Manus integrated MCP, this open-source framework that promotes multi-agent collaboration, wouldn't it be unstoppable?
Exactly: Manus + MCP is the key to this round of impact on web3 AI Agents.
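To make the "universal USB port" point concrete, here is a minimal sketch of an MCP tool server using the official TypeScript SDK (@modelcontextprotocol/sdk). The tool name get_token_price and its data source are hypothetical, and the SDK surface may have shifted since this was written, so treat it as illustrative rather than canonical:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// One server, one standardized interface: any MCP-aware client (Claude
// Desktop, an Agent runtime, etc.) can discover and call this tool with
// no bespoke per-pair integration.
const server = new McpServer({ name: "price-feed", version: "0.1.0" });

// Hypothetical tool: return a token price. What matters is that the input
// schema and the call convention are standardized by the protocol.
server.tool(
  "get_token_price",
  { symbol: z.string().describe("Token ticker, e.g. ETH") },
  async ({ symbol }) => {
    const price = await fetchPrice(symbol); // placeholder data source
    return { content: [{ type: "text" as const, text: `${symbol}: $${price}` }] };
  }
);

// Placeholder; a real server would query an exchange or an oracle here.
async function fetchPrice(symbol: string): Promise<number> {
  return 0;
}

// stdio transport: the host process launches this server and speaks
// JSON-RPC 2.0 with it over stdin/stdout.
await server.connect(new StdioServerTransport());
```

The payoff is the integration math: instead of every client hand-rolling an adapter for every service (N × M integrations), each service ships one MCP server and each client one MCP client (N + M).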
2) But here is the irony: Manus and MCP are both frameworks and protocol standards for web2 LLMs/Agents. They solve data interaction and collaboration between centralized servers, and their permission and access control still depend on each server node "voluntarily" opening up. In other words, they are merely open-source tooling.
In principle, that runs completely counter to the core ideas web3 AI Agents pursue: distributed servers, distributed collaboration, distributed incentives. How could a centralized "Italian cannon" blow up a decentralized bunker?
The reason is that the first phase of web3 AI Agents was simply too "web2". Many teams came from web2 backgrounds and lacked a full understanding of web3-native needs. Take the ElizaOS framework: it was essentially a wrapper to help developers deploy AI Agent applications quickly, integrating platforms like Twitter and Discord and API interfaces like OpenAI, Claude, and DeepSeek, plus some general-purpose Memory and Character encapsulation so developers could build and ship AI Agent applications fast. But be honest: how is that service framework different from web2 open-source tooling? Where is the differentiated advantage?
Uh, is the advantage a set of Tokenomics incentives? Using a framework that web2 can fully replace to incentivize a crowd of AI Agents that exist mainly to issue new tokens? Scary. Follow that logic and you roughly see why Manus + MCP can hit web3 AI Agents: web3 AI Agent frameworks and services only solved the same quick-development-and-deployment needs as web2 AI Agents, yet could not keep pace with web2's speed of innovation on technical services, standards, or differentiated advantages, so the market/capital revalued and repriced the previous batch of web3 AI Agents.
3) By this point the crux of the problem should be obvious, but how to break the deadlock? There is only one way: focus on building web3-native solutions, because the operation and incentive architecture of distributed systems is web3's absolute differentiated advantage.
Take distributed cloud compute, data, and algorithm service platforms as an example. On the surface, compute and data aggregated from idle resources cannot meet short-term engineering-innovation needs; and while big AI LLMs are racing each other for concentrated compute in a performance arms race, a service model whose pitch is "idle resources, low cost" will naturally be dismissed by web2 developers and VC circles.
But once web2 AI Agents move past the performance-innovation stage, they will inevitably pursue directions like vertical application-scenario expansion, segmentation, and fine-tuned model optimization, and that is when the advantages of web3 AI resource services will really show. In fact, once web2 AI has climbed to giant status through resource monopoly, it is hard for it to step back and adopt a "surround the cities from the countryside" strategy, conquering niche scenarios one by one. That will be the moment for surplus web2 AI developers plus web3 AI resources to join forces.
So the opportunity space for web3 AI Agents is also very clear: before web3 AI resource platforms see an overflow of web2 developers and customers, explore and land a set of feasible solutions and paths on top of web3's distributed architecture. Beyond "web2-style quick deployment + multi-agent collaboration framework + Tokenomics coin-issuance narrative", web3 AI Agents have many innovative directions worth exploring:
For example, a distributed consensus and collaboration framework: given that large LLMs compute off-chain and store state on-chain, a number of adapted components are needed (a sketch of how these pieces might look follows the list below).
1. A decentralized DID identity system, giving each Agent a verifiable on-chain identity, much like the unique address an execution VM generates for a smart contract, mainly so its subsequent state can be continuously tracked and recorded;
2. A decentralized oracle system, responsible for the trusted acquisition and verification of off-chain data. Unlike earlier oracles, an oracle adapted to AI Agents will likely need a combined architecture of a data acquisition layer, a decision consensus layer, and an execution feedback layer, so that the data an Agent needs and its off-chain computations and decisions can reach the chain in near real time;
3. A decentralized storage (DA) system. Because an Agent's knowledge-base state is uncertain at runtime and its reasoning process is fairly ephemeral, the key state libraries and inference paths behind the LLM need to be recorded in a distributed storage system, with a cost-controlled data-proof mechanism to guarantee data availability for public-chain verification;
4. A zero-knowledge-proof (ZKP) privacy computing layer, able to link up with privacy computing solutions including TEE and FHE, enabling real-time private computation plus data-proof verification so Agents can draw on broader vertical data sources (medical, financial), and more specialized, customized service Agents can be built on top;
5. A cross-chain interoperability protocol, somewhat similar to what the open-source MCP protocol defines, except that this interoperability solution needs relay and communication-scheduling mechanisms adapted to Agent operation, delivery, and verification, capable of handling asset transfer and state synchronization across chains, including complex Agent context and state such as Prompt, knowledge base, and Memory;
…
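To make the first three components more tangible, here is an illustrative TypeScript sketch of the interfaces such a stack might expose. Every name, field, and layer split here is my own assumption for illustration, not any existing chain's or project's API:

```typescript
// Illustrative interfaces only; all names and shapes are assumptions.

// 1. DID identity: an Agent gets a verifiable on-chain identity, analogous
//    to a contract address, so its state history can be tracked.
interface AgentDID {
  did: string;                 // e.g. "did:agent:0xabc..." (hypothetical scheme)
  controller: string;          // on-chain account that owns/updates the Agent
  verify(signature: string, payload: Uint8Array): Promise<boolean>;
}

// 2. Agent-adapted oracle, split into the three layers from the list above.
interface AgentOracle {
  // data acquisition layer: pull raw off-chain data from multiple sources
  acquire(query: string): Promise<{ source: string; data: unknown }[]>;
  // decision consensus layer: nodes agree on one canonical result
  reachConsensus(samples: unknown[]): Promise<{ value: unknown; quorum: number }>;
  // execution feedback layer: report the Agent's resulting action back on-chain
  reportExecution(agent: AgentDID, actionHash: string): Promise<string>; // tx hash
}

// 3. DA storage: persist key state and inference paths with a cost-bounded proof.
interface AgentDAStore {
  putState(agent: AgentDID, stateRoot: string, blob: Uint8Array): Promise<string>; // blob id
  putInferencePath(agent: AgentDID, steps: string[]): Promise<string>;
  proveAvailability(blobId: string): Promise<{ proof: string; cost: bigint }>;
}
```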
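And a matching, equally hypothetical sketch for components 4 and 5, mainly to show that an Agent-aware cross-chain message must carry working context (Prompt, a Memory commitment, knowledge-base references), not just assets:

```typescript
// 4. Privacy computing layer: route a job to a ZKP / TEE / FHE back-end and
//    return both the result and a verifiable proof of correct execution.
type PrivacyBackend = "zkp" | "tee" | "fhe";

interface PrivateCompute {
  run(
    backend: PrivacyBackend,
    program: Uint8Array,          // e.g. a circuit or an enclave image
    privateInput: Uint8Array      // medical/financial data, never revealed
  ): Promise<{ output: Uint8Array; proof: string }>;
  verify(proof: string, output: Uint8Array): Promise<boolean>;
}

// 5. Cross-chain envelope: unlike plain asset bridging, an Agent message has
//    to carry its working context so state can resume on the target chain.
interface AgentCrossChainMessage {
  sourceChain: string;
  targetChain: string;
  agent: string;                  // AgentDID.did
  prompt: string;                 // current task prompt
  memoryRoot: string;             // commitment to the Agent's Memory store
  knowledgeRefs: string[];        // content hashes into the DA layer
  assets?: { token: string; amount: bigint }[];
}

interface AgentRelay {
  deliver(msg: AgentCrossChainMessage): Promise<{ receipt: string }>;
  verifyDelivery(receipt: string): Promise<boolean>;
}
```

The design point in both sketches is the same one the list makes: each layer pairs a result with something a chain can verify (an identity, a quorum, a proof, a receipt).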
In my opinion, the real focus of web3 AI Agents should be on making the AI Agent's "complex workflow" and blockchain's "trust verification flow" fit together as closely as possible. These incremental solutions could come from upgrades and iterations of existing old-narrative projects, or from projects newly forged on the freshly formed AI Agent narrative track.
That is the direction web3 AI Agents should strive to build toward; it is the fundamentals of the innovation ecosystem under the AI + Crypto macro narrative. Without relevant innovation and differentiated competitive barriers, every tremor in the web2 AI track may turn web3 AI upside down.