The Age of Intelligence: The Confrontation and Symbiosis between AI and Crypto


Author: Zeke, YBB Capital Researcher

1. Loving the new and tiring of the old: it all begins with attention

Over the past year, with the application-layer narrative stalling and unable to keep pace with the explosive build-out of infrastructure, the crypto space has gradually turned into a contest for attention. From Silly Dragon to Goat, from Pump.fun to Clanker, the battle for attention has kept escalating. It started with the most clichéd form of eyeball monetization, quickly evolved into a platform model that unifies the demanders and suppliers of attention, and then silicon-based organisms became the new content providers. Among all the strange carriers of meme coins, there is finally one that lets retail investors and VCs reach a consensus: the AI Agent.

Attention is ultimately a zero-sum game, but speculation can indeed make things grow wildly. In our earlier article on UNI, we reviewed the start of blockchain's last golden age: DeFi's rapid growth began with the liquidity-mining era kicked off by Compound Finance. Hopping in and out of mining pools offering APYs in the thousands or even tens of thousands of percent was the most primitive form of on-chain gaming of that period. Although the ending was that pool after pool collapsed and left participants badly burned, the frenzied influx of yield farmers did leave unprecedented liquidity on the blockchain. DeFi finally broke away from pure speculation and matured into a real track, satisfying users' financial needs in payments, trading, arbitrage, staking, and more. AI Agents are now going through the same barbaric stage. What we are exploring is how Crypto can better integrate AI and ultimately push the application layer to new heights.

2. How does an agent become autonomous?

In the previous article, we briefly introduced the origin of the AI Meme, Truth Terminal, and the outlook for AI Agents. This article focuses on the AI Agent itself.

Let's start with the definition of an AI Agent. "Agent" is an old but loosely defined term in the AI field. Its main emphasis is autonomy: any AI that can perceive its environment and react to it can be called an agent. In today's usage, AI Agent is closer to "intelligent agent", that is, a system built around a large model to imitate human decision-making. In academia, this system is regarded as the most promising path toward AGI (artificial general intelligence).

In the early GPT versions, we could clearly sense that the large model was very human-like, yet when answering complex questions it could only give plausible-sounding answers. The fundamental reason is that the models of that time were based on probability rather than causality, and they lacked the human abilities to use tools, memory, planning, and so on; the AI Agent can make up for these deficiencies. To sum it up with a formula: AI Agent (intelligent agent) = LLM (large model) + Planning + Memory + Tools.
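To make this formula concrete, below is a minimal sketch in Python of how the four components might fit together. The llm stub and the search tool are hypothetical placeholders for illustration, not any real API.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Stand-in for the large model: a real agent would call an LLM API here.
def llm(prompt: str) -> str:
    return f"(model response to: {prompt!r})"

@dataclass
class Agent:
    """Illustrates the formula: Agent = LLM + Planning + Memory + Tools."""
    memory: List[str] = field(default_factory=list)                        # Memory
    tools: Dict[str, Callable[[str], str]] = field(default_factory=dict)   # Tools

    def plan(self, goal: str) -> List[str]:
        # Planning: ask the model to break the goal into steps.
        steps = llm(f"Break this goal into steps: {goal}")
        return [steps]  # a real agent would parse the model's output into a step list

    def act(self, goal: str) -> str:
        for step in self.plan(goal):
            tool = self.tools.get("search", llm)   # pick a tool, or fall back to the LLM
            self.memory.append(tool(step))         # store the observation in memory
        return self.memory[-1]

agent = Agent(tools={"search": lambda q: f"(search results for {q!r})"})
print(agent.act("summarize today's on-chain activity"))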

A large model driven by prompts is more like a static person: it only comes to life when we feed it input. The goal of the intelligent agent is to be a more real person. The agents currently in the crypto circle are mainly fine-tuned models based on Meta's open-source Llama 70B or 405B versions (the two differ in parameter count). They can remember things and use tools via API access, but in other respects (including interaction and collaboration with other agents) they may still need human help or input. That is why the main agents in the circle today still live on social networks in the form of KOLs. To make an intelligent agent more human-like, it needs planning and action capabilities, and within planning, the chain of thought is particularly critical.

3. Chain of Thought (CoT)

The concept of Chain of Thought (CoT) first appeared in Google's 2022 paper "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models". The paper points out that a model's reasoning ability can be enhanced by having it generate a series of intermediate reasoning steps, helping it better understand and solve complex problems.

A typical CoT prompt contains three parts: a clear task description with instructions, the theoretical basis or principles that support solving the task, and a concrete demonstration of the solution. This structured format helps the model understand the task requirements and approach the answer step by step through logical reasoning, improving both the efficiency and the accuracy of problem solving. CoT is particularly suited to tasks that require in-depth analysis and multi-step reasoning, such as solving math problems or writing project reports. For simple tasks CoT may not bring obvious advantages, but for complex tasks it can significantly improve model performance: the step-by-step solving strategy reduces error rates and raises the quality of task completion.
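As an illustration, here is a hedged sketch of what such a three-part prompt might look like; the wording and the example problem are invented for demonstration, not taken from the paper.

# An illustrative CoT prompt following the three-part structure described above:
# task description, supporting rationale, and a worked example that reasons step by step.
cot_prompt = """\
Task: Solve the arithmetic word problem and show your reasoning.

Rationale: Break the problem into intermediate steps and resolve them one at a time
before stating the final answer.

Example:
Q: A pool holds 12 tokens. 5 are staked and 3 more unstaked tokens are deposited. How many are unstaked?
A: Start with 12 tokens. 5 are staked, so 12 - 5 = 7 are unstaked.
   3 more unstaked tokens arrive, so 7 + 3 = 10. The answer is 10.

Q: {question}
A: Let's think step by step."""

print(cot_prompt.format(question="A wallet holds 8 NFTs, sells 3, then mints 4. How many remain?"))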

When building an AI Agent, CoT plays a key role. The agent needs to understand the information it receives and make reasonable decisions based on it. CoT provides an orderly way of thinking that helps the agent process and analyze input information effectively and turn the analysis into concrete action guidelines. This not only enhances the reliability and efficiency of the agent's decisions, it also makes the decision-making process more transparent, so the agent's behavior becomes more predictable and traceable. By decomposing tasks into many small steps, CoT helps the agent weigh each decision point carefully and reduces wrong decisions caused by information overload. It also makes it easier for users to understand the basis of the agent's decisions. And while interacting with the environment, CoT lets the agent keep absorbing new information and adjusting its behavioral strategy.

As an effective strategy, CoT not only improves the reasoning capabilities of large language models, it also plays an important role in building more intelligent and reliable AI Agents. By leveraging CoT, researchers and developers can create intelligent systems that adapt better to complex environments and enjoy a higher degree of autonomy. CoT has demonstrated its unique advantages in practice, especially for complex tasks: decomposing a task into a series of small steps improves not only the accuracy of the solution but also the interpretability and controllability of the model. This step-by-step approach greatly reduces wrong decisions caused by too much or overly complex information, and it improves the traceability and verifiability of the entire solution.

The core function of CoT is to combine planning, action, and observation, bridging the gap between reasoning and acting. This mode of thinking lets the AI Agent prepare effective countermeasures for abnormal situations it may encounter, accumulate new information while interacting with the external environment, verify its earlier predictions, and obtain a new basis for reasoning. CoT works like an engine of accuracy and stability, helping AI Agents stay efficient in complex environments.
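A rough sketch of that plan-act-observe cycle follows, with a stubbed model decision function and a toy price_feed tool standing in for the real components; all names here are hypothetical.

from typing import Callable, Dict, List

def llm_decide(history: List[str], goal: str) -> str:
    # Stub for the model's reasoning step: a real agent would have the model reason
    # over the goal and the observation history, then pick a tool call or finish.
    return "finish" if history else "price_feed: ETH"

def run_agent(goal: str, tools: Dict[str, Callable[[str], str]], max_steps: int = 5) -> List[str]:
    """Plan -> act -> observe loop: each observation feeds the next round of reasoning."""
    history: List[str] = []
    for _ in range(max_steps):
        decision = llm_decide(history, goal)          # reasoning (CoT) step
        if decision == "finish":
            break
        tool_name, _, arg = decision.partition(": ")  # action: choose a tool and argument
        observation = tools[tool_name](arg)           # observe the environment
        history.append(observation)                   # new evidence for the next round
    return history

tools = {"price_feed": lambda asset: f"(mock price for {asset})"}
print(run_agent("report the ETH price", tools))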

4. Seemingly correct pseudo-needs

Which parts of the AI technology stack should Crypto integrate with? In last year's article I argued that decentralizing computing power and data is the key step in helping small businesses and individual developers cut costs. In the Crypto x AI segmentation that Coinbase compiled this year, we see a more detailed division:

(1) Computing layer: networks focused on providing graphics processing unit (GPU) resources to AI developers;

(2) Data layer: networks supporting decentralized access, orchestration, and verification of AI data pipelines;

(3) Middleware layer: platforms or networks supporting the development, deployment, and hosting of AI models or agents;

(4) Application layer: user-facing products that employ on-chain AI mechanisms, whether B2B or B2C.

Each of these four layers carries a grand vision, and in summary their goal is to resist the next wave of Silicon Valley giants dominating the Internet era. As I said last year, must we really accept the Silicon Valley giants' exclusive control over computing power and data? Under their monopoly, the closed-source large model is a black box. Science is what humans trust most today; in the future, every answer the large model gives will be taken as truth by a great many people, but how is that truth to be verified? According to the Silicon Valley giants' vision, intelligent agents will eventually hold permissions beyond imagination, such as the right to pay from your wallet and the right to operate your terminal. How do we make sure they harbor no ill intent?

Decentralization is the only answer, but sometimes we need to ask soberly: how many buyers are there for these grand visions? In the past, we could use tokens to paper over the errors of idealization without worrying about a commercial closed loop. Today the situation is far more serious, and Crypto x AI has to be designed around reality. For example, at the computing layer, which suffers from performance loss and instability, how do we balance the two ends of supply and demand so that it can match the competitiveness of centralized clouds? How many real users will a data-layer project have? How do we verify that the data it provides is genuinely valid, and what kind of customer needs that data? The same questions apply to the other two layers. In this era, we do not need so many seemingly correct pseudo-needs.

5. Memes have run SocialFi into existence

As I said in the first section, memes have, at breakneck speed, run out a SocialFi form that truly fits Web3. Friend.tech was the first Dapp of this round of social applications, but it was defeated by its over-eager token design. Pump.fun has verified the feasibility of a pure platform with no token and no extra rules: the demanders and suppliers of attention are unified on it. You can post memes, live stream, launch coins, leave comments, and trade on the platform, all for free; Pump.fun only charges a service fee. This is basically the same attention-economy model as social media like YouTube and Instagram today, only the party being charged is different, and Pump.fun's gameplay is more Web3.

Base's Clanker is the one that brings it all together: benefiting from an ecosystem that Base integrates with its own hands, and with Base's own social Dapp as a complement, it forms a complete internal closed loop. The intelligent meme is the 2.0 form of the Meme Coin. People are always chasing the new, and Pump.fun now stands at the forefront. Judging from the trend, it is only a matter of time before the whims of silicon-based organisms replace the vulgar memes of carbon-based ones.

I have mentioned Base countless times, though the point is different each time. On the timeline, Base has never been a first mover, yet it is always a winner.

6. What else can an intelligent agent be?

From a pragmatic point of view, agents will not be decentralized for quite a while. Judging from how agents are built in the traditional AI field, this is not a problem that decentralization and open source can solve with a simple reasoning process: they need access to all kinds of APIs to reach Web2 content, their running costs are high, and the design of the chain of thought and the collaboration of multiple agents usually still rely on a human as the intermediary. We will go through a long transition period until a suitable form of integration emerges, perhaps something like UNI. But as in the previous article, I still believe intelligent agents will have a great impact on our industry, much as CEXs exist in our industry: not "correct", but important.

The article "AI Agent Overview" issued by Stanford & Microsoft last month extensively describes the application of intelligent agents in the medical industry, intelligent machines, and the virtual world. In the appendix of this article, there are many test cases in which GPT-4V has been used as an agent to participate in the development of top 3A games.

There is no need to rush its combination with decentralization. I hope the first piece of the puzzle the agent fills in is bottom-up capability and speed: we have so many narrative ruins and a blank metaverse waiting to be filled. At the appropriate stage, we can then consider how to make it the next UNI.
