Source: Meteorite Labs
Over the past year, the AI narrative has developed rapidly in the Crypto market. Leading VCs such as a16z, Sequoia, Lightspeed, and Polychain have committed tens of millions of dollars with a stroke of the pen, and many high-quality teams with research and top-university backgrounds have entered Web3 to build decentralized AI. Over the next 12 months, we will see these projects come to fruition.
In October this year, OpenAI raised another $6.6 billion, pushing the arms race in the AI track to an unprecedented height. Retail investors have few ways to profit beyond direct exposure to Nvidia and hardware, and this enthusiasm is bound to keep spilling over into Crypto, especially given the recent wave of meme-coin speculation driven by AI Memes. It is foreseeable that Crypto x AI, whether existing listed tokens or new star projects, will retain strong momentum.
With Hyperbolic, a leading decentralized AI project, recently receiving a second investment from Polychain and Lightspeed, we take the six projects that have recently raised large rounds from top institutions as a starting point to map out the development of Crypto x AI infrastructure, and to look at how decentralized technology can protect humanity in an AI-driven future.
Hyperbolic: Recently announced a US$12 million Series A co-led by Variant and Polychain, bringing total financing to more than US$20 million, with participation from well-known VCs such as Bankless Ventures, Chapter One, Lightspeed Faction, IOSG, Blockchain Builders Fund, Alumni Ventures, and Samsung Next.
PIN AI: Completed a $10 million pre-seed round of financing, with investments from well-known VCs such as a16z CSX, Hack VC, and Blockchain Builders Fund (Stanford Blockchain Accelerator).
Vana: Completed US$18 million in Series A financing and US$5 million in strategic financing, with investment from well-known VCs such as Paradigm, Polychain, and Coinbase.
Sahara: Completed US$43 million in Series A financing, with investments from well-known VCs such as Binance Labs, Pantera Capital, and Polychain.
Aethir: Completed a $9 million Pre-A round in 2023 at a $150 million valuation, and completed node sales of approximately US$120 million in 2024.
IO.NET: Completed US$30 million in Series A financing, with investments from well-known VCs such as Hack VC, Delphi Digital, and Foresight Ventures.
The three elements of AI: data, computing power, and algorithms
Marx told us in "Das Kapital" that the means of production, productivity, and production relations are the key elements of social production. By analogy, the artificial intelligence world also has three key elements.
In the AI era, computing power, data and algorithms are key.
In AI, data is the means of production. For example, every day you type and chat on your phone, take photos, and post them to your social feed (such as WeChat Moments); these texts and pictures are all data, the "ingredients" of AI and the basis on which AI operates.
This data ranges from structured numerical information to unstructured images, audio, video and text. Without data, AI algorithms cannot learn and optimize. The quality, quantity, coverage, and diversity of data directly affect the performance of the AI model and determine whether it can complete specific tasks efficiently.
In AI, computing power is productivity. Computing power is the underlying computing resource required to run AI algorithms: the stronger it is, the faster data can be processed. The strength of computing power directly determines the efficiency and capability of an AI system.
Powerful computing power can not only shorten the training time of the model, but also support more complex model architecture, thus improving the intelligence level of AI. Large language models, like OpenAI’s ChatGPT, take months to train on powerful computing clusters.
In AI, algorithms are production relations. Algorithms are the core of AI. Their design determines how data and computing power work together, and is the key to transforming data into intelligent decisions. With the support of powerful computing power, algorithms can better learn patterns in data and apply them to practical problems.
In this way, data is equivalent to the fuel of AI, computing power is the engine of AI, and algorithms are the soul of AI. AI = data + computing power + algorithm. Any startup that wants to stand out in the AI track must have all three elements, or show a unique leading edge in one of them.
As AI is developing towards multi-modality (models are based on multiple forms of information and can process text, images, audio, etc. simultaneously), the demand for computing power and data will only increase exponentially.
In the era of scarce computing power, Crypto empowers AI
The emergence of ChatGPT not only set off a revolution in artificial intelligence, but also inadvertently pushed computing power and computing hardware to the forefront of technology searches.
After the "war of a thousand models" in 2023, the market's understanding of large AI models has deepened in 2024, and global competition around large models has split into two paths: "capability improvement" and "scenario development".
On the capability-improvement path, the market's biggest expectation is GPT-5, rumored to be released by OpenAI this year and anticipated to push large models into a truly multi-modal stage.
In terms of large model scenario development, AI giants are promoting the faster integration of large models into industry scenarios to generate application value. For example, attempts in AI Agent, AI search and other fields are constantly deepening the use of large models to improve existing user experience.
Behind both paths lies an undoubtedly higher demand for computing power. Improving model capability relies mainly on training, which requires enormous amounts of high-performance computing power in a short time; deploying models in scenarios relies mainly on inference, which has lower raw performance requirements but places more emphasis on stability and low latency.
As OpenAI estimated in 2018, since 2012, the computing power requirements for training large models have doubled every 3.5 months, and the computing power required has increased by as much as 10 times every year. At the same time, as large models and applications are increasingly deployed in actual business scenarios of enterprises, the demand for inference computing power is also increasing.
The problem is that global demand for high-performance GPUs is growing rapidly while supply fails to keep up. Taking Nvidia's H100 as an example, it experienced a severe supply shortage in 2023, with a supply gap of more than 430,000 units. The upcoming B100, which offers roughly 2.5 times the performance at only about 25% higher cost, is likely to be in short supply again. This supply-demand imbalance will push the cost of computing power up again, making it hard for many small and medium-sized enterprises to afford high computing costs and limiting their potential in AI.
Large technology companies such as OpenAI, Google and Meta have stronger resource acquisition capabilities and have the money and resources to build their own computing infrastructure. But what about AI startups, let alone ones that haven’t yet raised funding?
Indeed, buying second-hand GPUs on eBay, Amazon and other platforms is also a feasible method. While cost is reduced, there may be performance issues and long-term repair costs. In this era of GPU scarcity, building infrastructure may never be the optimal solution for startups.
Even where GPU cloud providers offer on-demand rental, the price remains a big challenge. For example, an Nvidia A100 rents for about US$80 per day; if a startup needs 50 cards running 25 days a month, the compute bill alone comes to 80 x 50 x 25 = US$100,000 per month.
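As a quick sanity check, here is that back-of-the-envelope arithmetic as a minimal Python sketch; the per-day rate and utilization are the article's assumed figures, not a quoted price list.

```python
# Back-of-the-envelope GPU rental cost, using the article's assumed figures.
DAILY_RATE_USD = 80       # assumed on-demand price of one Nvidia A100 per day
NUM_GPUS = 50             # cards the hypothetical startup needs
DAYS_PER_MONTH = 25       # days the cluster actually runs each month

monthly_cost = DAILY_RATE_USD * NUM_GPUS * DAYS_PER_MONTH
print(f"Compute bill: ${monthly_cost:,} per month")   # Compute bill: $100,000 per month
```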
This opens the door for DePIN-based decentralized computing networks. As IO.NET, Aethir, and Hyperbolic have done, they shift the cost of computing infrastructure away from AI startups and onto the network itself, while letting anyone in the world connect an idle GPU at home, significantly reducing computing costs.
Aethir: a global GPU-sharing network making computing power inclusive
Aethir completed a US$9 million Pre-A round at a US$150 million valuation in September 2023, and from March to May this year completed Checker Node sales of approximately US$120 million, US$60 million of which was raised in just 30 minutes, a clear sign of the market's recognition of and expectations for the project.
The core of Aethir is a decentralized GPU network in which anyone can contribute idle GPU resources and earn income. It is like turning everyone's computer into a small supercomputer whose computing power is shared. This greatly improves GPU utilization and reduces wasted resources, and it lets companies or individuals with heavy computing needs obtain resources at lower cost.
Aethir has created a decentralized DePIN network that acts like a resource pool, incentivizing data centers, game studios, technology companies and gamers from around the world to connect idle GPUs to it. These GPU providers are free to take their GPUs on and off the network and therefore have higher utilization than if they were idle. This enables Aethir to provide consumer-level, professional-level and data center-level GPU resources to computing power demanders at a price that is more than 80% lower than that of Web2 cloud providers.
Aethir’s DePIN architecture ensures the quality and stability of these scattered computing power. The three core parts are:
Container is the computing unit of Aethir, which acts as a cloud server and is responsible for executing and rendering applications. Each task is encapsulated in an independent Container as a relatively isolated environment to run customer tasks, avoiding mutual interference between tasks.
Indexer matches and schedules available computing resources against task requirements in real time. A dynamic resource-adjustment mechanism allocates resources across tasks according to network-wide load to achieve the best overall performance.
Checker is responsible for real-time monitoring and evaluation of Container performance. It tracks the status of the entire network and responds promptly to potential security issues: when abnormal behavior such as a network attack is detected, it issues warnings and initiates protective measures, and when performance bottlenecks occur it raises alerts so the problem can be resolved quickly, ensuring service quality and security.
Container, Indexer, and Checker work together to give customers freely customizable computing configurations and a secure, stable, and relatively low-priced cloud-service experience; a simplified sketch of this division of labor follows. Aethir is a solid commercial-grade solution for areas such as AI and gaming.
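The sketch below is a hypothetical, heavily simplified Python illustration of how these three roles could interact: an Indexer matches a task to a suitable Container, and a Checker randomly audits results. The class names mirror the terminology above, but the fields and logic are illustrative assumptions, not Aethir's actual implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class Container:
    """Isolated execution unit contributed by a GPU provider."""
    id: str
    gpu_tier: str        # e.g. "consumer", "professional", "datacenter"
    free_vram_gb: int
    online: bool = True

class Indexer:
    """Matches incoming tasks to available Containers in real time."""
    def __init__(self, containers: list[Container]):
        self.containers = containers

    def schedule(self, tier: str, vram_gb: int) -> Container | None:
        candidates = [c for c in self.containers
                      if c.online and c.gpu_tier == tier and c.free_vram_gb >= vram_gb]
        # Naive load balancing: pick the container with the most headroom.
        return max(candidates, key=lambda c: c.free_vram_gb, default=None)

class Checker:
    """Spot-checks a fraction of results and quarantines misbehaving nodes."""
    def __init__(self, sample_rate: float = 0.1):
        self.sample_rate = sample_rate

    def audit(self, container: Container, result_ok: bool) -> str:
        if random.random() > self.sample_rate:
            return "skipped"
        if not result_ok:
            container.online = False      # take the faulty container off the network
            return "flagged"
        return "passed"

indexer = Indexer([Container("c1", "datacenter", 80), Container("c2", "consumer", 12)])
chosen = indexer.schedule(tier="datacenter", vram_gb=40)
print(chosen.id if chosen else "no capacity")          # c1
```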
In general, Aethir has reshaped the allocation and use of GPU resources through DePIN, making computing power more popular and economical. It has achieved some good results in the fields of AI and gaming, and is constantly expanding its partners and business lines. The potential for future development is unlimited.
IO.NET: a distributed supercomputing network that breaks the computing power bottleneck
IO.NET completed a $30 million Series A round of financing in March this year, with investments from well-known VCs such as Hack VC, Delphi Digital, and Foresight Ventures.
Similar to Aethir, IO.NET builds an enterprise-grade decentralized computing network that aggregates idle computing resources (GPUs, CPUs) around the world to provide AI startups with computing services at lower prices, with easier access and more flexible configuration.
Different from Aethir, IO.NET uses the Ray framework to build its IO-SDK, aggregating thousands of GPUs into clusters that behave as a single machine for machine-learning workloads (Ray is the same framework OpenAI used to train GPT-3). When training large models on a single device, CPU/GPU memory limits and sequential processing workflows are significant bottlenecks; Ray provides orchestration and batch processing to parallelize computing tasks.
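As a minimal illustration of the idea, and not IO.NET's actual IO-SDK code, the open-source Ray framework lets an ordinary Python function be fanned out across a cluster as parallel tasks:

```python
import ray

ray.init()  # locally this starts a mini-cluster; in production it would connect to a head node

@ray.remote           # in a real GPU cluster you would also declare e.g. num_gpus=1 here
def preprocess_shard(shard_id: int) -> int:
    # Stand-in for real work such as tokenizing or transforming one data shard.
    return shard_id * 2

# Launch eight tasks in parallel and gather the results.
futures = [preprocess_shard.remote(i) for i in range(8)]
print(ray.get(futures))   # [0, 2, 4, 6, 8, 10, 12, 14]
```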
To this end, IO.NET adopts a multi-layer architecture (a simplified request-flow sketch follows this list):
User interface layer: provides users with a visual front end, including the public website, the customer area, and the GPU supplier area, aiming to deliver an intuitive and friendly user experience.
Security layer: Ensure the integrity and security of the system, integrating network protection, user authentication and activity logging mechanisms.
API layer: As a communication hub for websites, suppliers and internal management, it facilitates data exchange and execution of various operations.
Back-end layer: It forms the core of the system and is responsible for operating tasks such as cluster/GPU management, customer interaction and automatic expansion.
Database layer: responsible for data storage and management; the main store handles structured data, while caching is used for temporary data processing.
Task layer: manages asynchronous communication and task execution to ensure the efficiency of data processing and circulation.
Infrastructure layer: Forms the basis of the system, including GPU resource pools, orchestration tools, and execution/ML tasks, equipped with powerful monitoring solutions.
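To make the layering concrete, here is a hypothetical, drastically simplified sketch of how a GPU-cluster request might pass through those layers; every function here is a stand-in and does not reflect IO.NET's real codebase.

```python
def authenticate(user: str) -> bool:                 # security layer
    return bool(user)

def submit_job(user: str, gpus_needed: int) -> dict: # API layer
    return {"user": user, "gpus_needed": gpus_needed, "status": "queued"}

def allocate_gpus(job: dict) -> list[str]:           # back-end layer: cluster/GPU management
    return [f"gpu-{i}" for i in range(job["gpus_needed"])]

def persist(job: dict) -> None:                      # database layer
    pass  # write job metadata to the main store

def run(job: dict, gpus: list[str]) -> dict:         # task + infrastructure layers
    job.update(status="running", gpus=gpus)
    return job

def handle_cluster_request(user: str, gpus_needed: int) -> dict:
    if not authenticate(user):
        raise PermissionError("authentication failed")
    job = submit_job(user, gpus_needed)
    persist(job)
    return run(job, allocate_gpus(job))

print(handle_cluster_request("alice", gpus_needed=4))
```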
Technically, to address the difficulties of distributed computing, IO.NET built its core technology IO-SDK on the layered architecture above, and adopted reverse tunneling and a mesh VPN architecture to solve secure-connection and data-privacy problems. The project is popular in Web3 and has been called the next Filecoin, with a bright future.
In general, IO.NET's core mission is to build the world's largest DePIN infrastructure, pooling idle GPU resources around the world and making them available to AI and machine-learning workloads that require large amounts of computing power.
Hyperbolic: building an "AI rainforest", a thriving and mutually supportive distributed AI infrastructure ecosystem
Hyperbolic has once again announced a funding round: a Series A totaling more than US$12 million, co-led by Variant and Polychain Capital, bringing total financing to over US$20 million. Bankless Ventures, Chapter One, Lightspeed Faction, IOSG, Blockchain Builders Fund, Alumni Ventures, Samsung Next, and other well-known VCs participated. Notably, Silicon Valley investors Polychain and Lightspeed Faction increased their investment for the second time after the seed round, which speaks to Hyperbolic's leading position in the Web3 AI track.
Hyperbolic's core mission is to make AI accessible to everyone and affordable for developers and creators. Hyperbolic aims to build an "AI rainforest" where developers can find the resources they need to innovate, collaborate, and grow within its ecosystem. Like a natural rainforest, the ecosystem is interconnected, vibrant, and renewable, allowing creators to explore without limit.
In the view of the two co-founders Jasper and Yuchen, although AI models can be open source, it is not enough without open computing resources. Currently, many large data centers control GPU resources, which discourages many people who want to use AI. Hyperbolic aims to break this situation. They build DePIN computing infrastructure by integrating idle computing resources around the world, making it easy for everyone to use AI.
Therefore, Hyperbolic introduces the concept of "open AI cloud". Everything from personal computers to large data centers can be connected to Hyperbolic to provide computing power. On this basis, Hyperbolic creates a verifiable, privacy-enhancing AI layer that allows developers to build AI applications with reasoning capabilities, while the required computing power comes directly from the AI cloud.
Like Aethir and IO.NET, Hyperbolic's AI cloud has its own GPU cluster model, called the "Solar System Cluster". Just as the solar system contains independent planets such as Mercury and Mars, Hyperbolic's Solar System Cluster manages, for example, a Mercury cluster, a Mars cluster, and a Jupiter cluster. These GPU clusters serve a wide range of uses and come in different sizes, operating independently of one another while being dispatched by the Solar System layer.
This model gives the GPU clusters two characteristics that make them more flexible and efficient than Aethir's and IO.NET's (see the sketch after these two points):
State balancing: a GPU cluster automatically expands or shrinks according to demand.
Fault recovery: if a cluster is interrupted, the Solar System layer automatically detects and repairs it.
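A toy Python sketch of those two behaviors, demand-driven scaling and automatic replacement of failed clusters, follows; the names and the scaling rule are illustrative assumptions, not Hyperbolic's actual scheduler.

```python
from dataclasses import dataclass, field

@dataclass
class GPUCluster:
    name: str
    gpus: int
    healthy: bool = True

@dataclass
class SolarSystemScheduler:
    clusters: list = field(default_factory=list)

    def scale(self, cluster: GPUCluster, queued_jobs: int, jobs_per_gpu: int = 4) -> None:
        """Expand or shrink a cluster so capacity tracks demand."""
        cluster.gpus = max(1, -(-queued_jobs // jobs_per_gpu))   # ceiling division

    def heal(self) -> None:
        """Detect interrupted clusters and spin up replacements."""
        for c in list(self.clusters):
            if not c.healthy:
                self.clusters.remove(c)
                self.clusters.append(GPUCluster(f"{c.name}-replacement", c.gpus))

scheduler = SolarSystemScheduler([GPUCluster("mercury", gpus=8)])
scheduler.scale(scheduler.clusters[0], queued_jobs=50)   # grows to 13 GPUs
scheduler.clusters[0].healthy = False
scheduler.heal()                                         # mercury is replaced automatically
print(scheduler.clusters)
```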
In a large language model (LLM) performance comparison, Hyperbolic's GPU clusters achieved throughput of up to 43 tokens/s, exceeding not only the 42 tokens/s of Together AI (a team of about 60 people) but also the 27 tokens/s of Hugging Face, which has more than 300 team members.
In a comparison of image-generation speed, Hyperbolic's GPU cluster also showed technical strength that should not be underestimated. Using the same SOTA open-source image-generation model, Hyperbolic led with 17.6 images/min, exceeding Together AI's 13.6 images/min and far above IO.NET's 6.5 images/min.
These data strongly prove that Hyperbolic’s GPU cluster model is extremely efficient, and its excellent performance makes it stand out among larger competitors. Combined with the advantage of low price, this makes Hyperbolic very suitable for complex AI applications that require high computing power to support, provide near-real-time response, and ensure that AI models have higher accuracy and efficiency when handling complex tasks.
In addition, from the perspective of crypto innovation, we believe Hyperbolic's most important achievement is the development of the PoSP (Proof of Sampling) verification mechanism, which uses a decentralized approach to solve one of the toughest challenges in AI: verifying that an output really comes from the specified model, so that inference can be decentralized cost-effectively.
Building on PoSP, the Hyperbolic team developed the spML mechanism (sampling machine learning) for AI applications: transactions in the network are randomly sampled, honest participants are rewarded and dishonest ones punished, achieving lightweight verification that reduces the computational burden on the network and allows almost any AI startup to fully decentralize its AI services under a distributed verification paradigm.
The specific implementation process is as follows (a simplified sketch appears after these steps):
1) The node calculates the function and submits the result to the orchestrator in an encrypted manner.
2) It is then up to the orchestrator to decide whether to trust the result, and if so, the node is rewarded for the calculation.
3) If there is no trust, the orchestrator will randomly select a validator in the network, challenge the node, and calculate the same function. Likewise, the verifier submits the results to the orchestrator in an encrypted manner.
4) Finally, the orchestrator checks whether all results are consistent. If they are consistent, both the node and the verifier will receive rewards; if they are inconsistent, an arbitration procedure will be initiated to trace the calculation process of each result. Honest people are rewarded for their accuracy and dishonest people are punished for cheating the system.
To ensure the fairness of verification, nodes do not know whether the results they submit will be challenged, nor which validator the orchestrator will choose. The cost of cheating far outweighs the potential benefit.
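The sketch below condenses steps 1 to 4 into a single orchestrator decision to show the economic logic of random sampling, rewards, and penalties. It is an illustrative model under stated assumptions, not Hyperbolic's actual spML code, and the encryption of submissions is omitted.

```python
import random

def posp_round(node_result: int, recompute, sample_rate: float = 0.25,
               reward: int = 10, penalty: int = 100) -> dict:
    """One orchestrator decision in a Proof-of-Sampling style scheme.

    node_result: the value the compute node submitted
    recompute:   callable used by a randomly chosen validator to redo the work
    """
    balances = {"node": 0, "validator": 0}

    # Step 2: with probability (1 - sample_rate) the orchestrator simply trusts and pays.
    if random.random() > sample_rate:
        balances["node"] += reward
        return {"challenged": False, **balances}

    # Steps 3-4: challenge a validator, compare results, then reward or slash.
    validator_result = recompute()
    if validator_result == node_result:
        balances["node"] += reward
        balances["validator"] += reward
    else:
        balances["node"] -= penalty       # arbitration would pinpoint the cheater; slash here
        balances["validator"] += reward
    return {"challenged": True, **balances}

# An honest node submitting the correct answer for f(x) = x * x with x = 7.
print(posp_round(node_result=49, recompute=lambda: 7 * 7))
```

Under these toy parameters, cheating stays unprofitable as long as the expected penalty exceeds the expected gain, roughly when penalty x sample_rate is greater than reward x (1 - sample_rate), which is why a modest sampling rate is enough for lightweight verification.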
If spML proves itself in practice, it could change the rules of the game for AI applications and make trustless inference verification a reality. In addition, Hyperbolic has the industry-unique ability to run model inference in BF16 precision (competitors are still at FP8), which effectively improves inference accuracy and makes Hyperbolic's decentralized inference service extremely cost-effective.
Hyperbolic's innovation is also reflected in integrating its AI cloud's computing supply with AI applications. Demand on decentralized computing markets is itself relatively scarce; Hyperbolic attracts developers to build AI applications on its verifiable AI infrastructure, so computing power can be plugged directly and seamlessly into those applications without sacrificing performance or security. Once the ecosystem reaches a certain scale, it can become self-sufficient and achieve a balance between supply and demand.
On Hyperbolic, developers can build innovative AI applications around computing power, Web2, and Web3, for example:
GPU Exchange, a GPU trading platform built on the GPU network (orchestration layer), commercializes "GPU resources" for free trading, making computing power more cost-effective.
IAO, or tokenizing AI Agents, allows contributors to earn tokens, and the AI Agent’s revenue will be distributed to token holders.
AI-driven DAO, that is, a DAO that uses artificial intelligence to help governance decision-making and financial management.
GPU Restaking allows users to connect their GPU to Hyperbolic and then stake it to AI applications.
Overall, Hyperbolic has established an open AI ecosystem that makes AI easily accessible to everyone. Through technological innovation, Hyperbolic is making AI more popular and accessible, making the future of AI full of interoperability and compatibility, and encouraging collaborative innovation.
Data returns to users, and we join the AI wave
Today, data is a gold mine, and personal data is being seized and commercialized for free by technology giants.
Data is the food of AI. Without high-quality data, even the most advanced algorithms cannot do their job. The quantity, quality, and diversity of data directly affect the performance of AI models.
As we mentioned earlier, the industry is looking forward to GPT-5, yet it has still not been released, perhaps because the data is not yet sufficient. GPT-3 already required a data volume of 2 trillion tokens at the time of its paper, and GPT-5 is expected to need as much as 200 trillion tokens. Beyond existing text data, more multi-modal data is needed, which must be cleaned before it can be used for training.
Among today's public Internet data, high-quality samples are relatively scarce. A realistic situation is that large models perform well on general question answering in almost any field, yet perform poorly on specialized questions, and may even hallucinate, "talking nonsense with a straight face".
In order to ensure the "freshness" of data, AI giants often reach transaction agreements with owners of large data sources. For example, OpenAI signed a $60 million deal with Reddit.
Recently, some social software has begun to require users to sign agreements, requiring users to agree to authorize content for use in the training of third-party AI models. However, users have not received any rewards from this. This predatory behavior has raised public doubts about the right to use data.
Obviously, blockchain's decentralized and traceable nature is well suited to easing this dilemma of data and resource access: it gives users more control over and transparency into their data, and lets them earn rewards by participating in the training and optimization of AI models. This new way of creating value from data will greatly increase user participation and promote the prosperity of the whole ecosystem.
Web3 already has some companies targeting AI data, such as:
Data acquisition: Ocean Protocol, Vana, PIN AI, Sahara, etc.
Data processing: Public AI, Lightworks, etc.
Among them, the more interesting ones are Vana, PIN AI, and Sahara, all of which have recently raised large rounds with impressive investor line-ups. All three break out of narrow sub-fields and combine data acquisition with AI development to push AI applications toward real adoption.
Vana: users control their data; DAOs and a contribution mechanism reshape the AI data economy
Vana completed an US$18 million Series A in December 2022 and a US$5 million strategic financing in September this year, with investments from well-known VCs such as Paradigm, Polychain, and Coinbase.
The core concept of Vana is "user-owned data, realizing user-owned AI". In this era where data is king, Vana wants to break the monopoly of big companies on data and let users control their own data and benefit from their own data.
Vana is a decentralized data network focused on protecting private data, allowing users’ data to be used as flexibly as financial assets. Vana attempts to reshape the landscape of the data economy and transform users from passive data providers to active participating and mutually beneficial ecosystem builders.
To realize this vision, Vana allows users to aggregate and upload data through a data DAO, and then verify the value of the data while protecting privacy through a proof-of-contribution mechanism. This data can be used for AI training, and users are incentivized based on the quality of the data they upload.
In terms of implementation, Vana's technical architecture comprises five key components: the data liquidity layer, the data portability layer, the data ecosystem map, non-custodial data storage, and the decentralized application layer.
Data liquidity layer: the core of the Vana network, which incentivizes, aggregates, and verifies valuable data through Data Liquidity Pools (DLPs). A DLP is like a data version of a "liquidity pool": each DLP is a smart contract designed to aggregate a specific type of data asset, such as Reddit, Twitter, and other social media data (a toy sketch of the DLP reward flow follows this component list).
Data Portability Layer: This component gives portability to user data, ensuring that users can easily transfer and use their data between different applications and AI models.
Data ecosystem map: a real-time map of data flows across the entire ecosystem, ensuring transparency.
Non-custodial data storage: Vana's innovation lies in its unique approach to data management, which lets users retain complete control over their data at all times. Users' raw data is never uploaded on-chain; instead, users choose where it is stored, such as a cloud server or a personal server.
Decentralized application layer: on top of the data, Vana builds an open application ecosystem. Developers can use the data accumulated in DLPs to build various innovative applications, including AI applications, and data contributors receive dividends from those applications.
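As a toy illustration of the proof-of-contribution idea behind a DLP, the Python sketch below registers only a hash of each contribution (the raw data stays with the user) and splits a reward budget by quality score; the class, fields, and scoring are hypothetical and do not reflect Vana's actual contracts.

```python
import hashlib

class DataLiquidityPool:
    """Toy DLP: tracks contributions by fingerprint and pays out by quality score."""
    def __init__(self, reward_budget: float):
        self.reward_budget = reward_budget
        self.contributions: dict[str, tuple[str, float]] = {}   # hash -> (contributor, score)

    def contribute(self, contributor: str, raw_data: bytes, quality_score: float) -> str:
        # Only a fingerprint of the data is registered; the raw bytes never leave the user.
        data_hash = hashlib.sha256(raw_data).hexdigest()
        self.contributions[data_hash] = (contributor, quality_score)
        return data_hash

    def distribute_rewards(self) -> dict[str, float]:
        total = sum(score for _, score in self.contributions.values()) or 1.0
        payouts: dict[str, float] = {}
        for contributor, score in self.contributions.values():
            payouts[contributor] = payouts.get(contributor, 0.0) + self.reward_budget * score / total
        return payouts

pool = DataLiquidityPool(reward_budget=1_000)
pool.contribute("alice", b"reddit-export.json", quality_score=0.8)
pool.contribute("bob", b"twitter-archive.zip", quality_score=0.2)
print(pool.distribute_rewards())   # {'alice': 800.0, 'bob': 200.0}
```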
Currently, Vana has built DLPs for platforms such as ChatGPT, Reddit, LinkedIn, and Twitter, as well as DLPs focused on AI and browsing data. As more DLPs join and more innovative applications are built on the platform, Vana has the potential to become the next generation of decentralized AI and data-economy infrastructure.
This brings to mind a recent news story: to improve the diversity of its LLMs, Meta has been collecting data from UK users of Facebook and Instagram, but only offered users a way to "object" rather than asking them to opt in, which drew heavy criticism. Building a DLP on Vana for Facebook and for Instagram might be a better choice for Meta, protecting data privacy while encouraging more users to contribute data actively.
PIN AI: a decentralized AI assistant; mobile AI connects data and daily life
PIN AI completed a US$10 million pre-seed round in September this year, with participation from a16z CSX, Hack VC, Blockchain Builders Fund (Stanford Blockchain Accelerator), and other well-known VCs and angel investors.
PIN AI is an open AI network powered by the distributed data storage network of the DePIN architecture. Users can connect their devices to the network, provide personal data/user preferences, and receive token incentives. The move enables users to regain control and monetize their data. Developers can use the data to build useful AI agents.
The vision is to become a decentralized alternative to Apple Intelligence, providing users with applications that are useful in daily life and carrying out the intentions users express, such as shopping online, planning travel, and planning investment behavior.
PIN AI consists of two kinds of AI: the personal AI assistant and external AI services.
The personal AI assistant can access user data, collect user needs, and supply external AI services with the appropriate data when they need it. The bottom layer of PIN AI is the DePIN distributed data storage network, which provides rich user data for inference by external AI services without exposing users' personal privacy.
With PIN AI, users no longer need to open dozens of mobile apps to complete different tasks. When a user expresses an intention to the personal AI assistant, such as "I want to buy a new piece of clothing", "What takeout should I order", or "Find the best investment opportunity for me", the AI not only understands the user's preferences but also carries out these tasks efficiently: the most relevant applications and service providers are found through a competitive bidding process to fulfill the user's intent.
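The sketch below illustrates, in heavily simplified form, what such a bidding step could look like: the assistant broadcasts an intent and picks the offer that best matches the user's stored preferences. The providers, fields, and scoring rule are hypothetical and are not PIN AI's protocol.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    provider: str
    price_usd: float
    relevance: float   # 0-1 match against the user's stored preferences

def route_intent(intent: str, bids: list[Bid]) -> Bid:
    """Pick the most relevant offer; break ties with the lower price."""
    return max(bids, key=lambda b: (b.relevance, -b.price_usd))

bids = [
    Bid("shop-a", price_usd=29.0, relevance=0.9),
    Bid("shop-b", price_usd=25.0, relevance=0.6),
]
print(route_intent("I want to buy a new piece of clothing", bids))   # shop-a wins on relevance
```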
Most importantly, PIN AI recognizes the need to introduce decentralized services that can provide more value in a world where users are accustomed to getting services directly from centralized providers. The personal AI assistant can legitimately obtain, on the user's behalf, the high-value data generated when the user interacts with Web2 applications, and store and invoke it in a decentralized manner, so that the same data yields more value and both the data owner and the data consumer benefit.
Although the PIN AI mainnet has not yet launched, the team has given users a small-scale preview of the product prototype through Telegram to make the vision tangible.
Hi PIN Bot consists of three sections, Play, Data Connectors, and AI Agent.
Play is an AI virtual companion powered by large models such as PIN AI-1.5b, Gemma, Llama and more. This is equivalent to PIN AI’s personal AI assistant.
In Data Connectors, users can connect Google, Facebook, X, and Telegram accounts to earn points to upgrade virtual companions. In the future, it will also support users to connect Amazon, Ebay, Uber and other accounts. This is equivalent to PIN AI’s DePIN data network.
Your own data is for your own use. After connecting the data, the user can put forward a request to the virtual companion (Coming soon), and the virtual companion will provide the user's data to the AI Agent that meets the task requirements for processing.
The team has developed some AI Agent prototypes that are still in testing; these correspond to PIN AI's external AI services. For example, X Insight takes a Twitter account and analyzes how it is performing. Once Data Connectors support e-commerce, food-delivery, and other platform accounts, AI Agents such as Shopping and Order Food will also be able to autonomously process orders placed by users.
In general, through DePIN + AI, PIN AI has established an open AI network that lets developers build genuinely useful AI applications and makes users' lives more convenient and intelligent. As more developers join, PIN AI will bring more innovative applications and truly integrate AI into daily life.
Sahara: a multi-layer architecture for AI data ownership, privacy, and fair transactions
Sahara completed a US$43 million Series A in August this year, with investments from well-known VCs such as Binance Labs, Pantera Capital, and Polychain.
Sahara AI is a multi-layered blockchain platform for AI applications that focuses on establishing a fairer and more transparent AI development model, one that attributes value to data and distributes profits to users, solving pain points of traditional AI systems such as privacy, security, data access, and transparency.
In layman’s terms, Sahara AI wants to build a decentralized AI network that allows users to control their own data and receive rewards based on the quality of the data they contribute. In this way, users are no longer passive data providers, but become ecosystem builders who can participate and share benefits.
Users can upload data to its decentralized data marketplace and prove ownership of that data through a dedicated attribution mechanism. The data can then be used to train AI, and users are rewarded according to its quality.
Sahara AI includes a four-layer architecture of application, transaction, data and execution, providing a strong foundation for the development of the AI ecosystem.
Application layer: Provides tools such as secure vaults, decentralized AI data marketplaces, no-code toolkits, and Sahara ID. These tools ensure data privacy and promote fair compensation for users, and further simplify the process of creating and deploying AI applications.
Put simply, the vault uses advanced encryption to keep AI data secure; the decentralized AI data marketplace supports data collection, annotation, and transformation, promoting innovation and fair transactions; the no-code toolkit makes building AI applications easier; and Sahara ID manages user reputation and ensures trust.
Transaction layer: the Sahara blockchain uses a Proof of Stake (PoS) consensus mechanism to keep the network efficient and stable, reaching consensus even in the presence of malicious nodes. In addition, Sahara's native precompiles are specifically designed to optimize AI processing, enabling efficient computation directly in the blockchain environment and improving system performance.
Data layer: manages data on and off the chain. On-chain data carries operation and attribution records to ensure credibility and transparency; off-chain data handles large datasets, using Merkle trees and zero-knowledge proofs to guarantee data integrity and security and to prevent duplication and tampering (a minimal Merkle-root sketch follows this list).
Execution layer: Abstracts the operations of vaults, AI models, and AI applications, supporting various AI training, inference, and service paradigms.
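For the off-chain integrity guarantee mentioned in the data layer, here is a compact, generic Merkle-root example (not Sahara's implementation): a small root committed on-chain lets anyone later detect tampering with the off-chain dataset.

```python
import hashlib

def _h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash the leaves, then fold pairwise until a single root remains."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

records = [b"sample-1", b"sample-2", b"sample-3"]
root = merkle_root(records)                     # only this small root needs to go on-chain

# Anyone holding the full off-chain dataset can recompute the root and detect tampering.
assert merkle_root(records) == root
assert merkle_root([b"sample-1", b"tampered", b"sample-3"]) != root
```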
The four-layer architecture not only ensures the security and scalability of the system but also embodies Sahara AI's long-term vision of advancing a collaborative economy and AI development, aiming to revolutionize how AI technology is applied and bring users more innovative and fairer solutions.
Conclusion
With the continuous advancement of AI technology and the rise of the crypto market, we are standing on the threshold of a new era.
As large AI models and applications continue to emerge, the demand for computing power is also growing exponentially. However, the scarcity of computing power and rising costs are a huge challenge for many small and medium-sized enterprises. Fortunately, decentralized solutions, especially Hyperbolic, Aethir, and IO.NET, provide AI startups with new ways to obtain computing power, reduce costs, and increase efficiency.
At the same time, we see the importance of data to AI development. Data is not only the food of AI but also the key to putting AI applications into practice. Projects such as PIN AI and Sahara provide strong data support for AI by building incentive networks and encouraging users to participate in data collection and sharing.
Computing power and data matter beyond training: for AI applications, every stage from data ingestion to production inference requires different tools to process massive amounts of data, and the cycle repeats continuously.
In this intertwined world of AI and Crypto, we have reason to believe we will witness more innovative AI projects come to fruition. These projects will not only change how we work and live, but also push society in a more intelligent and decentralized direction. As technology advances and the market matures, we look forward to the arrival of a more open, fair, and efficient AI era.