Today, xAI's Grok app launched a real-time voice mode with ten voice options. Users can talk to the AI by voice and even by phone, further improving the interaction experience of the Grok family of large models.
Shortly before that, on the morning of February 20, xAI announced that Grok 3 would be free for X users, posting that "the world's smartest AI, Grok 3, is now available for free (until our servers crash)."
Earlier, Musk, joined by xAI chief engineer Igor, research engineer Paul, and inference engineer Tony, unveiled the latest AI model, Grok-3, in a livestream on the social platform X that drew 7 million views.
On release day, xAI said Grok-3 beat OpenAI's GPT-4o, Google's Gemini, DeepSeek's V3, and Anthropic's Claude on math, science, and coding benchmarks.
Musk said that in a very short time Grok-3 has become an order of magnitude more capable than Grok 2, calling it the "smartest AI on Earth."
This time, xAI also launched DeepSearch, a new intelligent search engine built on Grok-3, which can show its understanding of a question, walk through its querying process, and explain how it plans its response. Musk emphasized that the model is still only a test version and will be improved continuously: "You can see an improved version almost every 24 hours."
It is reported that Musk's xAI is negotiating a funding round to raise about $10 billion at a valuation of about $75 billion. According to PitchBook, the company's most recent valuation was about $51 billion. Meanwhile, Musk's social platform X is negotiating to raise funds at a $44 billion valuation.
X Premium+ and SuperGrok subscribers will get higher usage limits and first access to advanced features such as voice mode.
Grok3 experience address: https://x.com/i/grok
How did Musk build the largest data center cluster in 122 days?

In the 40-minute livestream, Musk spent more than ten minutes explaining in detail how he built the data center cluster.
Musk mentioned that training Grok-2 took about 6,500 H100 GPUs, and that they prepared 100,000 GPUs for Grok-3. Even though Musk could assemble 100,000 GPUs in a very short time, the xAI team still had to solve a series of problems around energy and siting.
"In 122 days, we got 100,000 GPUs up and running. I believe this is the largest H100 cluster of its kind already in production," Igor added. Over the next 92 days, xAI brought in another 100,000 more GPUs and accelerated the launch of Grok-3.
First, they urgently needed a site. Building a new facility from scratch would have taken too long, so they prioritized existing, abandoned factories, ultimately settling on the site in Memphis, Tennessee, where Musk's data center cluster is now mainly located.
With the site secured, they next had to solve the data center's energy problem. To stay on schedule, xAI kept the data center running on a large fleet of rented generators until the facility's own power system was fully built and connected to the public grid.
During trial operation, xAI found the supercomputing center's power to be very unstable: the load drawn by the GPU cluster fluctuated sharply, often causing generator failures. To solve this, xAI borrowed a team from Tesla and ultimately used Megapacks to smooth overall power consumption, forming a relatively stable power delivery system.
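The Megapack approach can be understood as putting a battery buffer between the generators and the cluster: the supply delivers a steady average while the battery absorbs peaks and fills dips. A minimal sketch of the idea, with made-up numbers (the function and load profile are illustrative, not xAI's actual figures):

```python
# Sketch: smoothing a spiky GPU load with a battery buffer.
# The supply provides a constant average; the battery charges on
# surplus and discharges on deficit, so the generators never see
# the raw fluctuations.

def smooth_with_battery(load_kw, supply_kw, capacity_kwh, step_h=1.0):
    """Return the battery state of charge over time, or None if it runs dry."""
    soc = capacity_kwh / 2  # start half-charged
    trace = []
    for load in load_kw:
        soc += (supply_kw - load) * step_h  # surplus charges, deficit discharges
        soc = min(soc, capacity_kwh)        # can't charge past capacity
        if soc < 0:
            return None                     # buffer too small for this profile
        trace.append(soc)
    return trace

# Spiky training load fluctuating around a 100 kW average (illustrative)
load = [60, 140, 80, 120, 100, 160, 40, 100]
trace = smooth_with_battery(load, supply_kw=100, capacity_kwh=200)
```

If the buffer is sized too small for the swings, the function returns `None`, which mirrors the real failure mode: without enough storage, the generators have to chase the spikes directly.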
The data center's power no longer hinges on one or two switches; the xAI team redesigned the entire cluster. "Specifically, our data center is now built in a fairly unusual way. You can walk into our facility, pull out a few cables, and the data center keeps working normally. That's probably something most data center teams don't pay attention to," Paul said.
After solving power, they had to manage the network. Musk mentioned that during the construction phase the team was fixing problems such as mismatched network connection devices at 4 a.m.
Once the data center was operating, cooling became the problem: Musk said they rented nearly a quarter of the mobile cooling equipment in the US to keep temperatures in the facility normal, and built out a complete liquid cooling system.
Musk built a supercomputing center with 100,000 GPUs in just 122 days, then doubled it within another 92 days. The story shows that building a real data center is not easy, but Musk's powerful resource-integration ability made the goal achievable.
Despite starting months behind Microsoft, Meta, and OpenAI, Musk's xAI still launched the Grok series and quickly trained Grok-3 to catch up with the first tier of large AI models. Rapid response, strong resource support, and an excellent talent team are all advantages for xAI in the competition among large AI models.

Try Grok-3 for yourself: is 9.11 bigger than 9.9?

According to the livestream, Grok-3 outperforms Grok-2 by 10x across all performance categories.
Although the model is still in testing, Grok-3 scored higher than OpenAI's GPT-4o, Google's Gemini, DeepSeek's V3, and Anthropic's Claude on math, science, and coding benchmarks. OpenAI co-founder Andrej Karpathy posted his preliminary impressions on X, writing that "it feels comparable to the leading level of OpenAI's most powerful model."
In the livestream, chief engineer Igor confirmed earlier speculation from netizens that the "chocolate" model was a prototype of Grok-3; it scored over 1,400 points in blind testing and was well liked by many users.
To showcase Grok-3's reasoning ability, xAI asked it to draft a Mars migration plan: how humans could travel from Earth to Mars and back, rendered as a 3D animation. Grok-3 began thinking as soon as it received the instruction; as xAI's latest reasoning-capable model, it can also present its thinking process to users.
But what is shown is not the complete chain of thought: Musk mentioned that, to keep the core logic from being "plagiarized," part of the thinking process is hidden.
xAI then gave Grok-3 another new instruction, one of the things xAI staff most enjoy doing internally: create a highly innovative game combining the mechanics of the tile-matching game Lianliankan and Tetris. About ten minutes later, the game was generated and ran successfully.
In our own testing, we found that Grok-3's math performance fluctuates. For example, when we tried it on February 20, Grok-3 still couldn't tell which of 9.9 and 9.11 is larger.
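Part of why this question trips up language models is that the answer depends on how the tokens are read: as decimal numbers, 9.9 is larger; as software-style version numbers, 9.11 is "larger" (minor version 11 > 9). A minimal sketch of the two readings:

```python
# 9.9 vs 9.11: the answer flips depending on the interpretation.
# Models trained on both mathematical text and version-number text
# can conflate the two readings.

decimal_larger = max(9.9, 9.11)  # plain numeric comparison

def version_key(v):
    # split "9.11" into (9, 11) and compare component-wise
    return tuple(int(part) for part in v.split("."))

version_larger = max("9.9", "9.11", key=version_key)

print(decimal_larger)   # 9.9
print(version_larger)   # 9.11
```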
However, if you select DeepSearch in Grok-3, it gathers information from dozens of pages, analyzes it comprehensively, and finally gives a more complete answer.
Grok-3 is best known for its reasoning ability in fields such as math and science, so we picked an international Olympiad problem to ask it. Unfortunately, both the standard Grok-3 and the DeepSearch version answered incorrectly.
Will the Grok models win a Turing Award? xAI releases its first AI agent
Just as chief engineer Igor was about to introduce Grok-3, Musk leisurely recounted the origin of the name: "grok" comes from the Martian language in the novel Stranger in a Strange Land, where it means to understand something deeply. It seems Musk never forgets his Martian dream.
Research engineer Paul noted that only 17 months have passed since Grok-1's release, yet the Grok series has already caught up with the world's first-tier large models, roughly on par with OpenAI's GPT-4o.
"We released Grok-0 17 months ago and basically knew nothing. Seventeen months later, our child has finally graduated from high school and is preparing to go to college," Tony said. Musk added that AI will win major prizes in the future, such as the Turing Award or the Nobel Prize. The analogy may also reflect Grok-3's current math level fairly accurately: comparable to a college-entrance-exam candidate.
xAI believes that having the strongest pre-trained model alone is not enough to build the best AGI. "The best AGI needs to be able to think like a human: self-critique, verify every solution, and reason from first principles," Igor said.
To that end, xAI combines the pre-trained model with reinforcement learning to strengthen the model's own reasoning ability. xAI also has an internal model called "Big Brain" that pushes Grok-3 toward deeper thinking.
Grok-3 currently offers two reasoning models: Reasoning Beta and mini. The mini model responds faster while keeping answer quality roughly on par with Reasoning Beta.
Grok-3 is xAI's first step into reasoning models. Although the model is still being refined, it already puts xAI in the first tier of reasoning models. During the livestream, xAI also named agents as the next step for its model series and launched its DeepSearch product.
The product is aimed mainly at helping engineers, scientists, and programmers, including with code editing. "It's kind of like a next-generation search engine; you can ask it questions," Paul explained.
The livestream closed with a user Q&A session, where xAI addressed open sourcing. As a rule, xAI open-sources the previous-generation model when the next generation officially launches; accordingly, xAI confirmed during the stream that Grok-2 will be open-sourced once Grok-3 officially launches.