OpenAI successfully created a crisis for itself
Editor
2024-12-24 11:03


Image source: Generated by Unbounded AI

OpenAI’s plan has backfired.

OpenAI stretched a launch event that could have been finished in two hours into 12 days, with sessions of roughly 15 minutes each day. The carefully planned marketing spectacle was meant to create a "fear of missing out" sense of urgency.

What was supposed to whet the audience's appetite instead handed its rivals a target. Over those 12 days, Google released progress on its own AI products more actively and intensively than ever before, and every announcement was a direct strike at OpenAI.

During this period, Google first released its large language model Gemini 2.0, which in media evaluations shows clear advantages over OpenAI's o1 in multimodal capability and processing speed.

Next, Google released Veo 2, a video generation model positioned against Sora, and testers judged it better than Sora in many respects. For example, Veo 2 is stronger at generating realistic, detailed video, while Sora is more prone to unnatural motion and objects; Veo 2 also lets users specify more camera types and shooting techniques through simple prompts, making video generation more flexible.

Even on the 12th day, when OpenAI shook off the dullness of the previous 11 and released its new-generation model o3, whose performance approaches AGI (artificial general intelligence) across many benchmarks, users were unconvinced. Many see o3 as vaporware, just like Sora before it, unlikely to be opened to users any time soon.

Earlier, in a November evaluation released by the non-profit organization METR, Anthropic's Claude 3.5 Sonnet outperformed OpenAI's o1-preview on 5 of 7 tests.

This is not to say OpenAI is no longer strong; it has released two top-tier models, o1 and o3, within three months. But its rivals are closing in step by step.

OpenAI, which once led the industry by leaps and bounds, has gone from "godlike" to merely "excellent" in two years. It now stands at the same starting line as more and more rivals, and in some areas has even begun to be surpassed.

The 12-day event went from being highly anticipated to being poorly received.

Beyond the event itself, OpenAI's situation is not encouraging.

To grow revenue as quickly as possible, OpenAI Chief Commercial Officer Lionetti said, the company has been trying to strike deals with customers in industries such as healthcare, manufacturing, legal services, and education. For example, OpenAI released ChatGPT Edu for campuses in May this year and recruited a former executive from the American online education giant Coursera to head its education business.

Unfortunately, the results have run counter to expectations. According to data from venture capital firm Menlo Ventures, OpenAI's share of the enterprise AI market has dropped from 50% to 34% this year, while Amazon-backed Anthropic's share has doubled from 12% to 24%.

OpenAI has gone from highly anticipated, to disappointing, to rapidly losing market share. How did this dramatic reversal happen?

01. There are no secrets

During OpenAI's 12-day event, according to The Information, two more core figures resigned from the company.

One of them is Alec Radford, an OpenAI veteran who joined in 2016 and contributed to every model from GPT-1 through GPT-4o. Emad Mostaque, founder of Stability AI, the company behind the open-source text-to-image model Stable Diffusion, once said of Radford: if he leaves, OpenAI will collapse.

With his departure, all of the core authors of the GPT-1 and GPT-2 papers have now left OpenAI.

It can be said that 2024 has become an important turning year in the development history of this company.

Before that, ChatGPT was released on November 30, 2022, and surpassed 100 million users within two months. The global internet industry was electrified, and a consensus quickly formed: this was the clarion call of a new era. Tech giants and startups alike rushed to chase OpenAI. By the first quarter of 2024, the number of large AI models worldwide had reached 1,328, with the United States accounting for the largest share at about 44% and China for 36%.

On March 14, 2023, OpenAI released GPT-4, which once again amazed the world. Each of its iterations pushed the boundaries of what people understood AI could do; the company was a beacon for the AI industry.

By 2024, everything seemed to have changed. The people who built that halo have left OpenAI in waves this year, including co-founders Ilya Sutskever and John Schulman, chief technology officer Mira Murati, GPT creator Alec Radford, leaders of safety and product, and search director Shivakumar Venkataraman, who departed just a few months after joining.

Only a few of them started their own companies; most joined OpenAI's competitors, such as Google, Amazon, Anthropic, xAI, Meta, and Microsoft.

Their paths to improving large models are almost all the same: more GPUs, more data, and more top talent.

Naveen Rao, vice president of AI at Databricks and a figure closely involved in recruiting top Silicon Valley talent, said in a recent interview that with top researchers moving so frequently between organizations, there are almost no trade secrets anymore.

A serial entrepreneur, Rao took charge of Databricks' AI products after selling his previous company, MosaicML, to Databricks for $1.3 billion in 2023. The data analytics company just closed the largest funding round in Silicon Valley history this month, raising $10 billion.

In his view, fewer than 1,000 researchers worldwide can truly build new frontier models, which is why competition for them is so fierce. Few industries see top talent change jobs so frequently, and because demand far outstrips supply, these researchers enjoy unusual freedom and leverage.

Rao said that in the large-model race, researchers wield unprecedented influence within their organizations: a single researcher's idea can completely change a product and have a huge impact on a company's future.

It is the tech industry's best stage yet for individual heroics.

The exodus of so many of OpenAI's core personnel has inevitably carried the secrets and influence behind its rise to its rivals, while dismantling OpenAI's competitive moat.

02. OpenAI's next card, GPT-5, does not look promising

On the last day of OpenAI's 12-day event, which was supposed to be its "big day," the Wall Street Journal dropped a bombshell: development of GPT-5 has stalled.

Reportedly, OpenAI has been pushing GPT-5 development at full speed since GPT-4's release in March 2023. The project, internally code-named Orion, has been in the works for 18 months.

As the largest investor in OpenAI, Microsoft originally hoped to see the launch of the new model by the middle of this year, but this goal has not been achieved.

OpenAI has run at least two rounds of large-scale training, each taking several months to process massive amounts of data, and each running into new technical problems. According to people familiar with the matter, the compute cost of a single six-month training run may have reached about $500 million. Whether it is worth pouring in such a staggering sum again has become an open question.

In addition, the data used to train large models generally comes from sources such as books, academic publications, news articles, and social media posts. Experience shows that the more data you feed a large model, the more capable it becomes. However, researchers on the Orion project say the existing internet data is no longer sufficient; they need more diverse, higher-quality data.

This echoes OpenAI's departed co-founder Ilya Sutskever, who has pointed out that the data on the internet is close to drying up.

Informed sources say OpenAI has even begun exploring so-called "synthetic data", data generated by AI, to train Orion. Research shows, however, that training on AI-generated data can cause models to malfunction or produce meaningless answers.

The industry urgently needs new approaches to training large models, but there is still no consensus on which method is the best choice. As Sutskever put it, "In the past ten years of AI predictions, I have almost always been right, but now I have no idea what will happen next."

Even so, some in the industry argue that although progress in large-model capability may be hitting a bottleneck, there is no need for excessive anxiety. Today's large models already far exceed the AI technologies of the past, and they can still significantly change business operations and even reshape the competitive landscape of entire industries.

Some AI investors believe the diffusion of AI may be slowing, but new developments still arrive every month, and large models are gradually being applied to different scenarios and industries; these changes are simply not always visible to the public.

It can be said that now is the golden age of the artificial intelligence industry, but OpenAI is indeed facing more challenges and crises than before.
