Altman's regret: OpenAI chose "non-profit" out of ignorance
Editor
2024-12-09 11:02


Image source: Generated by Unbounded AI

Just like that, Altman let slip OpenAI's latest usage numbers:

More than 300 million weekly active users; more than 1 billion messages sent on the platform every day; about 1.3 million U.S. developers building on OpenAI, with the worldwide figure even larger.

The rapid growth in users and their heavy usage have driven a sharp increase in compute demand, one OpenAI itself did not expect.

Altman personally stated that if he had known this at the time, OpenAI would not have chosen a non-profit structure from the beginning.

These are the latest remarks from OpenAI CEO Sam Altman, made in an interview at the just-concluded 2024 New York Times DealBook Summit.

Of course, he shared much more than that, including:

The delicate relationship with Microsoft; his attitude toward the rivalry with Musk's xAI; the expectation that AGI, the stepping stone to Super Intelligence, will arrive faster than imagined but with far less impact than expected; that scaling will continue, and the three key factors behind it; the thinking behind the shift to a for-profit structure; why he takes no equity and draws only an annual salary; thoughts on the current state of AI safety; protections for human creators now that AI is here (or soon will be); and how he views AI as a (soon-to-be) father...

At the same time, Altman took a moment to send a tweet, leaving a Christmas gift guaranteed to make technology workers around the world either nervous or excited:

Starting today, OpenAI will run livestreams for 12 consecutive days, one per day, releasing a batch of new things big and small.

Okay, before the December carnival officially starts, let's take a look at what Altman had to say in the interview.

The full 37-minute video is linked at the end of the article; anyone who wants to watch it firsthand can scroll down and skip ahead (wink).

(The following is in Altman's own voice; some content has been trimmed and rearranged without changing his original meaning.)

Topic 1: Launching ChatGPT was the right decision, though we didn't know it at the time

You know, at some point, for some reason, it suddenly becomes clear to you that a technology is working.

At that time, inside OpenAI, we clearly felt that language models would keep scaling and could do all kinds of useful things.

You may ask: why was ChatGPT released two years ago, rather than when GPT-3 and its API launched, or a few months later when GPT-4 launched? Why that exact moment?

The early GPT-3 API couldn't do many things. But one killer use case did emerge: before actually calling the API, developers would chat with the model in the playground. "Oh my god, let's quickly test whether this idea works."

Yes, people chatted with GPT-3 about everything, and that was the main way people used it.

So our team thought: if this is what people want, let's make it easier to use! No registering a developer account or other complicated steps, and we could even make the model better at conversational chat, based on everyone's favorite use case.

So we decided: Fine, let’s turn it into a product!

We originally planned to use GPT-4 as the underlying model, since GPT-4 would be finished around August 2022. But we ended up holding GPT-4 back for a while.

Back to that "at some point" I mentioned at the beginning.

We had always believed that an important moment like that would come for the world.

But are you asking whether we knew back then that launching ChatGPT was the right decision? Of course not!

Topic 2: Expect AI skeptics to be "wow" next year


There is uncertainty in everything, just as people have different definitions of Super Intelligence.

Two years ago I said that humans might have Super Intelligence within a few thousand days. Back then, I felt we were on a pretty steep curve, and we were just starting to open our eyes to it.

So I think "Super Intelligence within a few thousand days" is possible. If things go well, that day may come soon; of course we hope the road ahead is smooth, just as we believe deep learning is an incredible new discovery for mankind.

We believe we can reach Super Intelligence, that OpenAI has a responsibility to push AI to that stage, and that everyone should then benefit from it broadly.

Yes, we believed we could, and we believe we can now.

But everything is full of unknowns, and there is still a lot of hard work, research, engineering and the like.

In the near future, we could have Super Intelligence.

I estimate that by around 2025 we will have an AI system that commands attention, one that makes even the remaining skeptics say "wow," beyond all their expectations.

You know, you'll be able to give an AI system a very complex task, the way you'd give one to a smart person. It will take its time, use some tools, and create something valuable.

I expect this to happen next year.

Topic 3: Scaling will continue; the keys are compute + data + algorithms

A few weeks ago I tweeted:

there is no wall

Scaling, to put it bluntly, is here to stay.

Many people love to speculate: has it hit a wall, is scaling still working... Sometimes I'm shocked by everyone's guesses.

Why not just look at the progress curve? In my view, betting against the exponential is a poor choice.

Let's put it this way: scaling has three keys: compute + data + algorithms.

There is no fixed ratio among them. For example, with better data you can get away with less compute; or, with more compute, you can use it to synthesize more data.
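Altman doesn't cite a formula here, but one published way to make that compute-for-data trade-off concrete is the Chinchilla-style scaling law of Hoffmann et al. (2022), which models loss as a joint function of model size and data:

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \qquad C \approx 6ND$$

Here $N$ is the parameter count, $D$ the number of training tokens, $C$ the total training compute, and $E, A, B, \alpha, \beta$ are fitted constants. For a fixed budget $C$, training on more tokens $D$ forces a smaller model $N$ and vice versa, which is exactly the "no fixed ratio" point above.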

For a while, the gains in front of us were incredible and easy; more recently, a lot of the progress has come from algorithms.

At different times, the rewards may be different.

But to be honest, the advancement of algorithms is "the big, the biggie, the biggest one".

The Transformer is an excellent recent example. That kind of breakthrough happens rarely, but when it does, the payoff is huge.

In practice, though, all three areas need to advance together. The returns from any one of them may be higher at a given time, but we always watch the coordinated development of all three.

The reality is that I think all of it is competitive, but the compute race is the most interesting, the most dramatic, the most important, and it gets the most attention.

Compute really matters and I don't want to dismiss it, but people are also working extremely hard to come up with better algorithms and to lock down new data sources.

So I think compute, algorithms, and data all deserve attention.

But!

Anyway, we’ve got a bunch of new goodies, so we’re doing some fun stuff:

Starting tomorrow, we're doing 12 days of OpenAI livestreams: for the next 12 weekdays, we'll launch or demo something every day.

I won't spoil much here, but in short: steady, continuous progress.

Topic 4: "I have never heard of anyone worrying about using Microsoft services"

(The interviewer notes: you have a partnership with Microsoft, and OpenAI currently relies on them...but it's obvious you are unbundling.)

I don't think we're drawing a line.

I won't pretend there are no disagreements or challenges at all, but overall this partnership has been very positive for both companies, and we look forward to doing more together in the future.

Maybe we have some ideas the outside world would find crazy, the high-risk, high-return kind.

OpenAI needs to secure the computing resources it wants, but that doesn't mean OpenAI has to become a company that's good at building super-clusters. We can lean on the partnership with Microsoft.

That may just be a product of the era I grew up in.

Entrepreneurs used to have to build their own hardware and computing clusters, and then all of a sudden AWS changed that, didn't it?

So I've stayed open-minded. You may have to do everything yourself at first, but over time certain integration work grows in importance.

And OpenAI prefers to focus on the research and product development we are good at.

On November 1, ChatGPT launched its AI search experience, my favorite product/feature of ours in a long time. I'm a heavy user, and it has completely changed the way I use the internet.

Now our product has expanded to a considerable scale:

Two years ago ChatGPT was nothing; now it has more than 300 million weekly active users, users send it more than 1 billion messages a day, and about 1.3 million US developers build on it, not counting the global total.

So we need a lot of computing power, much more than we expected.

Expansion this rapid is unusual in business history. And I've never heard of anyone being upset about using Microsoft's services.

We have things we are good at, and Microsoft is really good at its own business. We really need to find a balance between the two.

But let me reiterate: we are not at daggers drawn. Overall, I think our motivations are aligned.

Topic 5: AGI is coming quickly, but its influence will be far lower than expected

We have said before that OpenAI treats AGI as a milestone along the way.

We also gave ourselves a lot of room to maneuver along the way because we didn’t know what to expect.

My guess is that AGI will come sooner than most people expect, but its actual impact may be far smaller than the popular imagination suggests.

After the emergence of AGI, the world will continue to operate, but the economy will develop faster and the growth rate will accelerate.

The bigger challenge comes from Super Intelligence after AGI, and that is the stage we need to pay more attention to.

But the evolution from AGI to Super Intelligence will be a long, drawn-out process.

Still, even at the AGI moment there are things to watch. Because of social inertia, for example, the impact will be more visible a few years after AGI arrives; as with every major technological change, many people's jobs will change or move, but I think we will always find things to do.

But I'd bet we've never seen change this fast. I believe researchers will find ways to avoid the dangers as far as possible.

I am an overly optimistic person by nature, but I mean it: the smartest people in the world are going to solve this series of technical problems. OpenAI is working hard on it, others are working very hard on it, and we have the incredible tool of deep learning to help us crack these very difficult problems.

AGI brings not only social impact, but also the creation of a large number of jobs and economic value.

But true Super Intelligence, a system not only smarter than you or me but smarter than all of us combined, is a truly incredible thing.

We are responsible for making it technically safe, and there will certainly be policy issues as well. I also feel global coordination will be needed to handle this.

Topic 6: Iterative deployment is the best way to ensure safety

Large AI models are a new and very fast-moving technology. At the beginning, there was no established way to get all parties aligned on how to use them.

And by now, most of society generally regards them as acceptably and reliably safe.

It is actually hard to define exactly what "safe" means for something like ChatGPT.

Some people will think ChatGPT is not safe enough, that it allows things that shouldn't or can't be done; others will say it's probably safer than we think.

But what about the next system? What should we do? Some insist that releasing such an AI system at all is unsafe, because it accelerates a global race and shrinks the time we have to work on safety.

So we stick to our guns and deploy iteratively, on our own terms.

We have to bring these systems into the world; society and the technology must evolve together. And you have to get into the game while the stakes are low, to understand how people will use it, where it falls short, and what role it can play.

Others say: of course there are benefits, but they're not worth the cost!

But we insist that this iterative approach is the best path to security.

Topic 7: I prefer to compare AI to the transistor

If deep learning is something like a law of physics, an important scientific law that humans have discovered, then this technology will inevitably be mastered by many people.

Everyone has their own opinion on the analogy of AI: some compare it to electricity, some say it is like the Industrial Revolution, and others say it is like the Renaissance.

I personally like to compare AI to transistors.

The transistor was a scientific discovery initially mastered by only a few companies, yet it revolutionized our society and reached astonishing scale. One more thing: when people talk about scaling laws, I think the best analogy is Moore's Law.

Later, transistors were widely adopted by companies around the world, and today almost everything around us depends on them. But we wouldn't call something a "transistor device," nor would we call Google "the transistor company," even though they couldn't exist without transistors.

I think AI will have a similar trend, and shockingly powerful models will appear in the future and be widely used in various fields.

People won't be able to imagine devices, products, and services that aren't intelligent, and those companies and products probably won't call themselves AI companies or AI products. In a sense, AI itself, as an "engine," will become commoditized.

And that's no problem, that's good: science should be accessible to the whole of society, which is why we focus on building products like ChatGPT.

Topic 8: I saw Musk as a superhero growing up; he's suing because "we're doing well"

Musk's xAI is a competitor to OpenAI, and one that deserves to be taken seriously.

We have expected this from the beginning, and now it seems that many of xAI’s cutting-edge models are very close to ours.

I really admire how quickly they were able to build a super-cluster.

Musk's recent lawsuit against us makes me very sad.

I regarded Musk as a superhero when I was growing up. What he has done for the world amazes me.

Of course I feel differently now, but I'm still glad he exists.

It's not just that I think his companies are great, though they truly are, but also that he pushed many people, myself included, to think more ambitiously in an era when most people lacked ambition about the future.

I am grateful for that, or at least appreciative.

We co-founded OpenAI, but later he lost all faith in OpenAI and chose his own direction.

That’s no problem.

I have always thought of Musk as a builder, someone who cares deeply about being "that key person." I thought he was the kind of person who would compete in the market and in technology, rather than resorting to the courts.

Whatever the charges in his lawsuit, I think he's doing it because he's a competitor and we're doing a good job.

This makes me sad.

A recent article in the Wall Street Journal asked whether we should be wary of Musk's influence. I'm actually not worried. Of course I might turn out to be wrong, but I firmly believe Musk will do the right thing.

Using political power to attack competitors and benefit his own businesses? I don't think people would tolerate that kind of behavior, and I don't believe Musk would do it either.

While there is much about him I dislike, doing this would run completely contrary to the core values I believe he holds dear.

As for his belief that OpenAI is preventing some potential investors from supporting xAI and other competitors he founded, that is absolutely not the case.

Our policy is very clear:

If someone invests in us and also wants to invest in one of our competitors, that is perfectly fine, but we will limit their access to information.

This is a very standard term, given our company's size and influence. We stop sharing things like our research roadmap with such investors, but they can still invest in our competitors.

Many people find this reasonable.

Topic 9: If I had known this, OpenAI would not have chosen a non-profit model from the beginning


Now I want to talk about why we chose the non-profit model in the first place.

When we started, OpenAI had no plan to become a product company, and we had no sense of the scale of capital we would need.

Had we known this at the time, we would have chosen a different structure.

It's hard to imagine now, but the environment in 2016 was completely different. We had not yet built a large language model, and we had no products. Our focus was on writing papers, developing new reinforcement learning algorithms, doing theory, getting agents to play video games, and even building a robotic hand.

At that time, we weren’t sure if there would be a product or revenue stream, and we didn’t know if there was a need.

Later, with the release of GPT-1 and other results, we realized we needed to scale massively.

At the same time, Musk stopped providing financial support to the non-profit. We couldn't find other funding sources, so we decided to try a capped-profit structure.

This model worked for a while and still holds true to some extent. But as we move into the next phase of development, the scale of funding required has begun to challenge this nonprofit-controlled model.

But no matter what, nonprofits are not going away.

For example, one possibility the board has discussed is creating a public benefit corporation (PBC), with the nonprofit majority-owning the shares and using that wealth to serve its goals. There are other ideas as well.

Topic 10: I want to go back in time and get some shares, just so I don’t have to answer "why not" now

I have never taken any shares in the company, and my annual salary is $76,000.

The media has indeed reported on this a lot; people find it a bit strange that I took no equity.

If I could go back in time, I might choose to take a small share just so I don’t have to answer this question to others now (laughs).

I've explained many times that my current job is the most interesting, coolest job in the world, and it's my dream job for the rest of my career.

My career was already in a good place, and being able to spend my time doing things I love, working on these projects for little to no pay, is no hardship for me.

However, this explanation is often not fully understood. So I do regret not taking some shares.

It wouldn't have affected my work ethic or effort, but it might have made my incentives look a little clearer to investors, and it would have made fundraising easier.

Some investors have chosen not to invest in OpenAI precisely because I hold no equity; that has happened several times.

For me, this job is my childhood dream...although not every day is easy, and there are many "hit the wall" moments.

But it has always been my dream to be able to participate in the research and development of AGI, work with the smartest researchers in the world, and participate in this crazy adventure.

None of these strange questions and situations can hide the fact that:

This experience is worth more to me than any extra money.

Topic 11: A new standard supporting the "right to learn" and protecting creators' rights


The New York Times is currently suing OpenAI and Microsoft over the content used to train our models. And there are many content creators whose livelihoods depend on their work.

I think we do need a new set of protocols or standards, whatever you want to call it, to ensure that creators get the rewards they deserve.

I strongly support the concept of "right to learn".

Just as a person who reads a physics textbook and learns physics can apply that knowledge elsewhere, an AI should be able to do the same. I feel parts of copyright law and fair use need to continue to apply.

But at the same time, there are many new directions to explore.

For example, I have been paying close attention to how micropayment systems could be implemented.

If someone generates an Aaron Sorkin-style script, the creator could opt in to allowing their name, likeness, and style to be used, and receive revenue from it.

(Pictured: Aaron Sorkin)
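Altman doesn't describe a concrete mechanism, but here is a minimal sketch of what such an opt-in style-licensing ledger might look like; every name, rate, and function in it is hypothetical, not an existing OpenAI API:

```python
# Purely illustrative sketch: an opt-in registry where creators license their
# style and accrue micropayments per generation. All details are hypothetical.
from dataclasses import dataclass, field


@dataclass
class StyleLicense:
    creator: str          # e.g. "Aaron Sorkin"
    opted_in: bool        # the creator must explicitly allow name/style use
    rate_per_use: float   # micropayment credited per generation, in USD


@dataclass
class MicropaymentLedger:
    licenses: dict = field(default_factory=dict)  # creator name -> StyleLicense
    balances: dict = field(default_factory=dict)  # creator name -> accrued USD

    def register(self, lic: StyleLicense) -> None:
        self.licenses[lic.creator] = lic

    def generate_in_style(self, creator: str, prompt: str) -> str:
        lic = self.licenses.get(creator)
        if lic is None or not lic.opted_in:
            raise PermissionError(f"{creator} has not opted in to style use")
        # Credit the creator before returning any output.
        self.balances[creator] = self.balances.get(creator, 0.0) + lic.rate_per_use
        return f"[{creator}-style draft for: {prompt}]"  # stand-in for a model call


ledger = MicropaymentLedger()
ledger.register(StyleLicense("Aaron Sorkin", opted_in=True, rate_per_use=0.05))
print(ledger.generate_in_style("Aaron Sorkin", "a courtroom walk-and-talk"))
print(ledger.balances)  # {'Aaron Sorkin': 0.05}
```

The point of the sketch is that the permission check and the payment credit happen in the same step, so an opted-out creator's style is never used and no use goes uncompensated.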

However, I think arguing now over exactly how far fair use extends to AI is the wrong debate.

Of course, we strongly support approaches similar to the "right to learn". But I believe we do need new economic models to help creators open up more sources of income.

As for the New York Times's position: I'm a guest in someone else's "home," so I don't want to be disrespectful.

This conversation took place at the New York Times’ DealBook Summit, so the interviewer laughed after hearing this: “We will discuss and debate this, maybe in court.”

Last Topic: From the perspective of human existence, new technology changes little

I might have a baby with my partner next year.

Honestly, nothing touches me as deeply as welcoming a child.

Even just preparing to welcome a baby makes AGI seem trivial by comparison, even though I am passionate about AGI.

By comparison, the anticipation of a child excites me more. It has made me re-examine what really matters.

This actually reflects a common phenomenon.

We have been developing amazing new technologies, and there are similar discussions every time: during the industrial revolution, machines took away our jobs; during the computer revolution, computers replaced many professions.

So, what does this mean?

In terms of the meaning of human existence, not much will actually change. The economy will grow and what people do will change. But no matter how advanced the technology gets, people will always love their children far more than they care about AGI or any other technology.

Deep human drives are so powerful and so long-lasting that even though my children will grow up in a very different world, in some ways that world will still be the same as it is now.

Reference links:
[1] https://www.youtube.com/watch?v=tn0XpTAD_8Q
[2] https://x.com/rowancheung/status/1864404295258615858
[3] https://x.com/OpenAINewsroom/status/1864373399218475440