Altman admits his mistake: ChatGPT Pro users are out-using the price, and OpenAI is losing money! An exclusive interview revisiting the painful boardroom coup
Editor
2025-01-08 15:02


Image source: Generated by Unbounded AI

Altman has admitted he made a mistake! ChatGPT Pro was priced at US$200 a month on the assumption that it would at least break even. Unexpectedly, users turned out to use it so heavily that the tier is losing OpenAI money... In addition, in an exclusive interview with Bloomberg, Altman revisited the dramatic four-day boardroom coup and reaffirmed his commitment to AGI.

Altman regrets it!

Recently, Altman revealed in an interview that he set the price of ChatGPT Pro essentially off the top of his head.

Unexpectedly, users hammered the service far harder than expected, leaving OpenAI with a serious loss on the tier.

Altman posted on X that ChatGPT Pro is actually losing money. When he priced it at $200, he thought it would be a sure win for OpenAI, given how capital-intensive the business is.

Clearly, he misjudged how heavily users would actually use it, and the model's pricing strategy urgently needs revision.

However, pricing and product discussions are not the focus. In Altman's mind, OpenAI's ultimate goal of building AGI and ASI matters most.

But the pricing episode also highlights a fact: there is no evidence that scaling up will achieve AGI, yet it is certain that the approach will incur enormous costs.

If a single o3 query can cost as much as US$1,000, an o4 query may cost tens of thousands. Can ordinary people really afford AGI?

In a recent interview with Bloomberg, Altman revisited the dramatic four-day boardroom coup, how he has run OpenAI in the two years since ChatGPT launched, and his unrelenting pursuit of AGI.

Highlights of the interview are as follows:

1. The dinner Altman and Ilya shared was the most important moment in OpenAI's founding.
2. In its early days, OpenAI stood out by flying the banner of artificial general intelligence (AGI); going it alone became a recruiting secret that attracted like-minded people.
3. His venture-capital experience convinced Altman that ChatGPT would take off.
4. The ChatGPT subscription model was only a stopgap at the time.
5. Altman recounted being fired and then reinstated within four days.
6. On the day he was fired, Altman already knew for certain that OpenAI's organizational structure had to be reorganized.
7. After becoming famous, Altman feels a strange sense of distance.
8. Protecting core research and development is important.
9. The prototype of AGI is an AI that can replace an excellent software engineer; the key to superintelligence is whether it can accelerate scientific discovery.
10. User feedback is important for improving ChatGPT; ChatGPT has helped many patients with hard-to-diagnose illnesses get treatment.
11. Controlled nuclear fusion is the best green-energy solution for AI.

On November 30, 2022, traffic to the OpenAI website hovered near zero. Back then, for a small startup with limited activity, even the boss didn't bother tracking visitor numbers.

It was a quiet day, the last quiet day the company knew.

Two months later, OpenAI's website traffic had surged past 100 million visitors. People who tried ChatGPT were both thrilled and unnerved by its capabilities.

Since then, nothing has been the same for anyone, least of all Altman.

OpenAI Early History

Q: Your team suggested that now is a good time to look back at the past two years, reflect on certain events and decisions, and clear some things up. But before we start, can you tell the story of OpenAI's founding once more? Its historical value seems to grow by the day.

Altman: Everyone wants a concise story with a single clear moment when everything happened. Conservatively, I'd say OpenAI had at least 20 founding moments in 2015. For me personally, the highlight was the dinner Ilya and I had, just the two of us, at The Counter in Mountain View, California.

Going back further, I'd always been very interested in AI; I studied it as an undergrad. I got distracted for a while, but in 2012 Ilya and others built AlexNet (a convolutional neural network). I kept following the progress and thought, "Oh my gosh, deep learning seems real. And it looks scalable. There's huge potential here, and I should seize the chance to do something about it."

So I started meeting a lot of people, asking who would be suited to do this with me. In 2014, AGI was still pure fantasy; people didn't even want to talk to me about it. Everyone thought it was a joke that might even ruin their careers. But someone said there was one person I had to talk to: Ilya. So I found Ilya at a conference, stopped him in the hallway, and we talked. I thought, "This guy is really smart." I sketched out what I was thinking, and we decided to have dinner together. At that first dinner, he basically laid out our strategy for how to build AGI.

The continuation of the entrepreneurial spirit

Q: What aspects of the spirit of that dinner live on in today's OpenAI?

Altman: Basically everything. We believed in the power of deep learning and that, through a specific technical approach and close collaboration between research and engineering, AGI could be realized. To me, how it has all played out is incredible. Most technical hunches don't quite work out, and obviously some of our initial ideas didn't, especially the structure of the company. But we believed AGI was entirely possible, and that if it was, it would be a major breakthrough for society.

The secret to attracting talents

Q: One of the OpenAI team's initial strengths was recruiting. Somehow, you managed to attract many of the top AI researchers, often while offering significantly less compensation than your competitors. How did you attract them?

Altman: Our pitch was: come on, let's build AGI together. It worked because, at the time, talking about building AGI seemed bizarre and was treated as heresy. That screened out 99% of people, and those who remained were genuinely talented independent thinkers. That's very motivating. If you're doing the same thing as everyone else, like the 10,000th photo-sharing app, it's hard to attract talent. But for work no one else is doing yet, a small number of really talented people will be drawn to it. So our positioning, which sounded bold and even somewhat dubious at the time, attracted a group of talented young people.

Q: Did everyone settle quickly into their roles?

Altman: Most people had full-time jobs then, and so did I, so I did less at the beginning. But over time I fell in love with it more and more, and by 2018 I was completely hooked. At first it ran in a loose, band-of-brothers way: Ilya and Greg managed it, but everyone had their own thing to do.

Q: It seems that the first few years were quite a romantic time.

Altman: Well, that was definitely the most fun period in OpenAI's history. I mean, it's fun now too, but given the impact on the world, I think it was one of the great periods of scientific discovery and a once-in-a-lifetime experience.

Taking over as CEO

Q: In 2019, you took over as CEO. How did this happen?

Altman: I tried to balance OpenAI and Y Combinator, but it was too hard. The idea that we might actually build AGI really gripped me. Funnily enough, I remember thinking at the time that we'd achieve AGI by 2025, our tenth year, but that number was completely arbitrary. People used to joke that the only thing I did was walk into conference rooms and say, "Scale it up!" That's not true, but scaling was a core focus at the time.

The release of ChatGPT

Q: The official release date of ChatGPT is November 30, 2022. Does that feel like a long time ago, or like a week ago?

Altman: I turn 40 next year. On my 30th birthday I wrote a blog post titled "The days are long but the decades are short." Someone emailed me this morning saying, "This is my favorite post; I reread it every year. Will you update it when you turn 40?" I laughed, because I definitely won't; I don't have time. But if I did, the title would be "The days are long, and the decades are even longer." Anyway, yes, it seems like a long time ago.

Q: When the first wave of users poured in and it was obvious this was a big deal, was there a moment of amazement?

Altman: There were a few. For one, I believed ChatGPT was pretty good, but the rest of the company was saying, "Why are you making us release this? It's the wrong decision; it's not ready." I rarely pull rank and say "we're shipping this," but that was one of the times I did.

YC has a famous chart, the potential curve drawn by co-founder Paul Graham: after the novelty wears off, a new technology goes through a long trough before finding product-market fit and finally taking off. It's part of YC culture. In ChatGPT's first few days, usage was higher during the day and lower at night, and the team was saying, "Ha, it's declining." But one thing I learned at YC is that as long as each new low is higher than the previous peak, extraordinary things happen. The first five days looked exactly like that, and I thought, "We're definitely going to exceed expectations."

Paul Graham's Potential Curve

That sparked a mad scramble for computing resources. We urgently needed a lot of compute, but we weren't ready, because we had no clear business model when we released ChatGPT. I remember saying in a meeting that December, "I'll consider some ways of paying for this, but we can't keep debating it." A lot of bad ideas were floated, and no good ones. So we said, "Fine, let's try a subscription model and figure it out later." And we've stuck with it ever since.

We released it on GPT-3.5, and with GPT-4 coming soon, we knew it would get much better. When I talked to people about usage, I kept stressing, "I know we can do better." We improved it quickly, and that's when the global media realized a turning point had arrived.

On March 13, 2023, some OpenAI executives were at the company headquarters in San Francisco.

Q: Are you someone who can enjoy success with peace of mind, or are you already worrying about the next stage?

Altman: My career is a bit unusual. The normal trajectory is to run a large, successful company and then, at 50 or 60, tired of the grind, move into venture capital. Going into venture capital first, staying there for a long stretch, and then becoming a company CEO is a very rare path. There's a lot about it I don't think is good, but one genuinely good part is that you know what's coming, because you've watched and coached so many people as they ran companies.

At the time, one side of me was full of gratitude, and the other side felt like, "I'm strapped to a spaceship, my life has been turned upside down, and it isn't even that fun." My partner tells funny stories about that period. Whatever state I came home in, he'd say, "That's great!" and I'd say, "It's really bad, and it's bad for you too. You don't realize it yet, but it's really bad."

Q: You've long been famous in Silicon Valley, but ChatGPT made you famous worldwide, at a speed comparable to stars like Sabrina Carpenter or Timothée Chalamet. Has that affected your ability to manage employees?

Altman: It makes my life more complicated. But inside the company, famous or not, everyone only cares about one thing: "Where's my GPU?"

But in other parts of life, I feel a sense of distance, which is new to me. I notice that strange feeling with old friends and new ones alike, except those I'm closest to. At work I feel it around people I don't normally interact with; if I have to sit in a meeting with a group I've barely met, I know it's there. But I spend most of my time with the researchers. I promise you, come to a research meeting with me after this, and you'll see they show me no deference at all. Which is great.

Four Hard Days

Conflicts Emerge

Q: A for-profit company with billions of dollars in outside investment that must answer to a nonprofit board sounds like trouble. Do you remember when you first realized the problem?

Altman: There were definitely many such moments. From November 2022 to November 2023, my memory of that whole year is a blur. It felt like we built an entire company from scratch in twelve months, and in public. Looking back, one lesson is that everyone says they won't confuse what's urgent with what's important, and it turns out they do anyway. So I'd say the moment I woke up to reality and realized this wasn't going to work was 12:05 p.m. that Friday.

The Hard Days

Q: It was genuinely shocking when news broke that the board had fired you as CEO. You seem like an emotionally perceptive person. Did you pick up any signs of tension beforehand? Did you know you were the source of that tension?

Altman: I don't think I'm particularly emotionally intelligent, but even I could sense the tension. You know, we were constantly debating safety versus capabilities, the role of the board, and how to balance all of it. So I knew things were tense; what I failed to sense, not being especially perceptive, was that they were about to boil over.

A lot of crazy things happened that weekend. My memory of it, and I may not have the details exactly right, is that they fired me at noon on Friday, and that evening quite a few colleagues decided to resign as well. By late that night I was thinking, "Fine, we'll just start a new AGI project." Then, later still, some executives said, "Hold on, we think this can be turned around. Calm down and wait for our word."

On Saturday morning, two board members called to ask whether I would come back. My first reaction was anger and an immediate no. Then I thought, "Okay, fine," because I really do care about OpenAI. But I said, "Only if the entire board resigns." I now wish I had handled it differently, but at the time the demand felt reasonable. We then had a serious disagreement over the board and started negotiating a new one. Each side found some of the other's ideas unreasonable, but overall we reached an agreement.

Then came Sunday, my most agitated day. From Saturday into Sunday they kept saying, "It's almost done. We're just getting legal advice; the board's consent letter is being drafted." I kept stressing that I didn't want OpenAI torn apart and that I hoped they would be straight with me. They assured me, "Yes, you're coming back. You're definitely coming back."

On Sunday night, they suddenly announced that Emmett Shear would be the new CEO. I thought, "Damn, now I'm really screwed," because I had been completely strung along. By Monday morning, many colleagues were threatening to resign, and at that point the board said, "Okay, we need to reverse our decision."

The fallout is still lingering

Q: The board said it conducted an internal review and concluded you were "not consistently candid" in your communications with it. That's a specific accusation, that you lied or withheld information, but also a vague one: it never said what exactly you weren't candid about. Do you now know what they were referring to?

Altman: I've heard different versions. One was, "Sam didn't even tell the board he was going to release ChatGPT." That's not my memory or my understanding. Though, to be fair, I certainly didn't say, "We're about to release something that will cause a huge stir." I think many of the board's characterizations are unfair. What I understand better is that I had disputes with individual board members, and they were also unhappy with how I tried to push them off the board. I learned from that.

Q: At some point you realized OpenAI's structure would stifle the company's development: a mission-driven nonprofit would never be able to compete for enough compute or move as fast as OpenAI needed to thrive, while the board was a group of idealists who put purity before survival. So you started making decisions to keep OpenAI competitive, which may have required some sleight of hand the board found completely unacceptable.

Altman: I don't think I was playing tricks. What I will say is that, in order to move quickly, the board wasn't always given the full context. One example raised was, "Sam has a venture fund and didn't tell us." The truth is that OpenAI's operating structure is very complicated: neither OpenAI nor holders of OpenAI equity could directly control the fund, and I happen to be someone who holds no OpenAI equity. So I held it temporarily until we could set up a proper structure to transfer it. I didn't think the matter needed to be reported to the board; I'm open now to people questioning that, and I'll explain it more clearly. But at the time, OpenAI was taking off like a rocket and I genuinely didn't have time to explain. If you get the chance, ask the current board members whether they think I've pulled any stunts, because I've tried hard to avoid that.

OpenAI’s current structure

The previous board sincerely believed AGI could go wrong, and I think their conviction was honest. Over that weekend, for instance, one board member told the team here, "Destroying the company might be consistent with the nonprofit board's mission." The line drew plenty of ridicule, but I think it reflects the real power of belief; I believe she meant it. I completely disagree with the specific conclusion, but I respect the ideals and the conviction. I think the old board acted out of a sincere but misguided belief that AGI was within reach and that we weren't being accountable about it. I respect where they started from, while completely disagreeing with what they did.

Q: Obviously, you won in the end. But weren't you traumatized?

Altman: Of course; I was scared. The hardest part wasn't getting through those four days themselves: there was so much adrenaline that I could just keep going, and I was deeply moved by the support from colleagues and the broader community. But the adrenaline soon wore off, and every day afterward felt worse. Another government inquiry here, another former board member leaking false stories to the press there. The people who had put me and the company through it were gone, and yet I was the one left cleaning up the mess. It was December, and it got dark early, around 4:45 in the afternoon. It was wet, cold, and raining, and I'd walk around the house alone, tired and low. It felt unfair; I felt I didn't deserve to be treated that way. But I couldn't stop, because there were fires everywhere that needed putting out.

Q: When you came back to the company, did you worry about how people would look at you? That some thought you weren't a good leader at all and that you'd have to rebuild their trust?

Altman: The reality was worse than that. Once everything was clarified, it was fine. But in the first few days, nobody knew what was happening. When I walked through the office, people would avoid my eyes, as if I'd been diagnosed with terminal cancer. There was sympathy, there was empathy, but no one knew what to say. It was really hard. But I thought, "We still have a complicated job to do, and I want to keep doing it."

How to run a business

Q: Can you talk about how you run the company? What does your day look like? Do you work one-on-one with engineers, or walk the floors of the building?

Altman: Let me look at my calendar. We have a three-hour executive team meeting every Monday; yesterday and today I had one-on-ones with six engineers and attended research meetings. Tomorrow there are several important partnership meetings and a lot of compute-related ones: five meetings on building out compute, three product brainstorming sessions, and then dinner with a major hardware partner. That's roughly it. There are some fixed commitments each week, and most of the rest of the time goes to dealing with whatever comes up.

Q: How much time do you spend on internal and external communication?

Altman: Mainly internal. I don't write inspirational company-wide emails, but I do a lot of one-on-one and small-group discussions, and I communicate over Slack.

Q: Do you feel trapped in it?

Altman: I'm a heavy Slack user. I'm used to pulling signal out of a mess of data, and I can get a lot of information from Slack. Conversations with small research teams provide insight, but the broader channels yield valuable information too.

Q: You've spoken before about ChatGPT's look and user experience, and your views are very definite. As CEO, which tasks do you feel you must be personally involved in, rather than coaching from the sidelines?

Altman: At a company of OpenAI's size, there are very few chances for direct involvement. I had dinner with the Sora team last night and wrote several pages of detailed recommendations, but that doesn't happen often. Occasionally, after a research meeting, I'll make very specific suggestions about what the next three months should look like, but that too is rare.

R&D and Operations

Q: We've talked before about how scientific research can conflict with a company's operational structure. Is there symbolic meaning in the fact that you moved the research department into a separate building a few miles from the rest of the company?

Altman: No, that's just logistics and space planning. We'll build a large campus eventually, and research will still have its own dedicated space. Protecting core research is very important to us.

Q: What is the research department protected from?

Altman: The usual Silicon Valley pattern is to start with a product. As the company scales, revenue growth slows, and one day the CEO launches a research lab full of new ideas to drive further growth. There are a few historical successes, like Bell Labs and Xerox PARC, but mostly it doesn't work out: the company succeeds on its product while its research gets weaker and weaker. We're lucky that OpenAI is growing very fast, perhaps the fastest-growing technology company in history, but that makes it easy to lose sight of research, and I won't let that happen.

We came together to build AGI and superintelligence, a higher purpose, and along the way many things can pull us away from that ultimate goal. I think it's very important not to let ourselves get distracted.

AGI Definition

Q: As a company, you seem to have stopped talking publicly about AGI in favor of discussing different levels of AI, but you personally still like to talk about AGI.

Altman: I think the term AGI has become very fuzzy. If you look at our five levels, you'll find people who call each level AGI. We split it into levels precisely to show our position and progress more concretely, rather than arguing over whether something is or isn't AGI.

Q: What is the threshold at which you'd say, "Okay, we've now achieved AGI"?

Altman: My rough definition is that when an AI system can replace skilled human practitioners at important jobs, I'd call it AGI. Of course, that raises many follow-up questions: does it replace the whole job or only parts of it? Can a computer program decide on its own that it wants to be a doctor? Is it as capable as the top of the field, say the top 2%? How autonomous is it? I don't yet have deep, precise answers to these questions.

But when AI can replace the excellent software engineers companies hire, I think many people will say, well, that's the prototype of AGI. Of course, we keep adjusting the standard, which is why defining AGI is hard. And when I talk about superintelligence, the key question is whether it can dramatically accelerate the rate of scientific discovery.

User Feedback

Q: You now have more than 300 million users. What have you learned from their feedback about ChatGPT?

Altman: Talking with users about what they do and don't do with ChatGPT is very helpful for our product planning. Many people were using ChatGPT for search, which wasn't our original design goal, and at first its search performance was genuinely bad. Later, search became an important feature. Honestly, since we launched search in ChatGPT, I barely use Google anymore. When we first prototyped it internally, I had no idea it would replace my own use of Google.

Another thing we learned from users is how much people rely on it for medical advice. Many OpenAI employees receive touching emails. For example: "I was sick for years, and no doctor could tell me what I had. I entered all my symptoms and test results into ChatGPT, and it suggested a rare disease. I went to a doctor, took the recommended treatment, and I'm now fully recovered." That's an extreme example, but similar things happen often. It made us realize the need is real and that we should invest more in healthcare.

Pricing strategy

Q: Your products range from free to US$20, US$200, even US$2,000. How do you price unprecedented technology? Market research, or guesswork?

Altman: ChatGPT was free at launch, but as users surged, we had to find a way to cover operating costs. We tested two prices, $20 and $42. It turned out $42 felt a bit too high; users wouldn't accept it, but they would accept $20, so we settled on that. This was decided around late December 2022 or early January 2023; it wasn't a very rigorous pricing study.

We're also considering other directions. Many customers tell us they want to pay for what they use: some months they might want $1,000 of compute, other months very little. I still remember dial-up internet, when AOL gave you 10 hours, or 5 hours, of online time per month. I hated metered time and the feeling of being rationed. So I'm also thinking about whether there's a better pricing model that is still based on actual usage.
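The trade-off Altman describes, a flat subscription versus pay-for-what-you-use, can be sketched in a few lines. All figures below are purely hypothetical illustrations, not OpenAI's actual pricing:

```python
# Illustrative only: compare a flat subscription against usage-based billing.
# Prices here are hypothetical and do not reflect OpenAI's real rates.

def flat_cost(months: int, price_per_month: float = 200.0) -> float:
    """Cost of a flat subscription, e.g. a $200/month tier."""
    return months * price_per_month

def metered_cost(monthly_usage_usd: list[float]) -> float:
    """Pay-for-what-you-use: each month costs exactly what was consumed."""
    return sum(monthly_usage_usd)

# A spiky user: one heavy month, then almost nothing.
usage = [1000.0, 5.0, 5.0]
print(flat_cost(len(usage)))   # 600.0
print(metered_cost(usage))     # 1010.0
```

With this usage pattern, the flat plan undercharges the heavy month and overcharges the quiet ones, which is exactly the mismatch that makes heavy users unprofitable under a fixed price.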

AI Security

Q: What does your current safety committee look like? What has changed in the past year or year and a half?

Altman: The tricky part is that we have many different safety mechanisms. We have an internal Safety Advisory Group (SAG) that does systematic technical review and makes recommendations. We have a Safety and Security Committee (SSC) attached to the board. And there is a Deployment Safety Board (DSB) run jointly with Microsoft. So there's an internal mechanism, a board mechanism, and a joint mechanism with Microsoft, and we're working to make all of them more efficient.

Q: Do you sit on all three?

Altman: Good question. The SAG's reports come to me, but I'm not a formal member. The process is: they produce a report and send it to me; I review it and forward my comments to the board. I'm not on the SSC. I am a member of the DSB. Now that we understand the safety process more clearly, I hope to make it more efficient.

Q: Have your views on potential risks changed?

Altman: I think in cybersecurity and biotechnology we'll face some serious short-term problems that need mitigation. Longer term, a truly capable system carries risks that are hard to imagine and model accurately. But I also believe those risks are real, and the only way to have a chance of addressing them is to ship products and learn from them.

Models, chips and energy shortages

Q: Looking at the near future, the whole industry seems focused on three issues: model scaling, chip shortages, and energy shortages. I know they're related; can you rank them by how much they worry you?

Altman: We have plans for each. On model scaling, we've made steady progress on technique, capability, and safety, and I think 2025 is going to be an amazing year. Have you heard of the ARC-AGI challenge? It was designed five years ago as a guide toward AGI: a very hard benchmark that has gone unsolved for five years. A score of 85% counts as passing. Our upcoming model, without any task-specific tuning, scored 87.5%. On top of that, we'll be releasing very promising research results and better models.

On chips, we've been working hard to build out a complete chip supply chain with our partners. We have a team building data centers and producing chips for us, and we also have our own chip-development project. We have a great relationship with NVIDIA, a truly amazing company. We'll announce more plans next year; this is a critical time for scaling up our chip capacity.

Q: So the energy issue...

Altman: Controlled nuclear fusion will do it.

Q: Roughly when?

Altman: Soon. There will be a net-gain fusion demonstration before long. But then you have to build a system that doesn't fail, scale it up, and figure out how to build factories to mass-produce it, and that requires regulatory approval. All of that could take years, but I expect Helion will show a tangible, working controlled-fusion solution soon.

Q: In the short term, is there any way to sustain the pace of AI development without compromising climate goals?

Altman: Yes, but in my opinion, nothing would be better than approving a controlled fusion reactor as soon as possible. I think one particular fusion approach is outstanding, and we should go all out to make it happen.

Trump-Musk Administration

Q: Many of the things you just mentioned involve the government. Now that the new president is about to take office, you have personally donated $1 million. Why?

Altman: Because he is the President of the United States. I support any president.

Q: I can understand why OpenAI might want to stay on good terms with a president known for holding personal grudges. But this was a personal donation, and Trump opposes many things you have supported in the past. Should I read this donation as more about loyalty than patriotic conviction?

Altman: I don’t support everything Trump does, says, or thinks. I don’t support everything Biden does either. But I support America, and I am willing to work with any president, to the best of my ability, in the interests of the country. Especially at this critical moment, I think that transcends political issues. I think artificial general intelligence (AGI) may be developed during this presidential term, and it is important to get this right. Supporting the president's inauguration seems like a small thing to me, not a big decision requiring careful deliberation. In any case, I think we should all want the president to succeed.

Q: He said he hates the CHIPS Act, and you supported the CHIPS Act.

Altman: Actually, I don’t really support it either. I think the CHIPS Act was better than doing nothing, but it's not the best approach we could take. I think we have an opportunity to do something better as a next step. The CHIPS Act did not have the effect we all hoped for.

Q: Obviously, Musk will play some role in the government. He is suing you and competing with you. I saw your comment at DealBook saying you don't think he will use his position to play dirty tricks in the field of artificial intelligence.

Altman: Yes, I do think so.

Q: But to be honest, over the past few years he bought Twitter, then tried to sue his way out of the acquisition. He unblocked Alex Jones's account and challenged Zuckerberg to a cage fight, and these are just the tip of the iceberg of his bizarre behavior...

Altman: I think he will continue to do all kinds of erratic things. He may keep suing us, drop the suit, file a new suit, or something like that. He challenged Zuckerberg to a cage fight, but it never looked like he actually intended to go through with it. He will say a lot of things, attempt a lot of things, and then walk them back; he will countersue when he is sued; he will clash with the government and be investigated by the government. That's his style. As for whether he will abuse his political power to go after business competitors? I don't think he will. That's what I really believe. Of course, I might turn out to be wrong.

Q: When the two of you worked together at your best, what roles did you each play?

Altman: We were quite complementary. We weren't sure exactly what this would turn into, what we would do, or where it would go from there, but we shared the belief that this thing was important, that we needed to work in this general direction and adjust along the way.

Q: I’m curious what your actual partnership is like?

Altman: I don’t remember having any particularly serious disputes with Musk before he decided to leave. Even though there were a lot of rumors, with people saying he would snap at people, throw tantrums, and so on, I never experienced anything like that.

Q: Are you surprised that he raised so much money from the Middle East for xAI?

Altman: Not surprising. They have a lot of money. This is an industry everyone wants to invest in right now, and Elon is Elon.

Q: Assuming you are right, and both Musk and the government have good intentions, what would be the most helpful things the Trump administration could do in the field of artificial intelligence in 2025?

Altman: Build a lot of infrastructure in the United States. One thing I really agree with the president about is that it is incredibly difficult to build things in America right now. Whether it's a power plant, a data center, or other similar facilities, they are all very hard to build. I understand how bureaucracy accumulates, but this situation is bad for the country as a whole, and even worse when you consider that the United States really needs to take a leadership position in artificial intelligence.
