75 internal emails from OpenAI, a Silicon Valley entrepreneurship lesson
Editor
2024-12-25 15:03


Image source: Generated by Unbounded AI

"Inside, every startup is the scene of a car crash; it's just that some of them you can see in the media and some you can't." That is how Paul Graham, founder of the Silicon Valley incubator Y Combinator and a mentor to Sam Altman, once summed up the hundreds of startups he had seen.

Thanks to the feuds and lawsuits between Altman and Tesla CEO Elon Musk, we can now see what OpenAI's early years actually looked like. In several batches, the two sides have released 75 internal emails and text message threads, spanning from the preparations to found OpenAI in 2015 to the creation of its for-profit entity in 2019. They show how a group of Silicon Valley celebrities and gifted AI researchers came together around an ideal, and how they fought for power as OpenAI grew.

These internal records, more than 30,000 words in all, read like a startup masterclass taught by OpenAI itself. They cover how the world's biggest AI startup told its story and assembled an elite lineup in its early days, how it designed salaries and allocated equity, how it competed with Google for talent on a budget and negotiated its partnership with Microsoft, and even a cryptocurrency financing option that was once on the table. We can also watch chief scientist Ilya Sutskever write his biweekly briefings and draw up an AGI research plan, and watch Altman take control of OpenAI step by step to push its transformation through.

We translated these records in full, lightly editing them to preserve their tone without changing their meaning, and arranged them chronologically to present the exchanges as completely as possible.

2015: Inviting Musk - From YC AI to OpenAI - Initial Team

Question

Participants:

Sam Altman took over as president of Silicon Valley incubator YC in 2014 and became OpenAI CEO full-time in 2019.

Elon Musk, then CEO of Tesla and SpaceX, invested more than $40 million in OpenAI and withdrew from the OpenAI board of directors in February 2018.

Subject: Question From: Sam Altman To: Elon Musk Time: May 25, 2015, 21:10 (Monday)

I've been thinking about whether it's possible to stop humanity from developing AI.

I think it's almost impossible.

Since it's going to happen sooner or later, it would be best for someone else to do it first.

What do you think about YC starting something like a Manhattan Project for AI? My sense is we could get many of the top 50 people in the field involved. We could set up a structure in which the technology belongs to the world, through some form of nonprofit, and if the project succeeds, the people working on it would be compensated like startup employees. And of course we would comply with, and proactively support, all regulation.

Sam

Re: Question From: Elon Musk To: Sam Altman Time: May 25, 2015, 23:09 (Monday)

This might be worth discussing further.

Re: Re: Question From: Sam Altman To: Elon Musk Time: June 24, 2015, 10:24 (Wednesday)

Our mission would be to create the first artificial general intelligence (AGI) and use it to empower individuals; that distributed version of the future seems the safest. More broadly, safety should be our first priority.

I think the ideal founding team is 7 to 10 people, and we have a spare building in Mountain View they can use.

As for the governance structure, I suggest starting with five people: I'd propose you, Bill Gates, Pierre Omidyar (eBay founder), Dustin Moskovitz (Facebook co-founder), and me. The technology would be owned by the foundation and used "for the welfare of all mankind"; where it isn't obvious how to apply that, the five of us would decide. The researchers would get substantial financial upside, but it would be decoupled from what they build, which should reduce conflicts of interest (we would pay them a competitive salary and give them YC equity). We would keep discussing which work should be open-sourced and which shouldn't. At some point we'd bring in someone to run the team, but he or she probably shouldn't be on the governance committee. Beyond governance, will you be involved in other ways? I think that would be very helpful for steering things in the right direction and recruiting the best talent, and ideally you'd come by every month (or thereabouts) to look at their progress. We usually call people involved with YC in some limited way "part-time partners" (as we do with Peter Thiel [PayPal co-founder], though he's very active now), but you can decide what we should call you. Even if you can't put much time in, being publicly supportive would also help a lot with recruiting.

As for the regulatory letter, I think the right move is to wait until this project launches, and then I can put out something like: "Now that we're doing this, I've been thinking hard about what kind of constraints the world needs to stay safe." If you don't want to sign, I'm happy to take you off the signing list. I suspect that once the letter is out, more people will be willing to support it.

Sam

Re: Re: Re: Question From: Elon Musk To: Sam Altman Time: June 24, 2015, 23:05 (Wednesday)

I agree with all the suggestions.

AI docs

New participants:

Ilya Sutskever, formerly a senior research scientist at Google, gave up an annual salary in the millions of dollars to join OpenAI as chief scientist. In November 2023 he joined other OpenAI board members in ousting Altman, and he resigned in May of this year.

Subject: AI docs (OpenAI disclosure) From: Sam Altman To: Elon Musk Time: November 20, 2015, 11:48 (Friday)

Elon -

Our plan is for you, me, and Ilya to form YC AI (note: OpenAI's early name), which will be a Delaware-based nonprofit. The bylaws will also provide for electing two outside people to the board, and will state that any technology that could endanger human safety must be approved by the board before release. We will mention this in the researchers' employment contracts as well.

Overall, what do you think of this plan?

I've copied our legal counsel <****> (note: content redacted by OpenAI, same below). Is there anyone on your side who can work out the details with him?

Re: AI docs (OpenAI disclosure) From: Elon Musk To: Sam Altman Time: November 20, 2015, 12:29 (Friday)

I think it should be backed by, but independent of, YC, rather than sounding like a YC subsidiary.

Also, the current structure seems suboptimal; in particular, mixing YC equity with nonprofit salaries may muddle the incentives. It would probably be better to set up a standard C corporation (note: roughly, a for-profit company) alongside a nonprofit organization.

Phone call follow-up

New participant:

Greg Brockman, former CTO of the Silicon Valley fintech unicorn Stripe, later president of OpenAI

Subject: Phone call follow-up From: Greg Brockman To: Elon Musk Time: November 22, 2015, 18:11 (Sunday)

Hi, Elon

It was great to chat with you.


As I mentioned on the phone, here is the latest version of the blog post: https://quip.com/6YnqA26RJgKr. (Sam, Ilya, and I are considering new names; any suggestions are more than welcome.)

Obviously there are a lot of details to work out, but I'm curious what you think of getting the message across this way. I don't want to be evasive, and I'll make the message more direct if you think that's appropriate. The most important thing, I think, is that our message appeals to the people doing the research (or at least the people we want to recruit). My hope is that we can enter the field as a neutral group, collaborate widely, and steer the conversation toward "a win for humanity" rather than for any particular team or company. (I believe that is also how we best become a leading research institution.)

I've attached the offer template we've been using, with an annual salary of $175,000. Below is the email template I send to candidates:

Attached is your official YC offer! Please sign and date it at your convenience. We'll also send you two additional documents:

1. A separate letter granting you 0.25% of each YC batch (as compensation for serving as a YC advisor).

2. An at-will employment agreement, a confidential information agreement, an invention assignment agreement, and an arbitration agreement.

(This is the first formal offer we've issued, so apologies for any rough edges, and please let us know if you have any questions!)

We plan to offer the following benefits:

Health, dental, and vision insurance
Unlimited vacation, with four weeks per year recommended
Paid parental leave
Conference fees covered by YC AI when you present YC AI work or attend at YC AI's invitation

We're also happy to provide visa support. If you're ready to discuss visa matters, feel free to contact me and I'll connect you with Kirsty at YC.

If you have any questions, please feel free to contact me – I’m always willing to chat! Looking forward to working together :).

gdb (note: Greg Brockman's initials)

Re: Phone call follow-up (OpenAI disclosure) From: Elon Musk To: Greg Brockman Time: November 22, 2015, 19:48 (Sunday)

The blog post looks good, but it needs a more neutral tone that doesn't lean so heavily toward YC.

I'd like to pivot the blog toward something more engaging for the general public (public support is very important to our success) and prepare a longer, more detailed, more technical version for recruiting, with a link to it at the end of the public version.

We need to announce a number much bigger than $100 million; otherwise we'll look uncompetitive next to what Google or Facebook are spending. I suggest announcing a $1 billion funding commitment. This is real: I will cover whatever the others fail to provide.

The template is generally fine; just change the default reward to a cash bonus paid in installments, with the option to convert it into YC equity, or possibly SpaceX equity (the exact amount needs further confirmation).

Draft of opening paragraph

Subject: Draft of opening paragraph From: Elon Musk To: Sam Altman Time: December 8, 2015 9:29 (Tuesday)

Getting the opening summary right is very important: it's the first thing everyone reads and what the media usually quotes, and it has to go out in a way that attracts top talent. I'm not sure Greg fully appreciates that.

——

OpenAI is a non-profit AI research company. Our goal is to advance digital intelligence in the way most likely to benefit humanity as a whole, unconstrained by the need to generate financial returns.

Our core philosophy is to spread AI technology as widely as possible as an extension of each individual's will, ensuring, in the spirit of liberty, that the power of digital intelligence is not overly concentrated and that it evolves toward the future humanity collectively wants.

The outcome of this venture is uncertain and the pay is lower than at other companies, but we believe our goal and structure are right, and we hope that is what the best people in the field value most.

Re: Draft of opening paragraph From: Sam Altman To: Elon Musk Time: December 8, 2015 10:34 (Tuesday)

How about this?

——

OpenAI is a non-profit AI research company. Our goal is to advance digital intelligence in the way most likely to benefit humanity as a whole, unconstrained by the need to generate financial returns.

Because we have no financial pressure, we can focus on developing the AI technologies with the greatest positive impact on humanity and spread them as widely as possible. We believe AI should be an extension of individual human wills and, in the spirit of liberty, should not be concentrated in the hands of the few.

The outcome of this venture is uncertain and the pay is lower than at other companies, but we believe our goal and structure are right, and we hope that is what the best people in the field value most.

Just got the news...

Subject: Just got the news... From: Sam Altman To: Elon Musk Time: December 11, 2015, 11:30 (Friday)

I just heard that DeepMind is going to make high-priced counteroffers to everyone at OpenAI tomorrow to try to crush us.

Do you mind if I proactively raise everyone's pay by $100,000 to $200,000 a year? I think they're all motivated by the mission, but the money would send a strong signal that we care about them and will take care of them for the long term.

It looks like DeepMind is planning to go to war. They've even been cornering people at NIPS (note: a top academic conference in AI).

Re: Just got the news... From: Elon Musk To: Sam Altman Time: December 11, 2015 (Friday)

Has Ilya given a definite answer?

If anyone shows the slightest hesitation, I'm happy to call them personally. I've told Emma this is my top priority, 24/7.

Re: Re: Just got the news... From: Sam Altman To: Elon Musk Time: December 11, 2015, 12:15 (Friday)

Yes, he's in; I just got his commitment.

Re: Re: Re: Just got the news... From: Elon Musk To: Sam Altman Time: December 11, 2015, 12:32 (Friday)

Awesome

Re: Re: Re: Re: Just got the news... From: Sam Altman To: Elon Musk Time: December 11, 2015, 12:35 (Friday)

Everyone feels great and keeps saying things like, "DeepMind quoted me a good price, but unfortunately they don't have the values of insisting on doing the right thing."

The news (note: the announcement of OpenAI's founding) goes out at 1:30 pm Pacific time.

OpenAI Inc.

New participants:

Pamela Vagata, former AI research engineer at Facebook, left in 2016

Vicki Cheung, former engineer at Duolingo and the healthcare company TrueVault, left at the end of 2017

Diederik Kingma, then still a PhD student, left in 2018

Andrej Karpathy, then still a PhD student, left in 2017 to join Tesla, returned to OpenAI in 2023, and left again in 2024

John Schulman, then still a PhD student, left in 2024

Trevor Blackwell, former YC partner, left in 2017

Subject: OpenAI Company From: Elon Musk To: Ilya Sutskever, Pamela Vagata, Vicki Cheung, Diederik Kingma, Andrej Karpathy, John Schulman, Trevor Blackwell, Greg Brockman Cc: Sam Altman Time: December 11, 2015, 16:41 (Friday)

Congratulations to everyone on taking a great first step!

We are far smaller in numbers and resources than some well-known institutions, but what matters is that right is on our side. I think we have a good chance.

The most important thing we need to focus on is recruiting top talent. Any company's achievements are ultimately the combined work of its team. If we can keep attracting the most talented people, aligned in direction and goals, OpenAI will succeed.

So, please think carefully about who should join us. If I can help with recruiting or anything else, please feel free to let me know. I recommend focusing especially on people who have not yet completed graduate school or even undergraduate studies, but are clearly very smart. Better to get them on board before they achieve breakthrough results.

Looking forward to working with you all,

Elon

2016: Building the Team - Compensation Structure - A Multimillion-Dollar Microsoft Deal - Musk's Objections

Congratulations on the success of Falcon 9

Subject: Fwd: Congratulations on the success of Falcon 9 (OpenAI disclosure) From: Elon Musk To: Sam Altman, Ilya Sutskever, Greg Brockman Time: January 2, 2016, 8:18 (Saturday)

[Forwarded email]

——

Hi Elon,

Happy New Year, ███! (note: █ marks content redacted by OpenAI, same below)

Congratulations on the Falcon 9 landing, truly an amazing achievement. Now it's time to start building the fleet!

Lately I've seen you (and Sam and other OpenAI people) giving frequent interviews praising the virtues of open-source AI, but I assume you also know it's no panacea; there's no magic fix for the safety problem, right? In fact there are strong arguments that the approach you're taking could be quite dangerous and could even increase risk to the world. Some of the obvious points are well covered in this blog post, which I'm sure you've already seen, but there are other important factors worth considering: http://slatestarcodex.com/2015/12/17/should-ai-be-open/

I would love to hear your counter-arguments to these points.

Re: Fwd: Congratulations on the success of Falcon 9 (OpenAI disclosure) From: Ilya Sutskever To: Elon Musk, Ilya Sutskever, Greg Brockman Time: January 2, 2016, 9:06 (Saturday)

The article focuses on the "hard takeoff" (rapid breakthrough) scenario: if a hard takeoff occurs and building a safe AI is harder than building an unsafe one, then by open-sourcing everything we could make it easy for a malicious actor with massive hardware resources to build an unsafe AI, and that AI would take off rapidly.

As we get closer to building AI, it will make sense to be less open. The "open" in OpenAI means that everyone should benefit from AI once it's built, but it's perfectly fine not to share the science (even though sharing everything is definitely the right strategy in the short and probably medium term, for recruitment purposes).

Re: Re: Fwd: Congratulations on the success of Falcon 9 (OpenAI disclosure) From: Elon Musk To: Ilya Sutskever Time: January 2, 2016, 9:11 (Saturday)

That’s right.

Follow-up Thoughts

Subject: Follow-up Thoughts (OpenAI disclosure) From: Ilya Sutskever To: Elon Musk Cc: Greg Brockman, Sam Altman Time: February 19, 2016, 10:28 (Friday)

A few notes:

Solving the "concepts" problem does not by itself get us to AGI. Other problems still to be solved include unsupervised learning, transfer learning, and lifelong learning; our current performance on language understanding is also quite poor. That doesn't mean significant progress won't be made on these problems in the coming years, but we can't say a single problem stands between us and fully human-level AI.

We can't build AGI today because we lack key ideas (or maybe computers are just too slow; we don't know for sure yet). Powerful ideas come from top talent. Large computing clusters help, and are well worth the investment, but their effect is comparatively small.

Over the next six to nine months we expect to achieve results that are significant by conventional standards, because we already have a very good team in place. Breakthrough results that change an entire field are harder, riskier, and take longer, but we also have sound plans for those challenges.

Re: Follow-up Thoughts (OpenAI disclosure) From: Elon Musk To: Ilya Sutskever Cc: Greg Brockman, Sam Altman Time: February 19, 2016, 12:05 (Friday)

Frankly, I'm surprised the AI community has taken this long to arrive at these concepts; it doesn't seem that hard. Linking together large numbers of deep networks at a high level sounds like the right approach, or at least a key part of the right approach.

The probability of DeepMind creating a "deep mind" rises every year. Maybe it doesn't pass 50% within 2 to 3 years, but it probably passes 10%. Given their resources, that doesn't sound crazy to me.

In any case, I think it is much better to overestimate your competitors than to underestimate them.

This does not mean that we should be in a hurry to recruit people who are not good enough. Nothing good can come of this. What we need to do is redouble our efforts to find the best talent in the world, attract them to the company by any means necessary, and give them a high sense of urgency.

In the next 6 to 9 months, OpenAI must achieve some important results to prove that we are truly capable. It doesn’t have to be a major breakthrough, but it should at least be enough for key talent around the world to notice and become interested.

Compensation Framework

Subject: Compensation Framework From: Greg Brockman To: Elon Musk Cc: Sam Altman Time: February 21, 2016, 11:34 (Sunday)

Hello everyone,

We are doing our first round of full-time hiring since we were founded. This is obviously very important, as it will have long-term consequences. I'm not entirely comfortable making these decisions alone and would like some guidance.

The following is our current arrangement:

Founding team: Annual salary $275,000 + 0.25% of YC equity

They also have the option of permanently converting the salary into a fixed $125,000 annual bonus, or equivalent equity in YC or SpaceX. I'm not sure anyone has taken this.

New hires: $175,000 annual salary + $125,000 year-end bonus or equivalent equity in YC or SpaceX. Bonus is based on performance evaluation and may be 0% or greater than 100%.

Special cases: Greg + Ilya + Trevor

Our plan is to keep the base salary roughly the same and use floating bonuses to reward employees who perform well.

Some notes:

The equity vests over 8 years. At the 20% annualized conversion rate we use, the $125,000 bonus is equivalent to 0.12% of YC equity, for a final value of about $750,000. That number sounds more attractive, but the valuation is hard to pin down.

The founding team each started at $175,000 a year. The day after the lab launched, we proactively raised everyone's salary by $100,000, told them we will support them fully if the lab succeeds financially, and asked them to commit to ignoring all counteroffers and trusting us to take care of them.

We are currently interviewing Ian Goodfellow from Brain (note: Google's research division), one of the top two scientists in our field (the other is Alex Graves at DeepMind). He is the best person at Brain, so Google will certainly fight for him. We plan to offer him the founding team's compensation package.

Some salary data:

John's total annual package at DeepMind is $250,000, and he thinks he could easily negotiate $300,000.

FAIR (Facebook's AI lab) verbally offered Wojciech Zaremba $1.25 million per year (nothing specific in writing).

Andrew Tulloch makes $800,000 a year at Facebook (mostly stock, still vesting).

Ian Goodfellow's current package at Google is $165,000 in cash plus $600,000 a year in stock.

Apple is hiring with some desperation, offering $550,000 in cash (plus stock, and presumably more on top). I don't think anyone good would take it, though.

Two specific candidates I'm currently eyeing:

There's a good chance Andrew will accept our offer, but he's worried the salary gap is too big. Ian says salary isn't his biggest concern, but the Bay Area's cost of living is high and he'd like to be able to afford a home. I'm not sure what would happen if Google offered him what they offered Ilya.

My questions now are:

1. I expect Andrew will try to raise the salary. Should we stick with the current offer and tell him he can only join if he's very willing to accept the salary (and tell him others have given up on higher incomes as well)?

2. Ian will interview and (I believe) receive an offer on Wednesday. Should we consider his offer as final, or should we adjust based on Google's offer?

3. Given questions 1 and 2, I'm wondering whether this strategy of keeping salaries level is sustainable. If we keep the status quo, I think we may need to put special emphasis on the value of the floating bonus. Perhaps we should consider using signing bonuses as a lever to attract talent?

4. This is less of an issue, but our intern pay is below market at $9,000 per month (Facebook offers $9,000 plus free housing; Google offers about $11,000 per month). For interns, salary isn't the most important thing, the experience is, but I think we may have lost some good candidates over it. Given that interns earn significantly less per hour than full-time employees, should we consider raising their pay?

Feel free to discuss these issues at any time.

gdb

Re: Compensation Framework From: Elon Musk To: Greg Brockman Cc: Sam Altman Time: February 22, 2016, 0:09 (Monday)

We must go all out to secure top talent. Just raise the salary. If at some point it becomes necessary to restructure existing employees’ salaries, that’s no problem.

We can either attract the best talent in the world or be defeated by DeepMind.

I fully support attracting top talent at all costs.

DeepMind is causing me severe mental stress. If they win, the consequences will be dire, and their "world domination" values worry me deeply.

Obviously they're making major progress; the talent over there is just that strong.

Re: Re: Compensation Framework From: Greg Brockman To: Elon Musk Cc: Sam Altman Time: February 22, 2016, 0:21 (Monday)

Understood, loud and clear. Consider it done. I'll keep working through the specifics with Sam; let me know any time you want an update.

gdb

Wired magazine interview

New participant:

Sam Teller, then Musk's top assistant

Subject: Wired magazine interview From: Greg Brockman To: Elon Musk Cc: Sam Teller Time: March 21, 2016, 0:53 (Monday)

Hi Elon,

I was interviewed by Wired about OpenAI and the fact-checkers asked me some questions. I want to sync up with you on two points in particular to ensure that the answer is reasonable and consistent with your position:

1. Will OpenAI make all research public?

At any time, we will take actions that are most likely to maximize the benefit to the world. In the short term, we believe the best approach is to make our findings public. But this may not be the best approach in the long term: for example, some potentially dangerous technologies may not be suitable for immediate sharing. In any case, we will share all results of all our research freely, with the hope that these results will benefit the entire world, rather than being concentrated in a particular institution.

2. Does OpenAI believe that letting as many people as possible master the most advanced AI technology is the best way to prevent overly powerful AI from falling into private hands and threatening the world?

We believe that using AI to expand individual will is the most promising path to ensuring that AI always benefits mankind. The appeal of this approach is that when there are many agents of equal ability, they can balance each other and prevent the influence of a single bad actor. But we won’t claim to have all the answers: rather, we’re building an institution that can find those answers and take the best action whatever they are.

Thank you!

gdb

Re: Wired magazine interview From: Elon Musk To: Greg Brockman Cc: Sam Teller Time: March 21, 2016, 6:53 (Monday)

No problem.

Questions from Maureen Dowd

New participant:

Maureen Dowd, then a New York Times columnist

Alex Thompson, then an assistant to Maureen Dowd

Subject: Questions from Maureen Dowd From: Sam Teller To: Elon Musk Time: April 27, 2016, 7:25 (Wednesday)

[Forwarded email from Alex Thompson]

——

Hi Sam,

I hope you're having a great day, and sorry to bother you again. I wanted to check whether Maureen could get Mr. Musk's response to some of Mr. Zuckerberg's public comments, in particular Zuckerberg calling Musk's concerns about AI "exaggerated" and criticizing "alarmist" statements about AI's dangers. Details of Mr. Zuckerberg's comments are below:

Asked recently in Germany about Musk's concerns, Zuckerberg called them "overblown" and praised AI breakthroughs, including a system he claimed could use a mobile phone to diagnose whether a skin lesion is cancerous with accuracy comparable to "the best dermatologists."

"Unless we really screw up," he said, machines will always be subservient to humans without becoming "superhuman."

"I think we can build AI that works for us and helps us... Some people play up AI as a huge threat, but I think that fear is far-fetched, and far less of a danger than disease, violence, and other disasters." At Facebook's developer conference in April, he summed up his philosophy as "choose hope, not fear."

Alex Thompson

The New York Times

Re: Questions from Maureen Dowd From: Elon Musk To: Sam Teller Time: April 27, 2016, 12:24 (Wednesday)

History clearly shows that any powerful technology is a double-edged sword. It would be foolish to assume that AI, arguably the most powerful of all technologies, has only one edge.

Microsoft's recent AI chatbot is an example of how quickly things can turn bad. It is wise to approach the advent of AI with caution and to make sure its power is widely distributed, not controlled by any single company or individual.

That’s why we founded OpenAI.

Microsoft hosting agreement

Subject: Microsoft hosting agreement From: Sam Altman To: Elon Musk Cc: Sam Teller Time: September 16, 2016, 14:37 (Friday)

Below are Microsoft's terms: $10 million buys $60 million worth of computing resources, and we also get to make recommendations about what they deploy in their cloud. Let me know if you have any feedback.

Sam

Microsoft/OpenAI Terms

Microsoft and OpenAI: Accelerating the Development of Deep Learning on Azure and CNTK

This non-binding term sheet (the "Term Sheet") was jointly prepared by Microsoft Corporation ("Microsoft") and OpenAI ("OpenAI") and sets out the terms of a potential commercial partnership between the parties. It is for discussion purposes only and does not address all matters that would need to be agreed before entering into a legally binding commercial agreement (the "Commercial Agreement"). The existence and content of this Term Sheet, and all discussions related to it, are confidential information as defined and governed by the parties' non-disclosure agreement ("NDA") dated March 17, 2016. Apart from those confidentiality obligations, this Term Sheet is not itself binding.

Purpose of cooperation

OpenAI is focused on promoting the development of deep learning in a way that benefits mankind. Microsoft hopes to collaborate with OpenAI to accelerate the development of deep learning on Microsoft Azure. To this end, Microsoft will provide Azure computing resources to OpenAI at a discounted price so that OpenAI can effectively advance its mission.

Cooperation goals

Microsoft

Accelerate the development of deep learning on Azure, attract the next generation of developers, and jointly promote deep learning on Azure

OpenAI

Obtain heavily discounted GPU compute for its nonprofit research during the agreement period (3 years): $60 million worth of computing resources for just $10 million; advocate for and promote OpenAI's use of Azure

Participants (legal entities): Microsoft, OpenAI

Proposed agreement signing date: September 19, 2016

Proposed agreement effective date: the date the agreement is signed

Legal document drafter: Microsoft

Cooperation period: 3 years

Engineering terms

Computing resources: Microsoft will provide OpenAI with GPU compute at the agreed price so that OpenAI can run its workloads on Azure.

Geography: Microsoft will determine the location of the computing resources at its sole discretion, based on capacity and availability, and will provide OpenAI with a deployment strategy and timeline.

Service level agreement: Microsoft guarantees that when OpenAI deploys two or more virtual machine instances in the same availability set, at least one will be reachable 99.95% of the time. Microsoft will honor the SLA published on the official Azure website. [Broken link]

Evaluate, promote, and use CNTK v2, Azure Batch, and HDInsight: OpenAI will evaluate CNTK v2, Azure Batch, and HDInsight for its research and provide recommendations for improvements. OpenAI will partner with Microsoft to promote these tools across its research and developer ecosystem and will use Microsoft Azure as its public cloud platform of choice. OpenAI may adopt these products at its sole discretion where appropriate for its research.

Scaling plan: Microsoft and OpenAI will jointly develop a scaling plan to balance capacity across clusters. The initial expansion lead time is a minimum of 30 days and will be adjusted according to Microsoft's future capacity expansion plans.

Capacity allocation: In the short term, OpenAI will receive capacity in a trial cluster in the South Central US region. The K80 GPU clusters coming online in Q4 2016 will provide quota for OpenAI, with further capacity expansion expected in Q1 2017 (calendar year).

Financial Terms

Microsoft will provide $60 million worth of computing resources (including GPUs) at a deep discount, and OpenAI will pay $10 million over the life of the partnership. If OpenAI has used less than $10 million worth of compute by the end of the term, it will pay the difference for the unused portion.

Marketing and PR Terms

Microsoft and OpenAI have committed to jointly promote the deep learning capabilities of the Azure platform and have reached the following consensus:

Ignite: The partnership will be announced at the Microsoft Ignite event, with executives from both sides (OpenAI's Sam Altman and Microsoft's Satya Nadella) on hand to launch the collaboration.

PR: Microsoft and OpenAI will issue a joint press release about the partnership and produce promotional materials such as blog posts and videos.

Re: Microsoft hosting agreement From: Elon Musk To: Sam Altman Cc: Sam Teller Time: September 16, 2016, 15:10 (Friday)

This is really disappointing; it makes me sick. This is bullshit, and exactly what I expected from them.

Evaluate, promote, and use CNTK v2, Azure Batch, and HDInsight: OpenAI will evaluate CNTK v2, Azure Batch, and HDInsight for its research and provide recommendations for improvements. OpenAI will partner with Microsoft to promote these tools across its research and developer ecosystem and will use Microsoft Azure as its public cloud platform of choice. OpenAI may adopt these products at its sole discretion where appropriate for its research.

Suffice it to say, we're happy for Microsoft to donate spare compute to OpenAI and for the world to know about it, but we should not sign any contract or agree to "promote" anything. They can end their support at any time, and we can walk away at any time.

Re: Re: Microsoft hosting agreement From: Sam Altman To: Elon Musk Cc: Sam Teller Time: September 16, 2016, 15:33 (Friday)

I had the same reaction when I read that part, and they have agreed to delete it.

We were initially just hoping to get donations of idle computing resources, but the team wanted to make sure there were enough. I will talk to Microsoft and make sure there are no strings attached.

Re: Re: Re: Microsoft hosting agreement From: Elon Musk To: Sam Altman Cc: Sam Teller Time: September 16, 2016 (Friday)

We should keep a low profile on this. No promises, no contracts.

Re: Re: Re: Re: Microsoft hosting agreement From: Sam Altman To: Elon Musk Cc: Sam Teller Time: September 16, 2016, 18:45 (Friday)

Okay, I will see how much resource support I can get in this direction.

Re: Re: Re: Re: Re: Microsoft hosting agreement From: Sam Teller To: Elon Musk Time: September 20, 2016, 20:05 (Tuesday)

On the basis that "OpenAI acts at its sole discretion and in good faith," Microsoft is now willing to do a deal totaling $50 million, with either side free to terminate at any time. No promotional obligations, no strings attached, and nothing that makes us look like a Microsoft marketing tool. Good to move forward?

Re: Re: Re: Re: Re: Re: Microsoft hosting agreement From: Elon Musk To: Sam Teller Time: September 21, 2016, 0:09 (Wednesday)

As long as they don't actively use it in their marketing, I'm fine with it. Not looking like a Microsoft "marketing tool" is worth more than $50 million.

2017 (Part 1): Chief Scientist's Bi-Weekly Briefing - Determining AGI Research Plan - Preliminary Results

Biweekly Briefing

Subject: Biweekly Briefing (OpenAI disclosure) From: Ilya Sutskever To: Greg Brockman, Elon Musk Time: June 12, 2017, 22:39 (Monday)

This is the first of our biweekly briefings. The goal is to keep you in the loop and to help us make better use of your time with the company.

Compute Resources:

"Compute" is used in two ways: to run a big experiment quickly, and to run many experiments in parallel. 95% of progress comes from the ability to run big experiments quickly; running many experiments in parallel has mattered far less. In the past, a large cluster could help you run more experiments, but it could not help you run a single big experiment quickly.

Academic labs used to be able to compete with Google because Google's only advantage was running many experiments in parallel. Recently, thanks to the work of many different teams, it has become possible to combine hundreds of GPUs and hundreds of CPUs to run an experiment 100x bigger than what fits on a single machine. The result is that the minimum useful cluster is now 10x to 100x larger than before.

Our neural-network bot for Dota's small 1v1 mode uses more than 1,000 cores per experiment, and that is just to win 1v1. For the full 5v5 mode we will run fewer experiments, but each one at least an order of magnitude larger (probably more!).

Summary: what matters is the size and speed of our experiments. In the past, a big cluster could not help anyone run a bigger experiment quickly; today it lets us run a 100x larger experiment at speed. To complete the projects we have planned, we need to grow our GPU count 10x in the next 1 to 2 months (we already have enough CPUs). We can discuss the details when we meet.
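Note: the capability Sutskever describes, combining hundreds of GPUs so one experiment runs far faster, is what is now called synchronous data parallelism: each worker computes a gradient on its own shard of a batch, the gradients are averaged, and every worker applies the same update. The following is a minimal illustrative sketch of that pattern, with the workers simulated in NumPy rather than running on real GPUs; it is our illustration, not OpenAI's code.

# Minimal sketch of synchronous data-parallel SGD. On a real cluster each
# shard would live on its own GPU and the averaging would be an all-reduce.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1024, 8))                # toy regression dataset
true_w = rng.normal(size=8)
y = X @ true_w + 0.01 * rng.normal(size=1024)

n_workers = 4                                 # our simulated "GPUs"
w = np.zeros(8)                               # parameters, replicated on all workers
lr = 0.1

for step in range(200):
    batch = rng.choice(len(X), size=256, replace=False)
    shards = np.array_split(batch, n_workers)        # one shard per worker
    grads = []
    for shard in shards:                             # parallel on real hardware
        err = X[shard] @ w - y[shard]
        grads.append(X[shard].T @ err / len(shard))  # local gradient on the shard
    g = np.mean(grads, axis=0)                       # "all-reduce": average gradients
    w -= lr * g                                      # identical update on every worker

print("parameter error:", np.linalg.norm(w - true_w))

Averaging the shard gradients reproduces the gradient of the whole batch, so many devices behave like one machine running a much larger experiment, which is the sense in which a big cluster now speeds up a single experiment rather than just running more of them.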

Dota 2:

We will solve the 1v1 version of the game within a month; fans of the game care a lot about 1v1 mode. Current stage: a *single experiment* consumes thousands of cores, and adding more distributed compute keeps improving performance. Here's a cool video of our bot doing some pretty clever things: [broken link]

Learning new games quickly:

Infrastructure work is ongoing, and we've established several baselines. Fundamentally it hasn't reached the state we want yet, and we're still tuning it.

Robotics:

Current status: the HER algorithm ([broken link]) can quickly learn to solve many previously unsolvable low-dimensional robotics tasks. It's unobvious, but simple and effective. Within 6 months, using HER plus sim2real methods (e.g. [broken link]), we will achieve at least one of the following: solving a Rubik's Cube with one hand, spinning a pen ([broken link]), or rotating Baoding balls ([broken link]). All of these tasks will run on a robot hand: [Google Drive link] [this video shows human teleoperation, not our algorithm; an OpenAI account is required to view it].
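Note: HER (Hindsight Experience Replay) gets its leverage from relabeling: when an episode fails to reach its goal, the transitions are stored a second time with the goal replaced by a state the agent actually reached, so even failures carry a success signal. Below is a minimal sketch on a toy bit-flip task; the environment and names are our illustration, not OpenAI's implementation.

# Minimal sketch of HER on a bit-flip task: state and goal are n-bit vectors,
# an action flips one bit, and reward is 0 only when state == goal.
import random

N_BITS = 6

def flip(state, i):
    s = list(state)
    s[i] ^= 1                                 # flip one bit
    return tuple(s)

def run_episode(goal, policy):
    state = tuple(random.randint(0, 1) for _ in range(N_BITS))
    traj = []
    for _ in range(N_BITS):
        a = policy(state, goal)
        nxt = flip(state, a)
        traj.append((state, a, nxt))
        state = nxt
        if state == goal:
            break
    return traj

random.seed(0)
goal = tuple(1 for _ in range(N_BITS))
random_policy = lambda s, g: random.randrange(N_BITS)

replay = []
for _ in range(100):                          # collect episodes
    traj = run_episode(goal, random_policy)
    reached = traj[-1][2]                     # state the episode actually ended in
    for s, a, nxt in traj:
        # Standard replay: sparse reward w.r.t. the original (rarely reached) goal.
        replay.append((s, a, nxt, goal, 0.0 if nxt == goal else -1.0))
        # HER ("final" strategy): relabel with the reached state, guaranteeing
        # at least one success reward per episode.
        replay.append((s, a, nxt, reached, 0.0 if nxt == reached else -1.0))

successes = sum(1 for *_, r in replay if r == 0.0)
print(f"{len(replay)} transitions stored, {successes} with success reward")

A full implementation would train a goal-conditioned Q-function from this buffer; the point here is only that the relabeled transitions carry reward signal even when the original goal is never reached.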

Self-play as the key path to AGI:

Self-play in multi-agent environments is magical: if you place agents into an environment, then no matter how smart (or not) they are, the environment will serve them exactly the right level of challenge, which they can meet only by outdoing their competitors. A group of children, for example, will experience one another as real competition; so will a group of similarly capable superintelligent agents. The "solution" to self-play is therefore to keep getting smarter, without bound.

Self-play lets us create something from nothing. The rules of a competitive game can be simple, yet the optimal strategy for playing it can be extremely complex. (Motivating example: [broken link].)

Training agents in simulation on adversarial contests (such as wrestling) can produce very strong flexibility. Here is a video of ant-like robots we trained to fight: <****>

Current self-play work: getting agents to invent their own language (animation in [broken link]). The agents are making progress, though it is still a work in progress.
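Note: the engine of self-play is simply that each agent's training opponent is the current version of the other agent, so the task gets harder exactly as fast as the learner improves. A minimal illustrative sketch (our construction, not OpenAI's code): two no-regret learners playing rock-paper-scissors push each other's average strategy toward the unexploitable uniform mix.

# Minimal self-play sketch: two multiplicative-weights learners play
# rock-paper-scissors against each other. Each side keeps adapting to beat
# the other's current mix, so the challenge escalates automatically.
import numpy as np

# payoff[a, b] = reward to player A when A plays a and B plays b
payoff = np.array([[ 0, -1,  1],    # rock
                   [ 1,  0, -1],    # paper
                   [-1,  1,  0]])   # scissors

rng = np.random.default_rng(0)
wA = rng.uniform(0.5, 1.5, size=3)  # asymmetric starting strategies
wB = rng.uniform(0.5, 1.5, size=3)
eta, T = 0.05, 20000
avgA = np.zeros(3)
avgB = np.zeros(3)

for t in range(T):
    pA, pB = wA / wA.sum(), wB / wB.sum()
    avgA += pA
    avgB += pB
    rA = payoff @ pB                 # expected reward of A's pure actions
    rB = -(payoff.T @ pA)            # zero-sum: B's rewards are the negation
    wA *= np.exp(eta * rA)           # reinforce whatever beats the opponent
    wB *= np.exp(eta * rB)
    wA /= wA.sum()                   # renormalize to avoid overflow
    wB /= wB.sum()

print("A average strategy:", np.round(avgA / T, 3))  # approaches [1/3, 1/3, 1/3]
print("B average strategy:", np.round(avgB / T, 3))

Neither agent faces a fixed task: each one's reward landscape is the other's current strategy, which is the toy version of what the email calls "the environment gives them just the right challenges."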

We have some more cool little projects coming up and will provide updates when we get big results.

Re: Biweekly Briefing From: Elon Musk To: Ilya Sutskever Cc: Greg Brockman Time: June 12, 2017, 22:52 (Monday)

Thanks, this is a great update.

Re: Re: Biweekly Briefing From: Elon Musk To: Ilya Sutskever Cc: Greg Brockman Time: June 13, 2017, 10:24 (Tuesday)

Okay. Let's find the cheapest way to ensure that computing resources don't become a bottleneck...

Things to do with AGI

Subject: Things to do with AGI From: Ilya Sutskever To: Elon Musk, Greg Brockman Time: July 12, 2017, 13:36 (Wednesday)

We tend to assume a problem is hard because smart people have worked on it for a long time without success, and it's easy to think the same of AI. But the progress of the past five years shows that the earliest and simplest ideas about AI (neural networks) were right all along; we just needed modern hardware to make them work.

Historically, AI breakthroughs have come from models that took 7 to 10 days to train, which means hardware defines the boundary of possible breakthroughs. This says more about human psychology than about AI: if an experiment takes longer than that, it's hard to hold all the state in your head and iterate; if it takes less, you simply use a bigger model.

Progress in AI is not purely a hardware game, just as physics is not purely a particle-accelerator game. But if our computers are too slow, no amount of brilliance will produce AGI, just as undersized particle accelerators would keep us from figuring out how the universe works. Fast enough computers are a prerequisite, and all past failures were probably due to computers too slow to support AGI.

Until recently there was no way to combine many GPUs to speed up a single experiment, so in terms of "effective compute" academia and industry were on the same footing. But earlier this year Google improved a classifier architecture using two orders of magnitude more compute than usual, work that would normally consume a great deal of researcher time. And a few months ago Facebook published a paper showing how to train a large ImageNet model on 256 GPUs with near-linear speedup (given a specially configured cluster with a high-bandwidth interconnect).

Google Brain has achieved impressive results over the past year because they have one or two orders of magnitude more GPUs than other institutions. We estimate that Brain has about 100,000 GPUs, FAIR has about 15,000 to 20,000 GPUs, and DeepMind allocates 50 GPUs to each researcher and rents 5,000 GPUs from Brain for AlphaGo. Apparently, when someone runs a neural network on Google Brain, it eats up everyone's quota at DeepMind.

We are still missing several key ideas needed for AGI. How do we use a system's understanding of "thing A" to learn "thing B" (for example, can I teach a system to count, then to multiply, then to solve word problems)? How do we build systems that are curious? How do we train a system to discover the underlying causes of phenomena, to work like a scientist? How do we build a system that adapts to situations it wasn't specifically trained for (for example, applying familiar concepts in an unfamiliar context)? But given enough hardware to run the relevant experiments in 7 to 10 days, history shows the right algorithms get found, just as physicists would quickly figure out how the universe works if only they had a big enough particle accelerator.

There is good reason to believe deep learning hardware will speed up 10x each year for the next 4 to 5 years. The world is used to the comparatively leisurely pace of Moore's Law and is unprepared for the upheaval in capability this acceleration will bring. It won't happen because of smaller transistors or faster clocks; rather, like the brain, neural networks are inherently parallelizable, and the extremely parallel hardware now in development is built to exploit exactly that.

Within the next 3 years, robotics should be completely solved; AI should prove a long-unproven theorem, win programming competitions consistently, and produce convincing chatbots (though none should pass the Turing test). In as little as 4 years, each overnight experiment may use enormous compute; given the right algorithm, people may simply wake up to the arrival of AGI. And that algorithm may well be found within the next 2 to 4 years in massive multi-agent simulation experiments.

To develop safe AGI, OpenAI needs to:

1. Achieve the best AI results every year. Especially as hardware performance grows exponentially, we need results that keep getting better. At current compute levels our Dota and Rubik's Cube projects will produce impressive results; next year's projects will go beyond the norm, but that depends mainly on how much compute we have.

2. Increase our GPU cluster from 600 to 5,000 GPUs as soon as possible. The high-end estimate is $12 million in capital expenditure plus $5 to 6 million in operating expenses next year. Every year after that we will need to double our hardware investment, but we have reason to believe the total hardware cost of reaching AGI will not exceed $10 billion.

3. Scale the team: 55 people in July 2017, 80 in January 2018, 120 in January 2019, and 200 in January 2020. We have learned how to organize the current team; the bottleneck now is simply that there are not enough smart people trying out new ideas.

4. Lock in an overwhelming hardware advantage. <****> says he can build a chip with the compute of a TPU 3.0 within 2 years, which (in sufficient quantity) would put our compute on par with Google's. Cerebras (note: an AI chip company founded in 2015) has a design far ahead of both. If Cerebras's vision comes true, exclusive supply from them would put us far ahead of the competition. After more due diligence we have a complete picture of how to do this; best discussed by phone.

Items 2/3/4 will ultimately require a lot of money. If we can raise it, we have a chance to set the initial conditions under which AGI is born. Funding needs will grow as results scale up. We need to discuss options for raising that money; it is currently the biggest factor outside our control.

This week's progress:

We beat our strongest 1v1 test player (ranked among the top 30 1v1 players in North America; he beats the North American No. 1 about 30% of the time). The bot can still be exploited in a few unusual ways; we are studying these vulnerabilities and fixing them. Another match is scheduled for Saturday. Here is the first game in which we beat the top test player: [broken link]. With every additional day of training the bot gets stronger and harder to exploit.

Progress on robots solving the Rubik's Cube. Improved Rubik's Cube simulation, teleoperated by a human: <****>

Our research on defending against adversarial examples (note: inputs crafted to confuse AI systems) should completely solve the adversarial-example problem before the end of August.

SMS exchange record

New participant:

Shivon Zilis joined OpenAI in 2016 and became a project director at Tesla in 2017, while staying active at OpenAI and helping Musk oversee Neuralink and other companies. She joined the OpenAI board in 2019, gave birth to twins fathered by Musk in 2021, and left the OpenAI board in 2023.

SMS exchange record (disclosed by OpenAI)

Participants: Greg Brockman, Shivon Zilis

Time: July 13, 2017 (Thursday)

OpenAI Note: Greg sent Shivon Zilis (she is the liaison between Musk and OpenAI) the highlights of the day's meeting with Musk.

Shivon Zilis (22:35):

How is it going?

Greg Brockman (22:35):

It's going great!!

Dota: Agreed to announce during TI, The International (note: Dota 2's biggest annual event); he (note: Elon) suggested playing against the best players from the winning team, which I think is a great idea. I asked him to call <****> and he said he would. I think this beats our original plan, which was to announce in advance that we had beaten the best human 1v1 player and then demo the bot on a machine at TI.

GPUs: He said to push hard for what we need.

Cerebras: We briefly discussed the idea of a reverse merger. After Cerebras, the conversation turned to organizational structure (he said the nonprofit was definitely the right choice early on, but may not be the best choice now, a view Ilya and I share for multiple reasons). He said he was going to Sun Valley to ask <****> for a donation.

Shivon Zilis (22:43):

On <****> and the others: I'll work hard to fight for you.

Biweekly Briefing

Subject: Biweekly Briefing From: Ilya Sutskever To: Elon Musk, Greg Brockman Time: July 20, 2017, 13:56 (Thursday)

The robot hand can now solve a Rubik's Cube in simulation:

[Web link] (OpenAI login required)

The physical robot is expected to do the same by September.

The 1v1 bot no longer has easily exploitable vulnerabilities.

It can no longer be beaten with "unconventional" strategies.

It is expected to defeat all humans within 1 month

Sports competition bots:

[Web link] (requires OpenAI account login)

Released an adversarial example that fools a camera from every angle simultaneously:

[Link invalid]

DeepMind used our algorithm directly to produce parkour results:

DeepMind's results: https://deepmind.com/blog/producing-flexible-behaviours-in-simulated-environments/

DeepMind's technical paper explicitly says they used our algorithm directly.

Blog post about our algorithm: [Broken link] (DeepMind is using an older version).

Next plan:

Design a for-profit structure

Negotiate merger terms with Cerebras

Conduct more due diligence on Cerebras

2017 (Part 2): Musk Proposes a For-Profit Restructuring - The Fight for Control - Altman Wins

Beijing wants AI made in China by 2030

Subject: Beijing wants AI made in China by 2030 (OpenAI disclosure) From: Elon Musk To: Greg Brockman, Ilya Sutskever Time: July 21, 2017, 3:34 (Friday)

They will do everything they can to get what we've developed. One more reason to change course.

[News report link]

Re: Beijing wants AI made in China by 2030 From: Greg Brockman To: Elon Musk Cc: Ilya Sutskever Time: July 21, 2017, 13:18 (Friday)

100% agree. We believe the development path must be:

AI research non-profit organization (by the end of 2017)

AI research + hardware for-profit company (starting in 2018)

A government project (timing: ??)

-gdb

Re: Re: Beijing wants AI made in China by 2030 From: Elon Musk To: Greg Brockman Cc: Ilya Sutskever Time: July 21, 2017, 13:18 (Friday)

Let's chat this Saturday or Sunday. I have a preliminary plan I'd like to discuss with you.

Tomorrow Afternoon

Subject: Tomorrow Afternoon (OpenAI disclosure) From: Elon Musk To: Greg Brockman, Ilya Sutskever, Sam Altman Cc: <****>, Shivon Zilis Time: August 28, 2017, 00:01 (Monday)

Can you meet or chat on the phone tomorrow afternoon?

It’s time for OpenAI to take the next step. This is the triggering event. (Note: OpenAI’s AI defeated the strongest human player in a Dota 1v1 match.)

OpenAI Notes

Subject: OpenAI Notes From: Shivon Zilis To: Elon Musk Cc: Sam Teller Time: August 28, 2017, 00:01 (Monday)

Elon,

As I mentioned before, Greg proposed discussing something this weekend. Ilya also joined the discussion and they basically shared everything they were thinking. Here’s a summary of that conversation, organized into 7 unanswered questions below, and I’ve included their comments. Please note that I am not endorsing any of this, just collating and sharing what I have heard.

1. Short-term control structure?

Do you need absolute control? They wondered whether some kind of creative "veto clause" could be devised instead, one exercised only when almost everyone else agrees (not just the three of them [note: Greg, Ilya, and Altman] but perhaps a wider board)?

2. Duration of control and transition arrangements?

Non-negotiable: it seems an unbreakable agreement must be reached ensuring that when AGI is created, it is never under any one person's absolute control. That means that no matter what happens to the three of them, control of the company must be guaranteed to disperse within 2 to 3 years.

3. Time invested?

How much time does Elon plan to devote, and how much time can he actually devote? What is the time frame? Is it one hour a week, ten hours a week, or something in between?

4. How would that time be spent?

They weren’t sure how Elon spent his time at other companies and how he wanted to spend his time here. Greg and Ilya were confident they could handle the software/machine learning side of things, but not so much on the hardware side. They want Elon to invest time in that area because it's their weak spot, but also want him to help in all the areas that interest him.

5. What is the ratio of time investment to control?

They could accept less time with less control, or more time with more control, but not less time with more control. They worry that with too little time invested, one can't absorb enough information to make good decisions.

6. Equity distribution?

Greg's instinct is equal distribution; I personally disagree, and he is seeking other perspectives and willing to adjust his thinking. Greg noted that Ilya effectively contributed millions of dollars by giving up his earning opportunity at Google. They worry the team is too small.

7. Financing strategy?

Their instinct is to raise more than $100 million right out of the gate. They think the data center alone will cost that much, so they lean toward raising more.

Conclusion:

I'm not sure how much of this is feasible, but based on all the data points thrown at them, the following seems to address their current concerns:

Put in 5 to 10 hours a week and have near-total control, or put in less time and have less control. In the extreme case, a creative short-term veto held by someone other than Greg/Sam/Ilya.
A 2-to-3-year limited-control agreement that holds regardless of what happens to Greg/Sam/Ilya.
Initial funding of $200 million to $1 billion.
Greg's and Ilya's stakes ending up slightly more than one-tenth of Elon's (still vague).
Growing the team.

Re: OpenAI Notes From: Elon Musk To: Shivon Zilis Cc: Sam Teller Time: August 28, 2017, 00:08 (Monday)

This is so annoying. Please encourage them to start a company. I've had enough.

SMS communication record

SMS communication record (OpenAI disclosure)

Participants: Greg Brockman, Shivon Zilis

Time: September 4, 2017 (Thursday)

OpenAI note: Over the preceding six weeks, Brockman and the others negotiated terms for the for-profit entity with Musk, who asked for a majority stake. Musk said on a call that he didn't care about the equity itself, only about amassing $80 billion to build a city on Mars.

Greg Brockman (20:19):

Actually, I'm a little confused about the details of the proposed equity ratios and board control.

It sounds like Elon will always get the most control (3 seats or 25%), with all power vested in the board of directors.

Shivon Zilis:

Yes. My guess is that he was going to give you a veto clause initially, but I'm not sure.

Greg Brockman:

What powers does owning a specific percentage of equity actually confer?

It sounds like there will be a permanent board of directors, or at least board members selected from a specific group.

So I'd love to hear the specifics. Also I guess with an even number of board members, 50% means no action can be taken?

Shivon Zilis:

I think it will grow to at least 7 people soon. The question is not that, but when, if at all, it will transition to a normal board.

It sounds like he is not willing to budge from holding 50% to 60% of the shares, so there is no point in discussing whether to have majority control.

Re: Current Status (OpenAI Disclosure) From: Elon Musk To: Ilya Sutskever Cc: Greg Brockman Time: September 13, 2017 12:40 (Wednesday)

Sounds good. The three common stock director seats (you, Greg, and Sam) should be elected by common stockholders. These seats are effectively yours, unless in the unlikely event that you lose the trust of a majority of the common stockholders over time, or you voluntarily leave the company.

I think the Series A preferred stock (of which I hold the majority) should have the right to appoint four (not three) director seats. I don't intend to fill those seats immediately. As I said, I will have control in the company's early days, but that will change quickly.

The initial goal is a board of 12 people (if this board is really going to decide the fate of the world, it may grow to 16). Every director must have a deep understanding of technology, at least a basic understanding of AI, and a strong, sensible moral outlook.

In addition to the four Series A seats and the three common stock seats, each new lead investor/ally may get one board seat. However, a new board member cannot be added if two or more existing board members object. The same rule applies to removing board members.

We would also like to add some independent directors with no ties to the investors. The same rule applies: an addition or removal fails if two or more board members object.

I'm tired and don't want to overcomplicate things. This arrangement seems basically right. With a 16-member board, we would hold 7 of 16 votes and I would have 25% control, which is my bottom line. To me, that sounds about right. If every board member we invite turns out to be against us, then we probably really should accept defeat.

As mentioned, in my experience boards (assuming they are made up of good, smart people) are rational and reasonable. There will be very few serious confrontations decided by a single director's vote, so these voting arrangements will almost certainly (hopefully) turn out to be a non-issue.

Finally, I feel very comfortable and confident in discussing equity and board of directors matters with you.

If you have any ideas about the above arrangements, please tell me directly.

Elon
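Note: To make the seat arithmetic in the email above concrete, here is a minimal Python sketch. The numbers (four Series A seats, three common seats, a board of up to 16, the two-objection rule) come from the email itself; the variable and function names are our own illustration.

    # Sketch of the board arithmetic in Musk's proposal above.
    SERIES_A_SEATS = 4      # appointed by the Series A preferred (Musk holds the majority)
    COMMON_SEATS = 3        # Ilya, Greg, and Sam, elected by common stockholders
    TARGET_BOARD_SIZE = 16  # the upper bound Musk mentions

    founder_bloc = SERIES_A_SEATS + COMMON_SEATS  # 7 of 16 seats

    def seat_change_passes(objections: int) -> bool:
        # Adding or removing a board member fails if two or more members object.
        return objections < 2

    print(f"Founder bloc: {founder_bloc}/{TARGET_BOARD_SIZE} = {founder_bloc / TARGET_BOARD_SIZE:.2%}")  # 43.75%
    print(f"Musk's seats: {SERIES_A_SEATS / TARGET_BOARD_SIZE:.2%}")   # 25.00%, his stated bottom line
    print(seat_change_passes(1))  # True: one objection does not block a change
    print(seat_change_passes(2))  # False: two objections block it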

OpenAI Note: Musk said that he wants to be the CEO of OpenAI. On September 15, 2017, Musk arranged for his subordinates to establish a for-profit company called Open Artificial Intelligence Technologies, Inc.

Candid Thoughts

Subject: Candid Thoughts From: Ilya Sutskever To: Elon Musk, Sam Altman Cc: Greg Brockman, Sam Teller, Shivon Zilis Time: September 20, 2017 14:08 (Wednesday)

Elon, Sam,

This is the most important conversation Greg and I have ever taken part in, and if the project succeeds, it will be the most important conversation the world has ever seen. It is also a deeply personal conversation for each of us.

Yesterday, as we were considering our final commitment, we realized we had made a mistake. There are important concerns we have not raised with you. We didn't bring them up because we were afraid: afraid of hurting the relationship, afraid you would think less of us, afraid we would lose the chance to work with you.

Our concerns may be irresolvable. We sincerely hope not, but we know that if we don't discuss these issues now, we will certainly fail. We also believe that if we can work together, there is still hope to solve these problems and continue to cooperate.

Elon:

We really want to work with you. We believe that if we join forces, our chance of success and our potential will be greatest, no doubt about it. Our desire to work with you is so strong that we are willing to give up equity, personal control, even to be fired at any time; we will pay whatever price it takes to be able to work with you.

But we realized we had been too hasty in thinking about what control would mean for the world. We were arrogant enough not to seriously consider what success might bring.

The current architecture provides a path for you to eventually have unilateral absolute control of AGI. You said you don't want to control the final AGI. But in this negotiation, you showed us that absolute control is extremely important to you.

For example, you say you need to be the CEO of your new company so everyone knows you are in charge, even though you also say you hate being a CEO and would rather not be one.

So we worry that as the company makes real progress toward AGI, you will choose to retain absolute control, even though you say now that you don't want it. We disagree with your claim that our ability to leave the company is our greatest leverage, because once the company is genuinely on the path to AGI, the company will matter far more than any individual.

OpenAI's goal is to make the future better and to avoid an AGI dictatorship. You worry that Demis (Demis Hassabis, founder of DeepMind) could create an AGI dictatorship. So do we. So creating a structure where you could become a dictator is obviously a bad idea, especially since we can create an alternative structure that avoids that possibility.

We have some smaller concerns, but we thought it made sense to bring them up here:

If we decide to acquire Cerebras, I feel strongly that it will be done through Tesla. But why do it through Tesla if we could also do it through OpenAI? Specifically, the problem is that Tesla has a duty to maximize returns for its shareholders, which is inconsistent with OpenAI's mission, so the end result may not be in OpenAI's best interest.

We believe OpenAI succeeds as a nonprofit because both you and Sam are involved. Sam acts as a real counterweight to you, which has been very effective. Greg and I, at least so far, are much worse at counterbalancing you. We think that showed even in this negotiation: we were almost ready to shelve the issue of long-term AGI control, and Sam stood his ground.

Sam:

When Greg and I were stuck, you always had insightful and correct answers. You have thought very deeply and thoroughly about the solution to this problem.

Greg and I understand technical execution, but we don't know how architectural decisions will play out in the next month, year, or five years.

But in this process we cannot fully trust your judgment, because we do not understand how you weigh the trade-offs.

We don't understand why the CEO title is so important to you. The reasons you give keep changing, and it is hard for us to understand what is really driving it.

Is AGI really your primary goal? How does it relate to your political ambitions? How has your thinking changed over time?

Greg and Ilya:

We also had our own failures in this negotiation, and we'll list a few here (Elon and Sam, we're sure you could add many more...):

During this negotiation we realized we were letting the prospect of financial returns 2-3 years from now influence our decisions. That is why we held back on the control issue: the equity felt good enough, so why worry? But that attitude is as misguided as AI experts who dismiss AI safety because they don't really believe they will build AGI.

We did not tell the full truth in the negotiations. Whatever our excuses, this behavior hurt the whole process, and we may lose Sam's and Elon's support because of it.

There are enough problems now that we think it is essential to meet and talk things through. If we don't, our collaboration will not succeed. Can the four of us meet today? If we all speak our truths and resolve the issues, the company we create will be better able to withstand the serious challenges ahead.

Greg & Ilya

Re: Candid Thoughts From: Elon Musk To: Ilya Sutskever Cc: Sam Altman, Greg Brockman, Sam Teller, Shivon Zilis Time: September 20, 2017 14:17 (Wednesday)

Folks, I've had enough. This is the last straw. Either go do something on your own, or continue running OpenAI as a nonprofit. I will no longer fund OpenAI until you make a firm commitment to stay; otherwise I'd be a fool, essentially providing free funding for you to start a company. Discussions are over.

Re: Re: Candid Thoughts From: Elon Musk To: Ilya Sutskever Cc: Sam Altman, Greg Brockman, Sam Teller, Shivon Zilis Time: September 20, 2017 15:08 (Wednesday)

To be clear, this is not an ultimatum to accept what was discussed before. That is no longer on the table.

Re: Re: Re: Candid Thoughts From: Sam Altman To: Elon Musk, Ilya Sutskever Cc: Greg Brockman, Sam Teller, Shivon Zilis Time: September 21, 2017 21:17 (Thursday)

I remain enthusiastic about the nonprofit structure!

Nonprofit

New participant:

Holden Karnofsky, co-founder of Open Philanthropy, an American nonprofit focused on research and grantmaking. Open Philanthropy donated $30 million to OpenAI in 2017; Karnofsky became a director and later stepped down.

Subject: Nonprofit organization From: Shivon Zilis To: Elon Musk Cc: Sam Teller Time: September 22, 2017 09:50 (Friday)

Hi Elon,

Just a quick update: Greg and Ilya said they want to keep the nonprofit structure going. They understand that, for this structure to work, they need to offer guarantees so that (you) won't make other plans.

I haven't spoken to Altman yet, but he said we could do so this afternoon and I'll report back on anything I hear.

If there is anything I can do to help, please let me know.

Re: Nonprofit organization From: Elon Musk To: Shivon Zilis Cc: Sam Teller Time: September 22, 2017 10:01 (Friday)

OK

Re: Re: Nonprofit organization From: Shivon Zilis To: Elon Musk Cc: Sam Teller Time: September 22, 2017 17:54 (Friday)

From Altman:

Structure: He is very willing to remain a nonprofit and will continue to be supportive.

Trust: He admits he lost a lot of trust in Greg and Ilya through this process. He felt their messaging was often inconsistent and at times seemed childish.

Time off: Sam told Greg and Ilya that he needs 10 days off to think. He needs to figure out how much he trusts them and whether he wants to keep working with them. He said he would come back in 10 days and decide then how much time he wants to devote.

Financing: Greg and Ilya believed that hundreds of millions of dollars could be raised through donations if there was a clear direction for the effort. Sam thought raising tens of millions of dollars was feasible, but more funding was unclear. He mentioned that Holden was dissatisfied with the move to a for-profit structure, so more funding may be provided if OpenAI maintains a non-profit structure, but no clear commitment has been made. Sam then mentioned that, optimistically, the amount could reach $100 million.

Communication: Greg and Ilya kept the whole team informed of every development throughout the process, which left Sam dissatisfied; he thought it distracted the team. On the other hand, he is somewhat glad that over the past day nearly everyone was told there won't be a for-profit structure, because he wants the team to get back to work.

Shivon

2018: Considering a coin offering - OpenAI Charter - Musk quits the board

ICO

Subject: ICO From: Sam Altman To: Elon Musk Cc: Greg Brockman, Ilya Sutskever, Sam Teller, Shivon Zilis Time: January 21, 2018 17:08 (Sunday)

Elon —

As a reminder, I've talked to some members of the safety team, and they have many concerns about an ICO and the unintended consequences it could have down the road. (Note: an ICO, or Initial Coin Offering, is the issuance of blockchain-based digital tokens by a company or project to raise funds from investors.)

I plan to discuss it with the entire team tomorrow and solicit opinions. I would emphasize the need for confidentiality, but I think it is important for everyone to be included in the discussion early and have their say.

Sam

Re: ICO From: Elon Musk To: Sam Altman Cc: Greg Brockman, Ilya Sutskever, Sam Teller, Shivon Zilis Time: January 21, 2018 17:56 (Sunday)

Completely agree

The current state of top AI institutions

Subject: The current state of top AI institutions From: Andrej Karpathy To: Elon Musk Cc: Shivon Zilis Time: January 31, 2018 13:20 (Wednesday)

ICLR (the top conference focused on deep learning; NIPS is larger, but its research topics are more dispersed) has just announced its acceptance and rejection decisions. Someone made a nice chart showing the current distribution of deep learning/AI research. It is not a perfect measure, since not all companies prioritize publishing papers, but it is still informative.

Here is a chart showing the total number of papers by each institution (categorized by oral presentations, poster presentations, workshops, and rejections):

Put simply, Google dominates with 83 papers. Academic institutions (Berkeley/Stanford/CMU/MIT) follow closely behind, with 20 to 30 submissions each.

I just thought this was an interesting snapshot of where research activity is currently concentrated. The complete data can be seen here: [Link Broken]

-Andrej

Re: The current state of top AI institutions From: Elon Musk To: Greg Brockman, Ilya Sutskever, Sam Altman Cc: Sam Teller, Shivon Zilis, Andrej Karpathy Time: January 31, 2018 14:02 (Wednesday)

OpenAI is obviously on a losing path compared to Google. Without immediate and drastic action, everyone except Google will become irrelevant.

I have looked into the ICO and will not support it. In my view it would only bring OpenAI and everyone associated with the ICO into disrepute. If something seems too good to be true, it usually is. In my opinion, this would be an unwise diversion.

The only way out I can think of is to significantly expand OpenAI and Tesla's AI division. Maybe both can be done simultaneously. For OpenAI, this will require a significant increase in endowment funding and attracting very credible people to the board. The current board situation is terrible.

I will arrange time to discuss this with you tomorrow. To be clear, I have great respect for your abilities and achievements, but I am dissatisfied with how things are being run. That is part of why I have had a hard time engaging deeply with OpenAI in recent months. Either we fix this and I increase my time commitment, or we don't, and I drop to almost no involvement and make my reduced role public. I will not accept a mismatch between my perceived influence and my actual time commitment.

Re: Re: The current state of top AI institutions From: Elon Musk To: Andrej Karpathy Time: January 31, 2018 14:07 (Wednesday)

FYI

What do you think makes sense here? If it's easier to talk by phone, I'm happy to do that.

Re: Re: Re: The current state of top AI institutions (OpenAI disclosure) From: Greg Brockman To: Elon Musk Cc: Ilya Sutskever, Sam Altman, Shivon Zilis Time: January 31, 2018 22:56 (Wednesday)

Hi Elon,

Thank you for these thoughtful thoughts. I've always admired your ability to see the big picture, and I completely agree that to achieve our goals we must change our current course. Let’s talk tomorrow, any time from 4pm onwards.

I believe that to create the best possible future, OpenAI needs to scale massively. Our goals and mission are fundamentally sound, and this will become increasingly important as AGI gets closer.

Funding

Our fundraising conversations have shown that:

Ilya and I can convince reputable people that AGI really is possible within the next 10 years, and that donations at the scale of our goals are achievable. Their interest in investing is also very high.

I respect your decision on the ICO idea, which is in line with our own thinking. Sam Altman has been working on a financing structure that doesn't rely on an ICO, and we'd love to hear your feedback.

Based on the people who have been talking with us, here are my current recommendations for board members. I'd also like your recommendations for top candidates not on this list, and we can work out how to reach them.

The next 3 years

In the next 3 years, we must build three things:

Custom AI hardware (such as <****> computers)
Large-scale AI data centers (may require multiple iterations)
The best software team, balancing algorithm development, public demos, and safety

Custom AI hardware and AI data centers are what we discuss most. On the software side, we have a credible path (multiple agents playing against themselves in competitive environments) that has been proven out in Dota and AlphaGo. We have also identified a small number of real limitations in today's deep learning that prevent models from learning from experience at a human level. And we believe we are uniquely positioned to solve the safety problem (at least in the broadest sense), with results expected within the next 3 years.

We hope to grow the team as follows:

Beginning of 2017: ~40 people
End of 2018: 100 people
End of 2019: 300 people
End of 2020: 900 people

Moral high ground

Our greatest tool is the moral high ground. To maintain this, we must:

Do our best to remain a non-profit. AI will shake the fabric of society, and we should be accountable to all of humanity.
Put real effort into safety/control issues, rather than the lip service seen at other institutions. It doesn't matter who wins if everyone dies. Related to this, we need to convey a "better red than dead" attitude: we are working hard to develop safe AGI, and we are not willing to destroy the world in a frantic race.
Engage with governments to provide credible, unbiased policy advice; we often hear that they don't trust advice from companies such as [...].
Recognize that we are an organization that creates public value for the research community, and encourage other participants to be honest and open by leading by example.

The past 2 years

I would love to hear how you would evaluate our performance over the past 2 years given the resources available. Here's my take:

In the past 5 years, the field has seen only two important demonstrations of practical systems: AlphaZero [DeepMind] and Dota 1v1 [OpenAI]. (There are many more "technological breakthroughs" that drew wide attention among researchers, most notably: Progressive GAN [NVIDIA], unsupervised translation [Facebook], WaveNet [DeepMind], Atari/DQN [DeepMind], machine translation [Ilya, then at Google, now OpenAI], generative adversarial networks [Ian Goodfellow, developed as a student, now Google], variational autoencoders (VAE) [Durk Kingma, developed as a student, now OpenAI], AlexNet [Ilya, developed as a student, now OpenAI].) By that standard, we are doing quite well.
We expanded rapidly in 2016 and gradually built an effective management structure in 2017. Now, given sufficient resources, we can scale massively.
We are currently losing people over low salaries, but the attrition is essentially limited to pay issues. I am returning to my early recruiting methods, and I believe the results will be better than before. Our team has a reputation for the highest talent density in the field.
We discourage paper-writing, so paper acceptance rates are not a metric we try to improve. As for the ICLR chart Andrej sent, I would expect our ratio of (accepted papers)/(people submitting papers) to be the highest in the field.

-gdb

Re: Re: Re: The current state of top AI institutions From: Andrej Karpathy To: Elon Musk Time: January 31, 2018 23:54 (Wednesday)

Unfortunately, working at the AI frontier is expensive. For example, DeepMind's operating expenses in 2016 were roughly $250 million (excluding compute costs). As its team grows, that is now probably around $500 million per year. But Alphabet earned about $20 billion in net profit in 2016, so even with no revenue from DeepMind, those expenses are minor relative to Alphabet's overall financial health. Beyond DeepMind, Google also has Google Brain, Research, and Cloud. And then there are TensorFlow and TPUs, which together account for about a third of AI research activity (in fact, Google hosts its own AI conference).

I also have a strong feeling that computing power will be a necessary condition (perhaps even a sufficient one) for achieving AGI. If historical trends are any guide, progress in AI is driven primarily by systems: computing power, data, and infrastructure. The core algorithms we use today have changed remarkably little since the 1990s. Moreover, any algorithmic advance published in a paper can be reproduced and integrated into existing systems almost immediately. Conversely, algorithmic progress without sufficient scale behind it is powerless; it cannot produce impressive results.

In my opinion, OpenAI is currently burning cash, and the existing funding model cannot reach the scale needed to truly compete with Google (an $800 billion company). If you can't really compete with them but keep doing research in the open, you may actually be making things worse and helping them "for free," because any progress is easy for them to replicate and quickly integrate at scale.

Moving to a for-profit model could create a sustained revenue stream over time and, given the strength of the current team, could attract significant investment. However, building a product from scratch would distract from AI research, would take a long time, and it is unclear whether a company could ever "catch up" to Google's scale. And investors may apply too much pressure in the wrong directions.

I mentioned before that I think the most promising option is for OpenAI to attach itself to Tesla and use Tesla as a "cash cow." I believe attaching to other large companies (e.g. Apple? Amazon?) would fail because of incompatible company DNA. To use a rocket analogy, Tesla has already built the "first stage": the Model 3's entire supply chain, its on-board computer, and its constant internet connectivity. The "second stage" would be a full self-driving solution based on large-scale neural network training, and OpenAI's expertise could significantly speed that up. If we could launch a fully functional self-driving solution in about 2-3 years, we could sell a lot of cars and trucks. If we do it very well and the transportation industry is big enough, we could push Tesla's market cap to around $100 billion or more and use that revenue to fund AI research at the appropriate scale.

I see no other way to reach Google-level capital scale in 10 years and be sustainable.

-Andrej

Re: Re: Re: Re: The current state of top AI institutions From: Elon Musk To: Ilya Sutskever, Greg Brockman Time: February 1, 2018 3:52 (Thursday)

[Forwarding Andrej's email]

Andrej is absolutely right. We may wish it were otherwise, but in Andrej's view and mine, Tesla is the only path that could even remotely compete with Google. Even then, the odds of becoming a real rival to Google are small. They're just not zero.

AI Update

New participants:

Adam D'Angelo, founder of Quora, who joined Ilya and others in ousting Altman in November 2023 and remains on the OpenAI board.

Subject: AI Update From: Shivon Zilis To: Elon Musk Cc: Sam Teller Time: March 25, 2018 11:03 (Sunday)

OpenAI

Financing:

No more ICO or "advance purchase of computing power" schemes. Altman is designing a new approach: get 4 to 5 large companies interested in OpenAI to invest, with returns capped at 50x if OpenAI eventually achieves some form of profitable AGI. Reportedly these companies seem willing to participate just for access to the research. He would like to discuss the plan with you in more detail.

Officially resigning from the Board of Directors:

Formally, you are still on the board and need to send a short email to Sam Altman along the lines of: "With this email, I formally resign as a director of OpenAI, effective February 20, 2018."

Future Board of Directors:

Altman said he did not mind if I joined the board and later stepped down over conflicts of interest, but he worried that others would read my exit as burning a bridge. I think the best option right now is not to join the board yet, and instead to participate in OpenAI in a loose advisory capacity. If you see it differently, please let me know. They're considering Adam D'Angelo to take your seat; does that seem okay?

Tesla AI

Andrej is considering three candidates, one or two of whom may come to meet you on Tuesday; he will send you their information. He is also working on a first draft of an article for possible publication, which you can discuss together on Tuesday. The article takes the "full-stack AI lab" angle we discussed before; if you feel that direction isn't right, please flag it in time... messaging is indeed tricky.

Cerebras

They plan to test the chip in August and to open remote access to others in September. The Cerebras folks also mentioned that many customers have approached them recently, unhappy with Nvidia's changes to its terms of service (which force companies to move from consumer-grade GPUs to enterprise-grade Pascal/Volta). Scott Gray (an OpenAI engineer) and Ilya continue to work closely with them.

OpenAI Charter

Subject: OpenAI Charter From: Sam Altman To: Elon Musk Cc: Shivon Zilis Time: April 2, 2018 13:54 (Monday)

We plan to publish this next week - any suggestions?

——

OpenAI Charter

OpenAI's mission is to ensure that artificial general intelligence (AGI), by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but we will also consider our mission fulfilled if our work helps others achieve this outcome. To that end, we commit to the following principles:

Broad distribution of benefits

We commit to using whatever influence we obtain over AGI's deployment to ensure it is used for the benefit of all humanity, and to avoid uses of AI or AGI that harm humans or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but we will always act diligently to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

Long-term safety

We are committed to conducting the research required to make AGI safe, and to promoting the broad adoption of such research across the AI community.

We are concerned that late-stage AGI development could become a competitive race without time for adequate safety precautions. Therefore, if a safety-conscious project aligned with our values comes close to building AGI before we do, we commit to stop competing with it and start assisting it. We will work out the specifics case by case, but a typical trigger might be "a better-than-even chance of success within the next 2 years."

Technical Leadership

To effectively address the impact of AGI on society, OpenAI must be at the forefront of AI technical capabilities—policy and safety advocacy alone are not enough.

We believe that AI will have a widespread impact on society before AGI emerges. Therefore, we will strive to provide leadership in those areas relevant to our mission and professional capabilities.

Collaboration orientation

We will actively cooperate with other research institutions and policy institutions and strive to create a global cooperative community to jointly address the challenges posed by AGI.

We are committed to providing public goods that help society navigate the path to AGI. Today this includes publishing most of our AI research, but we expect that safety and security concerns will reduce our traditional publishing in the future, while increasing the importance of sharing research related to safety, policy, and standards.

Re: OpenAI Charter Sender: Elon Musk Recipient: Sam Altman Time: April 2, 2018 14:45 (Monday)

No problem.

AI Update (continued)

New participants:

Reid Hoffman, co-founder of LinkedIn. Hoffman left the OpenAI board in March 2023 because of his investment in OpenAI competitor Inflection AI.

Gabe Newell, game developer and OpenAI donor.

Subject: AI Update (continued) From: Shivon Zilis To: Elon Musk Cc: Sam Teller Time: April 23, 2018 01:49 (Monday)

Updated based on conversation with Altman. You tentatively plan to meet him on Tuesday.

Financing:

He reconfirmed that they will never do an ICO and will instead use an equity structure with a fixed cap on returns. It is a fairly unusual subsidiary financing structure, and he wants to explain it to you in detail. He plans to close the first round in 4-6 weeks (probably led mostly by Reid's money, possibly with some corporate investment).

Technical:

He said Dota 5v5 is performing better than expected. The rapid improvement of the Dota bots has some insiders worried that AGI may arrive sooner than previously thought. They expect to crack Montezuma's Revenge soon.

Time Allocation:

With your departure from the board and various other factors, I have shifted most of my time from OpenAI to Neuralink and Tesla. If you want me to put more time into OpenAI, please let me know. Sam and Greg asked whether I would join their informal advisory board (currently just Gabe Newell). Less potential for conflict, and it seems a better fit than a director seat? If you think it's inappropriate, tell me.

OpenAI Update

Subject: OpenAI Update Sender: Sam Altman Recipient: Elon Musk Time: December 17, 2018 15:42 (Monday)

Hi, Elon-

In the first quarter of next year, we plan to hold the last Dota tournament in a completely open game environment, where any professional team can participate to compete for high prizes. After that, we will announce the completion of the model-free RL phase, and some team members will use the model-based RL method to re-conquer Dota 1v1.

Also in the first quarter, we plan to release several robotics demos: solving a Rubik's Cube, a pen-spinning trick, and playing with a medicine ball. New tasks should be learned very quickly. At the end of the year, we'll try putting a hand on each of two arms and see what happens...

We're also making rapid progress on language. I hope next year we will be able to generate short stories and develop a good conversational bot.

We hope to develop models through unsupervised learning that can perform genuinely hard tasks, for example, classifying images without making mistakes no human would ever make; in my view, that requires the model to have some level of conceptual understanding.

We have also made good progress on multi-agent systems: multiple agents can now collaborate to build simple structures, play laser tag, and more.

At the same time, I am advancing a plan to migrate our computing resources from Google to Microsoft (supplementing our own data centers).

In addition, I'm happy to discuss financing (even on an aggressive growth plan, our current funds cover the next 2 years of development) and the ongoing iteration of our hardware plans. It's best not to discuss these two topics over email, though. Maybe we can chat in person next time you come to Pioneer?

Re: OpenAI Update Sender: Elon Musk Recipient: Sam Altman Time: December 17, 2018 15:47 (Monday)

No problem, we can meet in San Francisco on Wednesday night.

I think it should be reiterated

Subject: I think it should be reiterated (OpenAI Disclosure) From: Elon Musk To: Ilya Sutskever, Greg Brockman Cc: Sam Altman Time: December 26, 2018 12:07 (Wednesday)

Without a dramatic improvement in execution and resources, I put the probability of OpenAI staying relevant next to DeepMind/Google at 0%, not 1%. I really wish it were otherwise.

Even raising several hundred million dollars won't be enough. This needs billions of dollars per year immediately, or forget it.

Unfortunately, the future of humanity lies in <****>'s hands.

And they are doing a lot more than this.

OpenAI reminds me of Bezos and Blue Origin. They are far behind SpaceX and the gap is growing, but Bezos’ ego makes him ridiculously think otherwise! (Note: OpenAI disclosed this email twice, withholding this passage the first time.)

I really hope I am wrong.

Elon

2019 and beyond: OpenAI officially forms a for-profit entity - Musk sent a text message at 3 a.m.

OpenAI

New participants:

Sue Yoon, Google X robotics project lead. She resigned from the OpenAI board of directors in 2019.

Tasha McCauley, co-founder of Fellow Robots, who joined Ilya and others in ousting Altman in November 2023 and left the OpenAI board after the attempt failed.

Subject: OpenAI From: Sam Altman To: Elon Musk Cc: Sam Teller, Shivon Zilis Time: March 6, 2019 15:13 (Wednesday)

Elon—

This is a draft of the article we plan to publish next Monday. Is there anything that needs to be added or modified?

Key Points:

We created a "return-capped company" and raised a first round of funding led by Reid and Vinod Khosla. We have made clear to all investors that they should never expect a return. We named Greg chairman of the new entity and me CEO. We have shown this structure to potential next-round investors, and they seem very interested.

Finally, we are discussing a multi-billion-dollar investment, and I would love your advice when you have time. I'd be happy to come see you next time you're in the Bay Area.

Sam

Draft article

We formed OpenAI LP, a new "capped-return" company that allows us to rapidly scale investment in compute and talent while including checks and balances that hold us to our mission.

Our mission is to ensure that artificial general intelligence (AGI) benefits all of humanity, primarily by working to develop secure AGI and sharing the results with the world.

Because investment in computing power in this area is growing exponentially, we need to scale faster than we planned when we first started OpenAI. We expect to need to raise billions of dollars in the coming years to invest massively in cloud computing, attract and retain talented people, and build AI supercomputers.

As a non-profit organization, we are unable to raise funds on this scale. Although we considered moving to a for-profit model, we were concerned that it would take us away from our mission.

So we formed a new company, OpenAI LP, a hybrid for-profit and non-profit company—what we call a “return capped” company.

The fundamental idea of OpenAI LP is that investors and employees can earn a capped return if we achieve our mission, which allows us to attract investment and talent with startup-like equity. But returns beyond the cap (if we succeed, we expect to create far more value than investors or employees are entitled to) go to the OpenAI nonprofit.

Going forward (in this article and elsewhere), "OpenAI" will refer to OpenAI LP (which currently employs the majority of our staff), and the original entity will be called "OpenAI Nonprofit."

Mission First

We designed OpenAI LP to prioritize our overall mission—to ensure the creation and application of safe and beneficial AGI—over investor returns.

To minimize conflicts between interests and mission, OpenAI LP's primary fiduciary duty is to advance the goals of the OpenAI Charter, and the company is controlled by the OpenAI nonprofit's board of directors. All investors and employees sign agreements acknowledging that OpenAI LP's obligation to the Charter comes first, even if that means sacrificing some or all of their financial stake.

Our documents for employees and investors begin with this point: the general partner is the OpenAI nonprofit (whose formal name is "OpenAI Inc"); the limited partners are the investors and employees.

Only a minority of board members may hold a financial stake in the partnership. Moreover, only board members without such stakes can vote on decisions where limited partners' interests may conflict with the nonprofit's mission, including any decision about paying returns to investors and employees.

Corporate Structure

We also have a provision in our documents that states that the nonprofit retains control of the company. (OpenAI LP's formal name "OpenAI, L.P." is used in the filing)

As noted above, there is a cap on the financial returns for investors and employees (the cap is negotiated in advance with each limited partner). Returns above the cap go to the nonprofit. Our goal is to ensure that, if we succeed, most of the value we create flows back to the world, so we see this as an important first step. Returns for first-round investors are capped at 100 times the amount invested, and we expect the cap multiple to be lower for future rounds.
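Note: The cap mechanics described above fit in a few lines of Python. A minimal sketch, assuming the 100x first-round cap from the draft; the function name and the dollar figures in the example are hypothetical.

    # Illustrative split of gross proceeds under a capped-return structure.
    def split_proceeds(invested: float, cap_multiple: float, gross_return: float):
        cap = invested * cap_multiple          # the most the investor can ever receive
        to_investor = min(gross_return, cap)
        to_nonprofit = max(gross_return - cap, 0.0)
        return to_investor, to_nonprofit

    # Example: a $10M first-round stake that eventually returns $5B gross.
    investor, nonprofit = split_proceeds(10e6, 100, 5e9)
    print(f"Investor receives ${investor:,.0f}")    # $1,000,000,000 (the 100x cap)
    print(f"Nonprofit receives ${nonprofit:,.0f}")  # $4,000,000,000 (everything above the cap)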

What OpenAI does

Our day-to-day work remains the same. Today, we believe we create the most value by focusing on developing new AI technologies rather than commercial products. Our structure gives us flexibility for long-term profitability down the road, but we hope to explore that only after achieving safe AGI (in the meantime, though, we are open to revenue streams that don't compromise our focus, such as patent licensing, etc.).

OpenAI LP currently has about 100 employees organized around three main areas: capabilities (advancing what AI systems can do), safety (ensuring those systems are aligned with human values), and policy (ensuring appropriate governance for such systems). The OpenAI nonprofit oversees OpenAI LP's educational programs such as Scholars and Fellows and hosts policy initiatives. OpenAI LP is continuing, at an accelerated pace, the development roadmap begun by the OpenAI nonprofit, which has produced breakthrough results in reinforcement learning, robotics, and language.

Safety

We worry that AGI could cause rapid change, whether through machines pursuing goals misspecified by their operators, bad actors subverting deployed systems, or an economy that grows out of control without improving human lives. As stated in our Charter, to avoid a race that makes it hard to prioritize safety, we are willing to merge with a values-aligned organization, even if that means reduced or even zero returns for investors.

Participants

The OpenAI nonprofit's board of directors includes OpenAI LP employees Greg Brockman (Chairman and CTO), Ilya Sutskever (Chief Scientist), and Sam Altman (CEO), as well as non-employee members Adam D'Angelo, Holden Karnofsky, Reid Hoffman, Sue Yoon, and Tasha McCauley.

Elon Musk resigned from the OpenAI nonprofit board of directors in February 2018 and is not affiliated with OpenAI LP.

Our investors include Reid Hoffman and Khosla Ventures.

We are on a difficult and uncertain path, but we have designed our architecture to help us positively impact the world as we successfully create AGI. If you want to help us achieve this mission, we're hiring :)!

Bloomberg: The AI ​​research organization co-founded by Elon Musk forms a for-profit unit

Subject: Bloomberg: The AI research organization co-founded by Elon Musk forms a for-profit unit From: Elon Musk To: Sam Altman Time: March 11, 2019 15:04 (Monday)

Please make it clear that I have no financial relationship with the for-profit arm of OpenAI.

——

AI research organization co-founded by Elon Musk forms for-profit arm

Bloomberg

San Francisco-based AI research organization OpenAI, co-founded by Elon Musk and several well-known Silicon Valley entrepreneurs, is forming a for-profit arm to raise more funds. [Click to read the full story.]

Share via Apple News

Re: Bloomberg: The AI research organization co-founded by Elon Musk forms a for-profit unit From: Sam Altman To: Elon Musk Time: March 11, 2019 15:11 (Monday)

Working on it.

SMS communication record

SMS communication record (OpenAI disclosure)

Participants: Sam Altman, Shivon Zilis

Time: October 23, 2022 (Sunday)

Sam Altman (8:07):

This is from Elon. Any suggestions?

[Pictured: a string of messages sent at 3:06 a.m.: "I'm Elon" / "New Austin number" / "I'm disturbed to see OpenAI valued at $20 billion. In fact, I provided almost all of the seed, the Series A, and most of the Series B." / [News report link] / "This is deception and fraud"]

You offered him equity before, but he refused, right?

I don't know what he means by Series A and Series B?

Shivon Zilis:

It's not clear what the problem is. Maybe it's that he never got equity, or that he still feels this is the organization he originally funded (and named OpenAI), or maybe they simply disagreed about direction.

Equity was actually offered to him at the time, but he declined. I don't remember exactly how it was settled in the end. If I remember correctly, you asked him directly at some point?

You can call if you want more background information, but I recommend not texting back right away

Sam Altman (21:13):

Can you take a quick look at it if you have some time?

I understand it feels really bad. When we created the capped-profit entity we offered you equity, and you didn't want it then, but we'd still be happy to give it to you now if you want it.

Given the amount of capital we need while staying true to "getting AGI to humanity," we found no option other than the profit cap, and the plan also lets the board cancel all equity if that's ever required for safety.

By the way, I personally own no equity. None. I'm doing my best to navigate this tricky situation and find the right balance. Always happy to talk about how we could do better, and to show you our recent progress.

Sam Altman (22:50):

< p>(I sent)

Got a reply: "I'll be in San Francisco most of the week dealing with the Twitter acquisition. Let's talk Tuesday or Wednesday."

Shivon Zilis:

Sorry, I fell asleep! Sounds good.

Sources:
The first batch of emails released by OpenAI: https://openai.com/index/openai-elon-musk/
Emails released with Musk's lawsuit filing: https://www.courtlistener.com/docket/69013420/musk-v-altman/
The lawsuit emails, edited version on LessWrong: https://www.lesswrong.com/posts/5jjk4CDnj9tA7ugxr/
The second batch of emails and text messages released by OpenAI: https://openai.com/index/elon-musk-wanted-an-openai-for-profit/
