Tesla and SpaceX CEO Elon Musk has filed a lawsuit against artificial intelligence firm OpenAI and its CEO Sam Altman, alleging they violated the company’s original non-profit mission.
Musk co-founded OpenAI in 2015 with the goal of developing safe artificial general intelligence (AGI) and keeping its research broadly accessible. However, in a lawsuit filed on Thursday, Musk claims OpenAI and Altman have abandoned that vision in pursuit of commercial partnerships.
Specifically, the suit takes issue with OpenAI’s partnership with Microsoft, which has invested roughly $13 billion in the company. It argues this arrangement undermines OpenAI’s founding pledge to share its research publicly. Musk also cites OpenAI’s growing secrecy around newer generative AI models such as GPT-4, which carry significant commercial value.
The complaint states OpenAI’s for-profit arm has transformed it into “a closed-source de facto subsidiary” of Microsoft, focused on profiting from AI rather than ensuring its safe development. It seeks a jury trial, an order compelling OpenAI to make its research publicly available, and the return of profits by Altman and co-founder Greg Brockman.
OpenAI did not respond to a request for comment regarding the lawsuit. Musk stepped down from OpenAI’s board in 2018, citing potential conflicts with Tesla’s own AI work, and later launched his own AI startup, xAI, in 2023. The suit argues he retains an interest in the matter because he helped conceive and fund OpenAI’s founding mission.
It also recalls controversies such as Altman’s brief ouster last year amid reported board concerns over commercialization risks, before his swift reinstatement. Musk claims OpenAI has ultimately aligned itself with those seeking to maximize AI’s financial returns rather than address the existential risks he continues to warn about.
Experts say the lawsuit highlights unresolved debates over how to balance open research, business interests, and the safeguarding of new technologies. How courts treat claims that a nonprofit has strayed from its mission could affect partnerships at other AI labs. The case also reflects Musk’s ongoing skepticism of granting large tech firms outsized influence over critical technologies.
The suit raises challenging questions about sustaining independent, responsible development of powerful technologies like AGI. Its outcome may shape discussions on regulating next-generation AI and on ensuring its priorities reflect broader societal needs.