Elon Musk, Peter Thiel and other Silicon Valley heavyweights are set to pour $1bn into a new attempt to protect humanity from artificial intelligence, underscoring the unease with which many leading technologists view recent developments in AI.

The new company, OpenAI, will be a non-profit that conducts research, with the aim that “AI should be an extension of individual human wills”.

Concerns about the future powers of computers have been increasing, particularly as recent breakthroughs in “deep learning” have enabled computers to recognise patterns and understand speech far more effectively.

Fears about what robots with artificial intelligence could mean for warfare have also been a focus of many in Silicon Valley, and a petition against autonomous weapons gained support from scientists including Stephen Hawking this year.

OpenAI says its aim is to “advance digital intelligence in the way that is most likely to benefit humanity as a whole” through its research.

The non-profit is co-chaired by Mr Musk, who heads Tesla and SpaceX, and Sam Altman, president of Y Combinator, a start-up incubator. Mr Musk has been outspoken about his concerns over artificial intelligence, and tweeted last year that it was “potentially more dangerous than nukes”.

Funding will come from companies such as Amazon Web Services, Infosys and YC Research, and from individuals including investor Mr Thiel, Mr Musk and Mr Altman. A total of $1bn has been committed, although OpenAI says it will spend “only a fraction” of that over the next few years.

The initiative joins a group of other AI-focused research projects, including Oxford university’s Future of Humanity Institute, and the Machine Intelligence Research Institute in Berkeley.

OpenAI will focus on research that aims to ensure that computers with artificial intelligence do not adversely affect humans.

“It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly,” wrote founding members of OpenAI in a blog post.

The post also emphasised that OpenAI’s research and patents would be published and shared.

“It’s hard to predict when human-level AI might come within reach,” the post stated. “When it does, it’ll be important to have a leading research institution that can prioritise a good outcome for all over its own self-interest.”

The post was written by OpenAI’s research director, Ilya Sutskever, previously a research scientist at Google, and its chief technology officer Greg Brockman, formerly chief technology officer at Stripe.

Jerry Kaplan, a computer scientist and author of the book Humans Need Not Apply, said the initiative stood out from other research efforts. “What is different here is that these are operationally focused entrepreneurs, who rather than studying the technologies developed by others, are looking to see whether AI technologies can be developed with these broader societal goals in mind.”

Copyright The Financial Times Limited 2021. All rights reserved.