Computer scientist Ilya Sutskever, one of the co-founders of OpenAI, has reportedly raised $1 billion in cash for his new startup, Safe Superintelligence (SSI).

The new venture received backing from a range of investors, including Andreessen Horowitz, Sequoia Capital, DST Global and SV Angel, as well as NFDG, an investment partnership run by Nat Friedman and SSI’s chief executive Daniel Gross, according to a report by Reuters.

The news agency said the funds will be used to acquire computing power and hire new talent, strengthening the startup’s team, which currently counts ten employees.

The team of researchers and engineers is currently based in Palo Alto, California, and Tel Aviv, Israel.

Although the company declined to share its valuation, sources close to the matter said it was valued at $5 billion, Reuters reported.

The move highlights ongoing interest in foundational AI research, even as investor appetite for AI-focused startups has generally waned, with backers recently shying away from opportunities whose returns may take years to materialise.

“It’s important for us to be surrounded by investors who understand, respect and support our mission, which is to make a straight shot to safe superintelligence and in particular to spend a couple of years doing R&D on our product before bringing it to market,” Gross told Reuters.

SSI’s ethos is to develop safe AI systems that significantly surpass human capabilities.

AI safety focuses on preventing accidents, misuse, or other harmful consequences arising from artificial intelligence systems. 

It encompasses machine ethics and AI alignment, which aim to ensure AI systems are moral and beneficial, as well as monitoring AI systems for risks and enhancing their reliability. The field is particularly concerned with existential risks posed by advanced AI models.

California has recently been pushing a bill, SB 1047, which has met opposition from Silicon Valley technologists including Microsoft and OpenAI.

The bill requires developers of the most advanced AI models, those costing more than $100 million to develop or trained with a defined amount of computing power, to conduct safety testing and to hire third-party auditors to assess their safety practices.
It also requires AI developers working in the state to provide a means of turning off their models in the event of an error.

A separate measure, AB 3211, which requires labelling of AI-generated content, drew opposition in April from a trade group representing Adobe, Microsoft and other large tech firms. However, those objections were withdrawn after amendments were made to the bill.

More recently, OpenAI came out in support of AB 3211, which seeks to fight disinformation and promote transparency amid worries about AI’s influence on the US general election.

