Ilya Sutskever leaves OpenAI, launches startup for superintelligence

Ilya Sutskever, co-founder of OpenAI, has left the company to form a new startup, Safe Superintelligence (SSI), which aims to develop artificial intelligence that surpasses human capabilities. His departure followed his role in a controversial board move in late 2023 that briefly ousted CEO Sam Altman, an action he later said he regretted.
SSI’s goal is to achieve superintelligence, a hypothesized form of AI that would outperform humans at virtually every task. The concept builds on artificial general intelligence (AGI), AI designed to match human-level creativity and problem-solving. While many companies, including OpenAI, are focused on AGI, Sutskever says his approach involves identifying a “different mountain to climb” that has so far shown promising results.
SSI recently raised $2 billion in funding at a valuation of $30 billion, a sharp jump from the $5 billion valuation it commanded in September 2024. Analysts find this level of investor interest striking, given that the company plans to release no commercial products during its research phase and it remains uncertain whether SSI will reach its objectives ahead of competitors.
James Cham, a partner at venture firm Bloomberg Beta, commented on SSI’s high-risk approach: “Everyone is curious about exactly what he’s pushing and exactly what the insight is.” Reporting on SSI emphasizes that despite the lack of immediate returns, Sutskever’s track record in AI, particularly his research at OpenAI that underpinned ChatGPT, has enabled him to attract substantial investment.
Sutskever reportedly leads a small team of about 20 employees split between Silicon Valley and Tel Aviv, with a culture that discourages employees from discussing their work on social media. Job candidates are asked to leave their phones in Faraday cages, which block signal transmission, during interviews. The team also avoids well-known industry names; Sutskever prefers to mentor promising newcomers rather than hire established figures who might leave for other opportunities.
During a recent appearance at the NeurIPS AI conference, Sutskever teased the nature of the superintelligence he hopes to build, suggesting that such systems could be “unpredictable, self-aware and may even want rights for themselves.” He expressed hope that if AIs develop a desire to coexist with humans, that would not be a bad outcome, echoing his earlier statement at OpenAI: “Our goal is to make a mankind-loving AGI.”
Featured image credit: Steve Johnson/Unsplash