Safe superintelligence has become a focal point in the AI community, and with the launch of Safe Superintelligence Inc. (SSI) by Ilya Sutskever, the conversation has taken on new dimensions.
Sutskever, who co-founded OpenAI and served as its chief scientist, recently left the company to start this new venture, underscoring his dedication to developing AI that advances in capability while maintaining stringent safety protocols. The move is significant given Sutskever’s history of contributions to AI, particularly in AI safety and research on superintelligent systems.
The company described its approach to developing AI in a post published on X:
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We've started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence…
— SSI Inc. (@ssi) June 19, 2024
Sutskever’s journey to Safe Superintelligence
Ilya Sutskever’s journey in the AI industry has been marked by remarkable contributions and significant milestones. He co-founded OpenAI, a leading research organization dedicated to ensuring that artificial general intelligence (AGI) benefits all of humanity. During his tenure at OpenAI, Sutskever played a crucial role in advancing AI safety and the development of superintelligent systems. His work alongside Jan Leike on OpenAI’s Superalignment team highlighted his commitment to addressing the challenges posed by increasingly powerful AI.
Sutskever’s departure from OpenAI came after a dramatic falling-out with the company’s leadership over its approach to AI safety. This pivotal moment led to the creation of Safe Superintelligence Inc. (SSI), reflecting Sutskever’s unwavering dedication to the cause of AI safety. The new company, co-founded with Daniel Gross and Daniel Levy, aims to tackle AI safety and capabilities in tandem, treating them as technical problems that require revolutionary engineering and scientific breakthroughs.
Establishing SSI
Safe Superintelligence Inc., or SSI, is a bold move in the AI domain. The company’s mission is singular and focused: to develop a safe superintelligence. Sutskever, along with his co-founders Daniel Gross and Daniel Levy, has articulated a clear vision for SSI. The company aims to avoid the distractions of management overhead and product cycles, ensuring that safety, security, and progress are insulated from short-term commercial pressures.
SSI’s founders bring a wealth of experience and expertise to the table. Daniel Gross, former AI lead at Apple and a startup entrepreneur and investor, adds significant value to the team. Daniel Levy, known for his work on large AI models at OpenAI, complements the team with his technical prowess. Together, they have set up offices in Palo Alto, California, and Tel Aviv, Israel, reflecting a commitment to attracting top-tier technical talent from around the globe.
Daniel Gross: at SSI’s helm
Daniel Gross, one of the co-founders of SSI, has been instrumental in shaping the company’s direction and ethos. His background in AI, particularly his tenure at Apple and his experience as a startup entrepreneur, positions him as a key player in SSI’s journey. Gross’s insights and strategic vision are crucial as SSI navigates the complex landscape of AI safety and capability development.
Gross has expressed confidence in SSI’s ability to raise the necessary capital, emphasizing that financial constraints will not impede the company’s mission. That assurance is significant given the high costs of advanced AI research and development. Gross’s role at SSI underscores the collaborative, forward-thinking approach the company embodies.
Daniel Levy: SSI’s hope
Another integral member of SSI’s founding team is Daniel Levy. Levy’s reputation for training large AI models at OpenAI has established him as a leading figure in AI research. His technical expertise and deep understanding of AI systems are invaluable assets to SSI. Levy’s involvement highlights the company’s commitment to pushing the boundaries of what is possible in AI safety and capability.
Levy’s contributions to SSI go beyond his technical skills. His experience working alongside Sutskever at OpenAI should ease the transition as they embark on this new venture. Levy’s role at SSI is a testament to the collaborative spirit driving the company’s mission to develop a safe superintelligence.
Focusing on the future
Safe Superintelligence Inc. (SSI) is poised to make significant strides in the AI industry. The company’s clear focus on addressing AI safety and capabilities simultaneously sets it apart from other entities in the sector. By approaching these challenges as technical problems that can be solved through revolutionary engineering and scientific breakthroughs, SSI aims to lead the charge in creating a safe superintelligence.
The company’s strategic decision to avoid the distractions of management overhead and product cycles keeps its efforts concentrated on its core mission. SSI’s founders, Ilya Sutskever, Daniel Gross, and Daniel Levy, bring a unique blend of expertise, experience, and vision to the company, and their combined efforts are expected to yield significant advances in AI safety.
The road ahead
As SSI embarks on its journey, the AI community will be watching closely. The company’s approach to AI safety and capability development reflects a deep understanding of the complexities and challenges involved. By focusing on safe superintelligence, SSI aims to contribute to the responsible advancement of AI technology.
Ilya Sutskever’s decision to launch SSI marks a new chapter in his illustrious career. His commitment to AI safety, as demonstrated through his work at OpenAI and now at SSI, underscores the importance of addressing the risks associated with superintelligent systems. With a strong team and a clear mission, SSI is well-positioned to make a lasting impact in the field of AI safety.
Featured image credit: SSI Inc./X