Dataconomy

Has Ilya Sutskever cracked the superintelligence code?

SSI recently raised $2 billion in funding at a valuation of $30 billion, a sharp increase from its $5 billion valuation in September 2024.

by Kerem Gülen
March 11, 2025
in Artificial Intelligence, News

Ilya Sutskever, co-founder of OpenAI, has left the company to form a new startup, Safe Superintelligence (SSI), which aims to develop artificial intelligence that surpasses human capabilities. His departure followed his role in the controversial late-2023 board episode that temporarily ousted CEO Sam Altman, a move he later said he regretted.

Ilya Sutskever leaves OpenAI, launches startup for superintelligence

SSI’s goal is to achieve superintelligence, a type of AI theorized to perform tasks more effectively than humans. This concept builds upon the development of artificial general intelligence (AGI), which is designed to exhibit human-like creativity and problem-solving abilities. While many companies, including OpenAI, are focused on AGI, Sutskever claims his approach involves identifying a “different mountain to climb” that has so far shown promising results.

SSI recently raised $2 billion in funding at a valuation of $30 billion, a sharp increase from its $5 billion valuation in September 2024. Analysts note that this level of investor interest is remarkable, especially since the company will release no commercial products during its research phase and it remains uncertain whether SSI will achieve its objectives ahead of competitors.

James Cham, a partner at venture firm Bloomberg Beta, commented on SSI’s high-risk approach, stating, “Everyone is curious about exactly what he’s pushing and exactly what the insight is.” The report emphasizes that despite the lack of immediate returns, Sutskever’s previous accomplishments in AI, particularly with ChatGPT, have enabled him to attract substantial investment.

Sutskever reportedly leads a small team of about 20 employees working from offices in Silicon Valley and Tel Aviv, with a culture that discourages any disclosure on social media. Job candidates are instructed to leave their phones in Faraday cages, which block signal transmission, during interviews. The team also includes no well-known industry names; rather than hiring established figures who might leave for other opportunities, Sutskever focuses on mentoring new talent.

During a recent appearance at the NeurIPS AI conference, Sutskever teased the nature of the superintelligence he seeks to develop, suggesting that it could be “unpredictable, self-aware and may even want rights for themselves.” He expressed hope that if AIs develop a desire for coexistence, it would not be a negative outcome, echoing his earlier remark at OpenAI: “Our goal is to make a mankind-loving AGI.”


Featured image credit: Steve Johnson/Unsplash

Tags: Ilya Sutskever, OpenAI, Safe Superintelligence, superintelligence

COPYRIGHT © DATACONOMY MEDIA GMBH, ALL RIGHTS RESERVED.
