Y Combinator’s Sam Altman Jumps on AI Regulation Bandwagon

by Dataconomy News Desk
March 6, 2015
in Artificial Intelligence, News

Sam Altman, the President of the seed-stage accelerator Y Combinator, has a thing or two to say about the development of superhuman machine intelligence (SMI).

In a blog post this week, he wrote: “The US government, and all other governments, should regulate the development of SMI. In an ideal world, regulation would slow down the bad guys and speed up the good guys—it seems like what happens with the first SMI to be developed will be very important.”

Although Altman has in the past found technology to be ‘often over-regulated’, he stresses the need for regulation in this particular area of technology and offers broad guidelines on how to go about it.

Considering that the first serious dangers from SMI will mostly arise when humans and SMI work together, Altman says, “Regulation should address both the case of malevolent humans intentionally misusing machine intelligence to, for example, wreak havoc on worldwide financial markets or air traffic control systems, and the “accident” case of SMI being developed and then acting unpredictably.”

Citing the breaches of trust that US intelligence agencies, and the government in general, have been accused of and sometimes found guilty of over the last couple of years, he believes a separate body must convene to bring about any action.

Altman pointed out the need for a framework to monitor companies and groups capable of developing SMI. He also said that operating rules must be laid out during the development stage so that an SMI cannot cause any direct or indirect harm to humanity, referencing Asimov’s laws of robotics.

“We currently don’t know how to implement any of this, so here too, we need significant technical research and development that we should start now,” he wrote.

Altman believes that the topic hasn’t yet gained the importance it deserves: “Part of the reason is that many people are almost proud of how strongly they believe that the algorithms in their neurons will never be replicated in silicon, and so they don’t believe it’s a potential threat. Another part of it is that figuring out what to do about it is just very hard, and the more one thinks about it the less possible it seems.  And another part is that superhuman machine intelligence (SMI) is probably still decades away, and we have very pressing problems now.”

His concern adds to a string of similar warnings raised by the likes of Elon Musk and Bill Gates.


(Image credit: John Williams, via Flickr)
