Sam Altman, president of the seed-stage accelerator Y Combinator, has a thing or two to say about the development of superhuman machine intelligence (SMI).
In a blog post this week, he wrote: “The US government, and all other governments, should regulate the development of SMI. In an ideal world, regulation would slow down the bad guys and speed up the good guys—it seems like what happens with the first SMI to be developed will be very important.”
While Altman has in the past found technology to be ‘often over-regulated’, he stresses the need for regulation in this particular area and offers broad guidelines on how to go about it.
Noting that the first serious dangers from SMI will most likely arise when humans and SMI work together, Altman says, “Regulation should address both the case of malevolent humans intentionally misusing machine intelligence to, for example, wreak havoc on worldwide financial markets or air traffic control systems, and the “accident” case of SMI being developed and then acting unpredictably.”
Citing breaches of trust that US intelligence agencies, and the government in general, have been accused of and sometimes found guilty of over the last couple of years, he believes a separate body must convene to bring about any action.
Altman pointed out the need for a framework to monitor companies and groups capable of developing SMI. He also said that operating rules must be laid out during the development stage so that an SMI cannot cause any direct or indirect harm to humanity, referencing Asimov’s laws of robotics.
“We currently don’t know how to implement any of this, so here too, we need significant technical research and development that we should start now,” he wrote.
Altman believes the topic hasn’t yet received the attention it deserves: “Part of the reason is that many people are almost proud of how strongly they believe that the algorithms in their neurons will never be replicated in silicon, and so they don’t believe it’s a potential threat. Another part of it is that figuring out what to do about it is just very hard, and the more one thinks about it the less possible it seems. And another part is that superhuman machine intelligence (SMI) is probably still decades away, and we have very pressing problems now.”
His concern joins a string of similar warnings raised by the likes of Elon Musk and Bill Gates.
(Image credit: John Williams, via Flickr)