AI: Our Immortality Or Extinction
Part 1: The AI Revolution: The Road to Superintelligence
Part 2: The AI Revolution: Our Immortality or Extinction
Written by Tim Urban, based in large part on the work of Nick Bostrom. Urban does a good job explaining the concepts in an engaging way that should be accessible to anyone. (Bostrom's work is amazing, but his book Superintelligence adopts a dry style.)
Note: these were written back in 2015, so we're already five years closer to those estimates for when Artificial General Intelligence (AGI) might arrive. 2045? 2075? Nobody's really sure. More recently, we've seen specialized AI (called Artificial Narrow Intelligence, or ANI) defeat expert Go players, and devices using Natural Language Processing (like Alexa) have become common. These are all stepping stones along a path of unknown length.
The arrival time, though, doesn't matter nearly so much as what could happen afterward, an outcome which depends entirely on whether we've prepared.
"Our immortality or extinction": those are the likely outcomes for our species. This is not exaggeration or hyperbole; it's to be taken literally. Tim Urban stresses, repeatedly, that these are the conclusions reached by smart people with solid reputations after years of thinking about the topic. Our intuition is to reject such an idea as ridiculous. The purpose of Urban's two-part explication is to lead you, step by step, through the reasoning to understand why it isn't ridiculous.
"So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong. If a superior alien civilization sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here – we'll leave the lights on'? Probably not – but this is more or less what is happening with AI." -- Stephen Hawking