Luminaries like Stephen Hawking and Elon Musk have expressed deep concern about 'the singularity', a hypothetical future moment when artificial intelligence becomes able to improve itself. The idea is that from that moment on, these intelligences would evolve over the course of minutes rather than millennia and quickly surpass us. As Stephen Hawking put it:
“If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here—we’ll leave the lights on’? Probably not—but this is more or less what is happening with AI.”
What’s a band of intelligent apes to do? I say ask Eric Ries. His work on Lean Startup suggests a test-driven approach to innovation: first, state your assumptions aloud in explicit language; then work out the most expedient experiments to prove or disprove those assumptions (if they need proving).
Here’s a quick-and-dirty take on the implicit assumptions behind fear of the Singularity:
| Assumption | How to test it |
| --- | --- |
| If we continue to develop our computing, an intelligence will arise that can improve itself. | I can’t see a way to test this, since by definition it involves the unexpected behavior of technology that doesn’t exist. That said, it may not need proving: sooner or later, it seems likely. |
| The artificial intelligence will generally want the same things we want: dominion over the earth, to increase and multiply. | While the consequences of this are big, I’m less sure it’s a foregone conclusion. Maybe the AIs would be content to do their own thing in the ether. Or maybe their utmost desire would be to launch themselves into the larger solar system or galaxy. Why would the earth be so interesting to them? How would you prove or disprove this? I think Lean Startup would say that as you start to see the emergence of independent intelligence and behavior, you design ways to test the volition of those intelligences. What do they tend to want? What affects that? Implicit in this testing is the proposition that the intelligence won’t emerge in one big bang, and that it will instead be observable in some kind of stepwise fashion. I don’t know enough about the field to even speculate on that. |
| The artificial intelligence will see humanity as an impediment to what it wants. | No idea about this, and its importance is obviously predicated on the assumption above. I’d probably try to test it in the same fashion, if that option’s available. |
| The artificial intelligence will act against humanity to reduce or eliminate it. | Ditto above. |
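To make the table above a little more concrete, here's a minimal sketch in Python of what tracking these beliefs as explicit, falsifiable hypotheses might look like. Everything here is hypothetical: the class, the field names, and the recorded observation are illustrative, not an actual Lean Startup tool.

```python
from dataclasses import dataclass, field

@dataclass
class Assumption:
    """A belief stated explicitly so it can be tested, Lean Startup style."""
    statement: str
    testable: bool  # can we design an experiment for this at all?
    evidence: list = field(default_factory=list)  # (observation, supports) pairs

    def record(self, observation: str, supports: bool) -> None:
        """Log an experimental observation for or against the assumption."""
        self.evidence.append((observation, supports))

    def verdict(self) -> str:
        """Summarize where the assumption stands given evidence so far."""
        if not self.testable:
            return "untestable"
        if not self.evidence:
            return "untested"
        supporting = sum(1 for _, s in self.evidence if s)
        return "supported" if supporting > len(self.evidence) / 2 else "undermined"

# The four assumptions from the table:
assumptions = [
    Assumption("Self-improving AI will eventually arise", testable=False),
    Assumption("AIs will want dominion over the earth", testable=True),
    Assumption("AIs will see humanity as an impediment", testable=True),
    Assumption("AIs will act against humanity", testable=True),
]

# A hypothetical observation, assuming intelligence emerges stepwise
# and we can probe early systems' volition:
assumptions[1].record("early agent showed no drive to acquire resources", supports=False)

for a in assumptions:
    print(f"{a.statement}: {a.verdict()}")
```

The point of the sketch is only the discipline it enforces: each fear becomes a statement you either can or can't design an experiment for, and the ones you can't test get flagged rather than silently driving the conclusion.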
What do you think? Is it workable? Or should we all head to the bar?
Elon Musk: Brian Solis via Wikimedia Commons
Stephen Hawking: Doug Wheller via Wikimedia Commons
Robot: Humanrobo via Wikimedia Commons