There have been some great threads going around inspired by the book Superintelligence: Paths, Dangers, Strategies, including Elon Musk hoping we’re not just a biological bootloader. Via Automattician Matt Mazur I came across this fantastic review of the book on Amazon, which gives a great counter-balance and lots of additional information you wouldn’t get from the book itself, and also summarizes it quite well.
That was a fascinating review. I would say that I’d like to read the book, but, realistically, that isn’t going to happen. The reading list is too long already! Anyway, I’m afraid that technological progress cannot be stopped. Look at the atom bomb, for example. If the Americans had not built it, some other country would have. And if the United States had not used it once, I believe it would have been used eventually, with more disastrous effects, at a time when the technology was possessed by more than one country. Fortunately, humans have had the wisdom (if only barely) to avoid using nukes in anger since 1945.
Unlike atomic weapons, AI is a type of technology that, once it reaches a critical point, will be completely out of our control. It will make its own decisions. Can it be stopped now, before it’s too late? I doubt we could come to a worldwide agreement to stop development of artificial intelligence before it rears its ugly head and forces us to do so.
Can I at least hope that the Terminators look like Summer Glau?
AI is quite scary, considering the average human being will shit on his neighbour or the nearest animal for profit. And we are developing ‘intelligence’. We’ll need to develop some personal intelligence ourselves first!!
Before you even get a little weirded out and worried about AI coming in and attacking us in our sleep, I have been increasingly nervous about the stupid machines getting only a little smarter and what that implies. CGP Grey puts the argument together pretty succinctly in https://www.youtube.com/watch?v=7Pq-S557XQU&list=UU2C_jShtL725hvbm1arSV9w
There is a lot of other experimental software out there that creates smart software, and it is only getting stronger. It may write software slowly, but it eventually figures it out.
Anyways, my argument is that AI is a long way off. It is scary…but it’s far away. What we should be worried about is the stupid intelligence getting just good enough to beat most of us.