Superintelligence: a Book, a Hypothesis, a Warning
Having earlier dismissed Artificial Intelligence as a bogeyman, I confess to being deeply frightened by the book Superintelligence: Paths, Dangers, Strategies (2014).
The book’s author is Nick Bostrom, director of the Future of Humanity Institute and of the Strategic Artificial Intelligence Research Centre at the University of Oxford. You can view his academic creds on Wikipedia. There’s also an excellent profile of him, highly recommended, in The Guardian: Guardian profile of Nick Bostrom.
If you’ve heard much about Nick Bostrom or the Future of Humanity Institute, what follows may be a rehash. But I’ll plow ahead, for whatever it may be worth, even though the book is four years old.
In its first paragraph, The Guardian puts the scope of Bostrom’s concerns this way: “Notably: what exactly are the ‘existential risks’ that threaten the future of our species; how do we measure them; and what can we do to prevent them? Or to put it another way: in a world of multiple fears, what precisely should we be most terrified of?”
The Guardian’s piece identifies Bostrom’s key themes and is so informative (down to telling nuances such as Bostrom’s finicky diet and germ phobia) that I have little of substance to add about the man himself. What follows, instead, is my take on the most salient messages of his signature work, Superintelligence.