Nick Bostrom speaks to The New Yorker on whether artificial intelligence will bring utopia or destruction
In an article in The New Yorker this month, Nick Bostrom discusses artificial intelligence and his theory about its potential effects on humanity. Bostrom’s book, ‘Superintelligence: Paths, Dangers, Strategies’, argues that if A.I. is ever truly realised, it could pose a danger to humanity greater than that of any other technological threat, including nuclear weapons. Bostrom’s speculative concern is that an A.I. could gain the ability to improve itself and so exceed the intellectual potential of the human brain, a scenario he calls the “intelligence explosion”. If that happens, humanity may have engineered its own extinction.
Many prominent researchers regard Bostrom’s views as implausible, or as a distraction from the moral dilemmas and near-term benefits of today’s technology, especially as true artificial intelligence seems so far removed from anything we have now. Yet with many recent technologies displaying abilities that resemble intelligent reasoning, the book has clearly struck a chord: it became a New York Times bestseller.
He argues that if artificial intelligence could ever be achieved, it would be an event of unparalleled consequence. Are we, then, morally obligated as a species to entertain the possibility of an “intelligence explosion”?
Please find a link to the full article here.