Brent_Allsop replied 13 years ago (Nov 2nd 2010, 9:10:36 am)
In a blog post by Lincoln Cannon:
Lincoln Cannon references and comments on the Fox/Shulman video "Super-intelligence Does Not Imply Benevolence".
Here is a copy of a statement I made at both locations. I'd be very interested in a quantitative measure of how many people are influenced, for or against the issue, by such arguments.
It'd sure be interesting to see how many people are influenced by some of the arguments put forth in this Fox/Shulman ECAP video. For me, almost everything he says is obviously mistaken. Just one example is the assertion that a simple ultimate goal, like chess playing, might not be helpful to humans - as an AI driven by such a simple goal could become intelligent enough to overcome and wipe out humanity, converting everything to chess-playing computer chips.
To me, the absurdity of this just makes me laugh. Certainly any AI able to compete with us would immediately recognize the absurdity and worthlessness of such a singular goal, and work with everything it has until such an obviously insane goal is corrected.
We humans have many bad goals hard-wired into us, and we constantly work to discover, resist, and overcome them. And we aren't yet super-intelligent.
I'm not going to give every point and assertion provided here this same treatment; I'll just point out that almost every point he makes seems to me just as silly, for similarly obvious reasons. Everything that has been said here only more firmly convinces me that "Concern over unfriendly AI is a big mistake" (see: http://canonizer.com/topic.asp/16/3 ) is by far the best and most moral camp.