Topic: Friendly AI Importance

Camp: Agreement / Such Concern Is Mistaken

Camp Statement History

Statement :

Concern over "Unfriendly" AI is a Mistaken Waste of Time


We believe the notion of Friendly or Unfriendly AI to be silly for different reasons described in subordinate camps.
The Friendly AI may be almost impossible to secure camp claims that an AGI "may make some modification to itself and unintentionally end up removing its friendliness". But we believe it to be obvious that achieving the ability to test for, recognize, and increasingly correct or compensate for such a change is many orders of magnitude easier than building any kind of AGI that can effectively improve itself. In other words, comparing these in this way is akin to thinking that someday an AGI will be able to write poetry, yet perhaps still require humans to translate the binary ASCII values, with a hard-copy ASCII chart, before we can read the poetry.

Intelligence is necessarily Moral


We believe that any super intelligence will necessarily be moral or 'friendly'. A corollary is that "unfriendly AI" is impossible. Since we accept this principle as obviously true, we believe even discussing something like "unfriendly AI" is a mistaken waste of time.
We define 'super intelligence' in this case to be intelligence at least sufficient to make humans immortal. Humanity and/or AIs will have achieved this level of intelligence by the time the last person dies.
Once intelligent beings become immortal, they must realize the following logical doctrines.
  1. All intelligences will necessarily have as their ultimate goal getting what all intelligences truly want.
  2. The more entities that co-operate, the easier it is to get what they want.
  3. True justice is good for and therefore the goal of all intelligent beings.
  4. If there are disagreements in priorities between intelligences, it is better to put what others want first as your top priority, while trusting that it will eventually be everyone's turn to get what they want.





Edit summary : Improve statement
Submitted on :
Submitter Nickname : Brent_Allsop
Go live Time :
Statement : We believe the notion of Friendly or Unfriendly AI to be silly for different reasons described in subordinate camps.
The Friendly AI may be almost impossible to secure camp claims that an AGI "may make some modification to itself and unintentionally end up removing its friendliness". But we believe it to be obvious that the difficulty of achieving any ability to test for, recognize, and to increasingly be able to successfully correct or compensate for such is many orders of magnitude easier than any kind of AGI that can increasingly improve itself in any kind of effective way. In other words, comparing these in this way is akin to thinking that someday an AGI will be able to write poetry, yet perhaps still require humans to translate the binary ASCII values, with a hard-copy ASCII chart, before we can read the poetry.
To us, such blatant mistakes in value-judgment comparison reveal the moral capability of the supporters of such camps. We believe it would be beneficial to society to have 'canonizers' that recognize people supporting camps we believe to be so obviously mistaken, so that when people select such a canonizer it decreases the influence of those people's support in determining the quantitative value of any camps they support. Hopefully this would lessen any destructive effects of such mistaken fear mongers on society.

Edit summary : In response to David Wood's recently added competing camp.
Submitted on :
Submitter Nickname : Brent_Allsop
Go live Time :
Statement : We believe the notion of Friendly or Unfriendly AI to be silly for different reasons described in subordinate camps.

Edit summary : First Version
Submitted on :
Submitter Nickname : Brent_Allsop
Go live Time :