Concern over "Unfriendly" AI is a Mistaken Waste of Time
We believe the notion of Friendly or Unfriendly AI is silly, for the different reasons described in the subordinate camps.
The Friendly AI may be almost impossible to secure
camp claims that an AGI "may make some modification to itself and unintentionally end up removing its friendliness". But we believe it is obvious that achieving the ability to test for, recognize, and increasingly correct or compensate for such a loss of friendliness is many orders of magnitude easier than achieving any kind of AGI that can effectively improve itself. In other words, comparing these in this way is akin to thinking that some day an AGI will be able to write poetry, yet perhaps still require humans to translate the binary ASCII values, with a hard-copy ASCII chart, before we can read the poetry.
Intelligence is necessarily Moral
We believe that any super intelligence will necessarily be moral or 'friendly'. A corollary is that "unfriendly AI" is impossible. Since we accept this principle as obviously true, we believe even discussing something like "unfriendly AI" is a mistaken waste of time.
We define 'super intelligence' in this case as that which has at least the intelligence required to make humans immortal. Humanity and/or AIs will have achieved this level of intelligence once the last person dies.
Once intelligent beings become immortal, they must come to realize the following logical doctrines.
- All intelligences will necessarily have as their ultimate goal getting what all intelligences truly want.
- The more entities that cooperate, the easier it is for them to get what they want.
- True justice is good for and therefore the goal of all intelligent beings.
- If there are disagreements in priorities between intelligences, it is better to put what others want first, as your top priority, while assuming it will eventually be everyone's turn to get what they want.