We believe the notion of Friendly or Unfriendly AI to be silly, for the different reasons described in the subordinate camps.
The "Friendly AI may be almost impossible to secure" camp claims that an AGI "may make some modification to itself and unintentionally end up removing its friendliness". But we believe it is obvious that achieving the ability to test for, recognize, and increasingly correct or compensate for such a change is many orders of magnitude easier than building any kind of AGI that can effectively improve itself. In other words, comparing the two in this way is akin to thinking that some day an AGI will be able to write poetry, yet perhaps still require humans to translate the binary ASCII values, with a hard-copy ASCII chart, before we can read that poetry.
We believe that any super intelligence will necessarily be moral, or 'friendly'. A corollary is that "unfriendly AI" is impossible. Since we accept this principle as obviously true, we believe even discussing something like "unfriendly AI" is a mistaken waste of time.
We define 'super intelligence' in this case to be that which at least has the intelligence required to make humans immortal. Humanity and/or AIs will have achieved this level of intelligence once the last person dies.
Once intelligent beings become immortal, they must come to realize the following logical doctrines.