Friendly AI relies on every modification the AI makes to itself preserving its core friendly nature. However, since nearly all complex software has bugs, any attempted friendly AI is likely to fail: the AI may make some modification to itself and unintentionally end up removing its friendliness.
It is therefore paramount to learn how to test for, recognize, and increasingly correct or compensate for unfriendly AI. This requires establishing, and committing to, a timeline for when such capabilities must be in place.
This research must begin to bear fruit well before any AI, whether unfriendly by design or by mistake, acts to wipe out the human race.
A secure environment for this research effort on friendly AI does not yet officially exist. The AI under study should be accessible only to the humans studying it, and should be confined to a secure location, most likely an oceanic submarine tower with nuclear-bunker-grade protection against violent destruction.