I think we will never have human-level AGI in the same way that we will never have bird-level flight - because there is no scalar "level".
Does an F-22 have bird-level flight? In one sense the answer is yes, and then some - it flies far faster than any bird.
But flight in the real world includes refueling, maintenance and manufacturing. And an F-22's performance in these areas is infinitely inferior to that of a bird; it is entirely dependent on humans to manufacture, refuel and maintain it.
At this point some readers will be thinking that these gaps might someday be filled in given sufficiently advanced nanotechnology. And indeed there is no known law of physics that forbids this.
But when you're working with machine phase rather than living cells, even if you _can_ burden a combat aircraft with the cost and overhead of these capabilities, there is no practical reason to do so. The F-22 was built by professional engineers for a practical purpose. If bird-level flight is ever created in centuries to come, it will be done by hobbyists for the coolness factor, and only long after it is of no practical relevance. That is what I mean when I say we will never have bird-level flight in the practical sense: it will never be done by anyone working in their capacity as professional engineers, because the _shape_ of capabilities implied by machine phase is so different from that implied by biology.
The same applies to intelligence. It is not a scalar quantity, but possesses a complex shape. We already have computers that outperform humans in arithmetic by a factor of a quadrillion, yet underperform in almost all other tasks by a factor of infinity. That's a difference in shape of capabilities that implies a completely different path. It will be no more feasible or necessary for AGI to duplicate all the abilities of a human than it is feasible or necessary for an F-22 to duplicate all the abilities of a bird. (Again, I'm not saying an AGI with the shape of a human mind can't ever be created, in a thousand or a million years or whatever from now - but if so, it will be done for the coolness factor, not by professional engineers who want it to solve a practical problem. It will never be cutting edge.)
Furthermore, even if you postulate an AGI0 that could create an AGI1 unaided in a vacuum, there remains the fact that AGI0 won't be in a vacuum, nor, if it were, would it have any motive for creating AGI1, nor any reason to prefer one bit stream rather than another as a design for AGI1. There is, after all, no such function as:
float intelligence(program p)
There is, however, a family of functions (albeit incomputable in the general case):
float intelligence(program p, job j)
In other words, intelligence is useful - and can even be said to exist - only in the context of the jobs the putatively intelligent agent is doing. And jobs are supplied by the real world - which is run by humans. Even in the absence of technical issues about the shape of capabilities, this alone would suffice to require humans to stay in the loop.
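To make this concrete, here is a minimal sketch in C of what one member of that family of functions might look like, with a job modeled as a fixed set of input/expected-output tasks. All the types and names here are hypothetical illustrations, not any real API; note too that the sketch assumes p halts on every input, which is precisely the assumption that fails in the general (incomputable) case.

    /* Hypothetical illustration: "intelligence" is only computable
       relative to a particular job. */
    #include <stddef.h>
    #include <string.h>

    typedef struct { const char *input; const char *expected; } task;
    typedef struct { const task *tasks; size_t count; } job;
    typedef const char *(*program)(const char *input);

    /* Score a program against a job: the fraction of the job's tasks
       the program solves. Change the job and the same program gets a
       different score - there is no job-independent number to compute.
       Assumes p halts on every input; in the general case this
       evaluation is incomputable. */
    float intelligence(program p, job j) {
        size_t solved = 0;
        for (size_t i = 0; i < j.count; i++)
            if (strcmp(p(j.tasks[i].input), j.tasks[i].expected) == 0)
                solved++;
        return j.count ? (float)solved / (float)j.count : 0.0f;
    }

The same p scored against two different jobs yields two different numbers, which is the whole point: there is no meaningful one-argument version to fall back on.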
The point of all this isn't to pour cold water on people's ideas, it's to point out that we will make more progress if we stop thinking of AGI as a human child. It's a completely different kind of thing, more akin to existing software in that it must function as an extension of, rather than a replacement for, the human mind. That means we have to understand it in order to continue improving it - black-box methods have to be confined to isolated modules. It means the user interface will continue to be of central importance, just as it is today. It means the Lamarckian evolutionary path of AGI will have to be based, just as current software is, on increased usefulness to humans at each step.
This is why the question of whether AGI will be Friendly or Unfriendly is as relevant as the question of whether it will be bearded or clean-shaven.