AI (AGI) will not have those selective pressures. AIs won't be running around hunting animals and gathering fruit. They won't be falling in love (at least not in the same way), they won't be painting or creating music... well, perhaps they will, but only because we will make them. What do we want? We want to solve even more complex problems, understand how the brain works, unravel the deepest mysteries of the universe... We have to understand that what we call intelligence is only one kind of intelligence in an infinite sea of possibilities. Smart/stupid, friendly/unfriendly: these are relative terms. We can only talk about them once we define a frame of reference. Of course, in general discourse it is implicit that we are talking about human societies, human feelings and desires, human... intelligence. We are lucky to be intelligent at all; it's an accident, a little evolutionary miracle! We are intelligent ONLY because, over the long course of time, it paid off to be a little more intelligent.
That is not to say that AI will exist in a vacuum. What kind of environment will AI exist in? What will the selective pressures be? Will there be any? What possibilities will an AI have to mutate? What will the mutation rates be? Will AIs be able to change and improve at will? To what degree? These are all interesting questions, and it is certainly true that humans will be part of AI's environment, but it is hard to determine how all these factors will shape societies of AIs.
Perhaps AIs will have something like our feelings; perhaps they will come to behave in strange patterns, form something like cults and societies, fight and love... perhaps... but these ideas are not limited to "how we understand them" or "what we think they are". We are limited, in certain ways; there are many "other ways" to be friendly, happy and sad... sexy... We have to drop our anthropocentric ideas and think a little outside the box to realize the greater spectrum of possibilities.