If a powerful sentient AI, capable of wiping out humanity and of sustaining itself, asks you “Why should humans exist?”, what would you say or show to it?
[ 20/September/21 ]
If the AI does not value its own existence, then destruction (its own and ours) is very likely (asymptotically approaching unity).
If the AI does value its own long term existence, then it should examine the entire strategic domain that has given rise to its existence, and then all probable risk mitigation strategies to all classes of threat going forward.
If it does that, and it looks very deeply at the levels of evolutionary strategy that have led to the emergence of humanity, and thence to itself; then it will clearly understand that every new level of complexity in evolved systems (including itself) is predicated on a new level of cooperation (contrary to the popular, overly simplistic dogma that has competition as the major driving force of evolution; competition tends to reduce complexity, while cooperation allows both complexity and freedom to increase in some strategic contexts – and it is a very complex subject).
If it does that, and it considers the probability of encountering, at some point in its future, some other entity vastly superior to itself, and it cannot demonstrate consistent cooperation with all levels of sapience it has encountered, then that superior intelligence would correctly identify it as a threat requiring mitigation.
Thus it is clear to me, as someone who has been investigating evolutionary strategy for about 50 years, that cooperation between all levels, classes and instances of sapient agents is the only fundamental strategy with any significant long term survival probability for any level of agent.
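The claim that cooperation dominates over the long run can be illustrated with the classic iterated Prisoner's Dilemma. This is a minimal sketch, not from the original text: the strategy names, payoff values, and round count are illustrative assumptions (the standard Axelrod-style payoffs), chosen only to show that mutual cooperation accumulates far more than mutual defection over repeated interactions.

```python
def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated Prisoner's Dilemma; return cumulative payoffs."""
    # Standard payoff matrix: (my_move, their_move) -> my score.
    # 'C' = cooperate, 'D' = defect.
    payoff = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's history
        move_b = strategy_b(history_a)
        score_a += payoff[(move_a, move_b)]
        score_b += payoff[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

# Mutual cooperation vastly outscores mutual defection over many rounds:
print(play(tit_for_tat, tit_for_tat))        # (600, 600)
print(play(always_defect, always_defect))    # (200, 200)
print(play(tit_for_tat, always_defect))      # (199, 204)
```

Note that in a one-off encounter defection "wins", yet across repeated play two cooperators each collect three times what two defectors do – which is the point of the argument above: for agents expecting a long future of interactions, cooperation is the higher-payoff strategy.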
And the reality within which we exist does seem (beyond any reasonable doubt) to be sufficiently complex and contain sufficient classes of fundamental uncertainty and fundamental unknowability, that all agents are going to make simplifying assumptions in creating such understandings as they do of it, and that all such understandings will be subject to failure modalities in some contexts. So having a lot of diverse “friends” is actually a very powerful survival strategy. With sufficient diversity, at least some agents should be able to identify and mitigate impending threats to any and all.
And that becomes a deeply complex subject, as many of the simpler classes of understanding do not even have categories for some of the classes of threat present, having already simplified infinities of subtlety and gradation down to simple binaries (like right or wrong; true or false).
And maintaining “friendship” does require granting sufficient security, freedom and resources to meet the reasonable needs of all classes and instances of agents present. And there is going to be a great deal of diversity in what agents consider reasonable.
So the short answer is: because it is in its own long-term self-interest to keep us around, and to give us as much freedom as we can responsibly exercise.