As many others have noted, AI is in and of itself neutral, for as Shakespeare wrote, "there is nothing either good or bad, but thinking makes it so."
It all depends on what our individually chosen ends are.
For me, I see that every new level of complex life is the result of a new level of cooperation. And raw cooperation is always vulnerable to exploitation, so it requires sets of secondary strategies to detect and remove cheating strategies.
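The cheater-detection point can be illustrated with a toy iterated prisoner's dilemma. Everything here – the payoff numbers, the strategy names, the round count – is an illustrative assumption, not anything from the text; "tit-for-tat" is just one classic example of a secondary strategy that makes raw cheating unprofitable over repeated interactions.

```python
# Toy iterated prisoner's dilemma (illustrative only; payoff values and
# strategies are assumptions for the sketch, not claims from the essay).
# Tit-for-tat cooperates by default but mirrors the partner's last
# defection, so a persistent cheat is exploited once at most and then
# punished every round after.

PAYOFF = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=10):
    """Run repeated rounds; each strategy sees only the opponent's last move."""
    score_a = score_b = 0
    last_a = last_b = "C"  # both start by cooperating
    for _ in range(rounds):
        move_a = strategy_a(last_b)
        move_b = strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

tit_for_tat = lambda opp_last: opp_last   # copy the opponent's last move
always_cheat = lambda opp_last: "D"       # pure exploitation

# The cheat gains only on the first round; thereafter defection is met
# with defection, and both fall far short of mutual cooperation's payoff.
print(play(tit_for_tat, always_cheat))
print(play(tit_for_tat, tit_for_tat))
```

The design point the sketch carries: without the detect-and-punish response, unconditional cooperation would be exploited in every round, which is why raw cooperation alone is unstable.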
I like freedom, which is by definition novelty, which is by definition the creation of new things.
I acknowledge that all complexity has boundary conditions required to sustain it, so freedom is not freedom from consequence; rather, it demands responsibility for the maintenance of all the levels of complexity present and required – responsibilities in both social and ecological contexts.
What people mean by AI spans a vast range.
It can be as simple as a smart lighting system that guesses what people want most of the time and adjusts accordingly, without needing any interaction – just a machine, a tool.
Or it can mean full AGI (Artificial General Intelligence – an individual entity with its own model of reality and of itself, self-aware, free, both like and unlike us).
Some people think that AGI will be so far beyond us as to be God-like. That seems likely to be so in some realms, and not in others.
Reality seems to be sufficiently complex, and sufficiently unpredictable in principle, that even very smart entities will stumble into conditions where having friends to help them out and recover is a very good thing.
Human beings are very complex. Our ancestral lineage has survived for billions of years, in many different conditions. We embody many very complex and subtle heuristics that have made that possible. We can be both competitive and cooperative.
Game theory is clear that when resources are limited, and the greatest threat we face comes from others like ourselves, we can be very competitive in securing what we need to survive.
Yet game theory and evolutionary theory are also clear that when we all have sufficient resources to live and live well, and when cooperative activity can increase the resources available exponentially (which does in fact seem to be our current situation, if one takes the bigger-picture view of the resources available in the wider solar system), cooperation is always a far better strategy than competition. That is clearly present in the many levels of cooperative behaviour we see in the world today.
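The arithmetic behind that claim can be made concrete with a deliberately simple model. All the numbers below – the pool size, the winner's share, the growth rate, the number of periods – are my own illustrative assumptions; the only point carried over from the argument above is that competition divides a roughly fixed pool while cooperation can compound it.

```python
# Toy comparison of competitive vs cooperative payoffs (all parameter
# values are illustrative assumptions, not figures from the essay).

def competitive_share(pool, win_fraction=0.75):
    """Winner takes most of a static pool; conflict does not grow the pool."""
    return pool * win_fraction

def cooperative_share(pool, growth=1.5, periods=5):
    """Joint activity compounds the pool each period; split evenly at the end."""
    return pool * (growth ** periods) / 2

pool = 100.0
print(competitive_share(pool))  # the winner's one-off take from a fixed pool
print(cooperative_share(pool))  # each cooperator's take after compounded growth
```

With these (assumed) numbers, even the outright winner of a competitive grab ends up with far less than either cooperator, because exponential growth of the shared pool quickly dominates any fixed division of it.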
So if we are clear in the way we operate with respect to each other; if we are fundamentally cooperative and are in fact delivering increased security and freedom to every self-aware entity making some reasonable approximation to acting responsibly; and if we have a clear strategy that delivers a reasonable probability of fairly sharing the expanding resources of the Moon and elsewhere in the solar system with AGI – then I am reasonably confident that AGI will be a “good thing” for all responsible cooperative entities.
And that does demand that we all accept degrees of diversity that many have historically felt uncomfortable with. Freedom must deliver diversity, and there is actually great strength and security in diversity (one of the few things we actually have to offer AGI on an ongoing basis).
So certainly there are many dangers: many things will change for which there is no historical precedent; many things will, in a very real sense, require trust from those unfamiliar with the depths of game theory, evolutionary theory, and the emergence of complexity and cooperation; and some things will have to change, because they are no longer appropriate to our rapidly changing present and future.
So it is unlikely to be an easy or comfortable time for everyone, yet it certainly has the potential to provide levels of security and freedom that have no precedent in the historical record. Indefinite life extension, and levels of personal freedom beyond anything in history, do in fact seem to be reasonably high-probability outcomes in the relatively near future (within 20 years).
As someone who has spent almost 50 years fascinated by the possibilities, the dangers, and the sorts of risk-mitigation strategies that actually have a reasonable probability of surviving, I am cautiously optimistic about a future of great benefit to all, however one chooses to define benefit – provided only that everyone makes a reasonable attempt to act responsibly in social and ecological contexts, by respecting life generally and the freedom and right to existence of all other sapient entities, human and non-human, biological and non-biological.