This is an extremely deep and interesting question.
Artificial intelligence isn’t a singular thing; it is a spectrum of classes of systems spanning many orders of magnitude, and one that will keep expanding exponentially for the foreseeable future.
Mankind isn’t a singular entity either; it is a spectrum of entities spanning many orders of magnitude in awareness.
The intersection of those two ever-changing spectra will produce a vast array of different classes of outcome. Some people will inevitably simplify that entire set of unimaginable complexity down to a simple binary and call it either good or bad. The reality will be something approaching infinitely more complex than that.
Like everything else in life, it will very much come down to the choices we make when interacting with it, provided we manage to make it somewhere near at least as cooperative as we are, and don’t create a dangerous monstrosity with an overly simplistic set of valence heuristics that leads to our destruction. (Enough people are aware of that problem space that it is a reasonably low-probability outcome.)
It is likely that the diversity of what it is to be human will increase, which will be a difficult thing for many on the conservative end of that spectrum.
It is likely that communication will become more difficult, and impossible in many instances, as the diversity and complexity of interpretive schema with no significant overlap continues to increase. AI will both amplify this class of outcomes and moderate the worst of its risks, as it will be the best translator available, to whatever degree translation is possible.
If all goes well, it will be an age of magic and security for most, as AIs allow us to do more with less in ever more domains, and the experiential choices available to most people continue to increase exponentially. There may not be enough energy to allow everyone to fly everywhere every day, but there will be technology to let us experience first-person flight through small drones, or life as a bird or a fish (fitted with transmitting sensors), or anything else. Our personal energy budgets may restrict us to a couple of world tours a year in person, but communications networks will allow us to go anywhere, any time, as virtual entities: taking our inputs from sets of sensors in those places, talking to whomever will talk to us, seeing what there is to be seen.
Our health and lifespan will be optimized and personalized, and most will live as long as they want (many thousands of years in most cases). Accidents will still happen, though they will become more and more rare.
All of the causes of risk to life from our past will have effective and constantly improving mitigation strategies, and reality seems to be sufficiently complex that there will always be new risk emerging in our future that will require attention from time to time (the price of liberty is eternal vigilance).
So AIs, in their many different levels of instantiation, seem likely to allow us each to customize our experience of being to a degree never before possible. They will be within us as part of our nanotech-enhanced immune systems; around us as smart devices and buildings; supporting us as smart roads, water supplies, food production, recycling, and energy systems, constantly working to ensure there is sufficient supply to meet all demand (all products and services); and mentoring us as AGIs. Many of the ideas from our past (like money and exchange) will be redundant.
What we each make of that will be down to our own choices, the degrees of awareness we bring, the strength of our wills, and the respect and tolerance we have for diversity.