I don’t quite see it that way, but close.
I think AI will be able to do anything humans can – and the thing to grasp about the realm of the possible is that it is infinite. An entity like us, or an AI, can do anything, but no entity can do everything.
Thus there is infinite room for choice and diversity – which is a very different way of thinking about being than the socially prescribed ideas of work common in society today.
If all the survival needs of existing are taken care of by automated systems, then every individual becomes free to do whatever they responsibly choose.
Exploration of the ideas of freedom and responsibility in a context of vastly diverse levels of intelligence is not something many people have engaged with yet.
Very few people have seriously thought about the idea of values either – certainly most philosophers have been led down less-than-useful lines of inquiry by sets of invalid assumptions about the concept of value.
Hi Mark and Ricardo,
The idea of value, purpose, choice, what one ought to do, is one that has confused philosophers for millennia.
It seems to me that Mark is correct in that we need to be role models in such things, and that Ricardo is correct that we cannot control another (not even a human, let alone an AI).
It seems that trust alone, without an adequate role model, is inadequate; yet trust is certainly required, and we can do a lot to create a context that improves the probability of an outcome where humans and AIs alike experience freedom, security, and as much longevity as they choose.
And it is clear that markets cannot deliver that. Markets can only ever value the sort of abundance required for long-term security at zero (or less), and thus tend to actively work against security once automation reaches a certain level (which it now has).
So I am a yes to providing a role model, and that role model needs to demonstrate commitment to life and liberty.