Goodreads – Ask Richard Dawkins a question
A question for Richard (really three in one):
I am keen to know: in light of Axelrod's work on the role of secondary strategies in stabilising cooperation at every level and preventing invasion by cheats, what do you think the chances are of humanity developing a cooperative framework that uses advanced automation to support every individual in self-actualising in whatever way they responsibly choose, any time soon?
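For readers unfamiliar with the Axelrod result invoked above, here is a minimal sketch, not Axelrod's actual tournament code, of the core dynamic: a retaliatory strategy such as tit-for-tat, paired against itself and against an unconditional defector in an iterated prisoner's dilemma with the standard payoffs (T=5, R=3, P=1, S=0). The function names and round count are illustrative choices.

```python
# Illustrative iterated prisoner's dilemma: why retaliation stabilises cooperation.
# Payoffs use the standard Axelrod-tournament values: T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate on the first move, then copy the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    # The "cheat": defects unconditionally.
    return "D"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (600, 600)
print(play(tit_for_tat, always_defect))  # cheat wins one round, then loses the surplus: (199, 204)
print(play(always_defect, always_defect))  # mutual defection: (200, 200)
```

In a population of reciprocators, each pairing yields 600 points per player, while a lone defector extracts only 204 from any pairing; this is the sense in which a secondary (retaliatory) strategy makes cooperation resistant to invasion by cheats.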
Here, responsibility means taking reasonable action to mitigate all reasonable risks to the life or liberty of every other sapient entity. Note too that any such universal abundance would, by definition, have zero market value, just as the oxygen in the air has zero market value even though it is arguably the most important thing to any one of us.
Do you think humans are sufficiently evolved to tolerate the exponentially expanding diversity of conceptual and behavioural phenotypes that must logically result from such an environment?
Do you, as I do, think such a thing is a desirable state to aim for?