David Oban – Moral Ambitions

Very much enjoyed your talk, David.
Agree that there are risks in lack of privacy, just as there are risks in privacy.
I suspect that the gradations of public and private zones we have now will continue to evolve, developing ever finer gradations between the two.
Homes and private spaces can be private.
Use of public spaces will be public.
Reasonable use of information leaking across boundaries is allowed (if you're talking loudly in your house with the window open, it is not the fault of a passer-by that they heard what you had to say, as distinct from someone using high-tech devices to monitor such conversations).

Agree with you that UBI (universal basic income) might be a useful transitional strategy between scarcity-based exchange values and fully automated and distributed abundance for all.
Ideas like copyright and patent can only apply in commercial realms (and even there, I would argue, only for limited periods – two years normally and five years under exceptional circumstances), and cannot possibly apply in non-commercial realms. No one can un-think an idea just because they didn't get to the patent office first. It might be reasonable to stop people using such ideas for sale in a market for a limited time, but not for private use.

Agree with you that distributed networks are the key to security, and that includes distributing the data, the processing, and the network architecture and mechanics themselves (none of which are mainstream at present). And we need to always be conscious of the need for secondary strategies in such cooperatives to prevent exploitation and destruction by cheating strategies, and of the logical need for eternal and ever recursive vigilance to ensure the existing suite of strategies is actually working.
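To make that a little more concrete, here is a minimal sketch (purely illustrative – the node names, record key and quorum scheme are my own assumptions, not any particular platform's protocol) of one way distributed replication plus cross-checking lets honest nodes detect and outvote a single cheating node:

```python
# Minimal sketch (hypothetical): replicate a record across several nodes and
# read it back by majority vote, so a single node that "cheats" by tampering
# with its copy is detected and outvoted.
import hashlib
from collections import Counter

class Node:
    def __init__(self, name):
        self.name = name
        self.store = {}          # key -> (value, hash) held by this node

    def put(self, key, value):
        digest = hashlib.sha256(value.encode()).hexdigest()
        self.store[key] = (value, digest)

    def get(self, key):
        return self.store.get(key)

def replicate(nodes, key, value):
    """Write the same record to every node (the distributed part)."""
    for node in nodes:
        node.put(key, value)

def quorum_read(nodes, key):
    """Read from all nodes, return the majority answer, and flag dissenters."""
    answers = [(n.name, n.get(key)) for n in nodes]
    counts = Counter(ans for _, ans in answers if ans is not None)
    majority, _ = counts.most_common(1)[0]
    cheats = [name for name, ans in answers if ans != majority]
    return majority[0], cheats

nodes = [Node(f"node{i}") for i in range(5)]
replicate(nodes, "ledger-entry-42", "Alice pays Bob 10 units")
nodes[2].put("ledger-entry-42", "Alice pays Bob 1000 units")   # a cheating node
value, dissenters = quorum_read(nodes, "ledger-entry-42")
print(value)        # "Alice pays Bob 10 units"
print(dissenters)   # ["node2"]
```

The vigilance point above still applies: the quorum check only works while honest nodes remain the majority, so the checking itself has to keep being checked.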

From my experience over the last 4 decades, the hardest idea to get across is that the coordinating role of markets as outlined by Hayek can be effectively replaced by fully cooperative strategies. Most people seem so devoted to the competitive aspect of evolution that they fail to see that all advances in the complexity of life (at all levels) are characterised by the emergence of new levels of cooperation, and that cooperation is becoming ever more dominant over competition (in the resulting realms of abundance).
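One toy way of illustrating that shift (my own sketch – the payoff numbers are assumptions chosen purely to display the dynamic, not measurements of any real economy) is standard replicator dynamics: which strategy spreads depends entirely on which one pays better in the prevailing environment, and once abundance makes cooperation the better-paying strategy, cooperators simply take over the population.

```python
# Toy replicator dynamics: the share of cooperators grows or shrinks depending
# on which strategy earns more against the current population mix.
# Payoff numbers are assumptions, not data.
def step(x_coop, payoff):
    """One replicator step: x' = x * f_coop / f_avg for cooperator share x."""
    (cc, cd), (dc, dd) = payoff           # payoff[my_move][their_move]
    f_coop = cc * x_coop + cd * (1 - x_coop)
    f_defect = dc * x_coop + dd * (1 - x_coop)
    f_avg = x_coop * f_coop + (1 - x_coop) * f_defect
    return x_coop * f_coop / f_avg

def run(payoff, x0=0.3, steps=200):
    x = x0
    for _ in range(steps):
        x = step(x, payoff)
    return round(x, 3)

scarcity = ((3, 0), (5, 1))    # classic prisoner's dilemma: defection dominates
abundance = ((4, 2), (3, 1))   # cooperation pays more whatever others do
print(run(scarcity))    # -> ~0.0: cooperators die out
print(run(abundance))   # -> ~1.0: cooperators take over
```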

And in this realm, it is crucial to understand one of Ostrom’s key insights into successful commons management, that the punishment has to fit the crime, within quite close tolerances. Too little punishment and the cheating persists. Too much punishment and the one who was cheating sees no possibility of future benefit in returning to cooperative behaviour, and so becomes a destructive external force.
Clearly many of our current suite of social institutions (legal and otherwise) have a long way to go in this regard – particularly the many web admins who simply ban people indefinitely, rather than imposing a limited period of censorship after a clear warning of exactly what was considered unacceptable. Some, like the evonomics site, just ban without explanation or warning, with no right of challenge or natural justice.
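Ostrom's point about proportionate sanctions can be put as a back-of-the-envelope calculation. In the toy model below (entirely my own illustration – the payoffs G and C and the outside option are assumptions), a sanction that is too short leaves cheating profitable, one that is too long leaves the cheat nothing worth returning to, and only a middling sanction both deters the cheat and keeps them inside the cooperative:

```python
# Toy model of graduated sanctions (my illustration, not Ostrom's formalism).
# An agent weighs cheating once (immediate gain G) followed by a suspension of
# `ban_rounds` (earning nothing) against steady cooperation (payoff C per round),
# over `horizon` rounds, with a low-value outside option if they walk away.

def payoff_cooperate(C, horizon):
    return C * horizon

def payoff_cheat_then_return(G, C, ban_rounds, horizon):
    # cheat once, sit out the sanction, then cooperate for the remaining rounds
    remaining = max(horizon - 1 - ban_rounds, 0)
    return G + C * remaining

def payoff_cheat_and_leave(G, outside_option, horizon):
    # if the sanction is effectively permanent, the cheat walks away
    return G + outside_option * (horizon - 1)

C, G, outside, horizon = 3, 10, 1, 50
for ban in (0, 3, 100):
    honest = payoff_cooperate(C, horizon)
    stay = payoff_cheat_then_return(G, C, ban, horizon)
    leave = payoff_cheat_and_leave(G, outside, horizon)
    print(ban, honest, stay, leave)
# ban=0   : cheating then returning beats honesty -> too little punishment
# ban=3   : honesty beats cheating, and returning still beats leaving -> fits the crime
# ban=100 : returning is worthless, so the cheat leaves (or turns destructive) -> too much
```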

I am clear that one cannot understand human behaviour without understanding the essence of complexity theory and game theory, and seeing both of those in the context of the holographic way in which evolution works with all influences simultaneously over the long term in determining the genotype and phenotype of a population (at both genetic and recursive memetic levels).

And it is also crucial to draw a clear distinction between levels of intelligence and full sapient intelligence.
To be classed as a fully sapient awareness, deserving of all the freedoms of life and liberty you and I so treasure, it is clear that an awareness must be able to model both itself and others within its own model of the world, and must also be able to influence the constructs of that model, and both the content and the context sensitivity of the value sets it uses to determine behaviour.
Sure, we all get our language and our starting value sets implicitly from a mix of genetic and cultural influences; yet as self-aware and self-determining entities we can, through intention and practice, develop new contexts, new behaviours, new levels of awareness, and new opportunities for action and influence.

The work of many thinkers has clearly shown that the classes of possible value sets, possible algorithms, possible levels of awareness, are all infinite, and that one could spend the rest of eternity investigating any infinity, and still be a close approximation to ignorant with respect to that which remains unexplored.
So I see no rational grounds for supposing that any fully sapient AI will necessarily be any closer to omniscience than any of us. Sure, it will be able to solve some classes of simple problems very much faster than we can; and there will remain many classes of very interesting problems that present just as much challenge to it as they do to us.

Much as I admire Dan Dennett in particular, and Eliezer Yudkowsky, I challenge the assumption that both make: that the universe is causal.

It is clear to me that the mathematics of Quantum Mechanics (as described by Richard Feynman and many others) points to the fundamental fabric of reality being stochastic (random within probability distributions) – Feynman said as much several times. And certainly, by the time you get large enough collections of that fundamental stuff, existing for long enough for humans to perceive, those probability distributions become so densely populated that what we experience is, in most instances, causal in its behaviour to within an approximation far smaller than any measurement error we have managed to achieve to date.
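The numerical intuition behind that claim is just the law of large numbers, and it is easy to demonstrate (the simulation below is my own illustration, not quantum mechanics itself): individual events are random, yet the average over many of them fluctuates less and less as the number of events grows, roughly as 1/sqrt(N).

```python
# Individual events are random, but aggregates of many such events become
# effectively deterministic, with fluctuations shrinking roughly as 1/sqrt(N).
import random

def std_of_mean(n_events, trials=100):
    """Standard deviation of the mean of n_events fair coin outcomes, across trials."""
    means = []
    for _ in range(trials):
        total = sum(random.choice((0, 1)) for _ in range(n_events))
        means.append(total / n_events)
    avg = sum(means) / trials
    var = sum((m - avg) ** 2 for m in means) / trials
    return var ** 0.5

for n in (10, 1_000, 100_000):
    print(n, round(std_of_mean(n), 5))
# Fluctuations around the expected 0.5 fall from roughly 0.16 at N=10 to roughly
# 0.0016 at N=100,000; at the ~10^23 events in anything humans can perceive,
# they sit far below any practical measurement error.
```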

It seems clear to me that only in a universe that is such a mix of the random and the causal (a complex system, with recursive levels of constraints affecting agent behaviour) do the ideas of free will or choice or morality make any sense.
If there were hard causality at the base of the system, then every word I have written here was predestined to happen some 14 billion years ago, and I have no existence as a causal entity (all choice is illusion) – that is the logically necessary outcome of having causal turtles all the way down.

So I am clear that what I experience as reality is a subconsciously created model of reality that my brain assembles from a mix of current inputs and past experience as conditioned by every level of contextual influence that exists (all as matters of probabilities with fundamental uncertainties). And I am also clear that mathematics and logic are great modelling tools, and give me the best approximations possible of reality, and I am under no illusion that they do anything more than approximate reality.
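A loose analogy (mine, not a claim about how brains actually compute) for that mixing of current inputs with past experience, all as probabilities, is a single Bayesian update: a prior belief built from experience is combined with a noisy observation, and what comes out is a revised probability, never certainty.

```python
# A toy Bayesian update: prior belief + noisy evidence -> revised belief.
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule for a single yes/no hypothesis."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Past experience says a shape in the dark is rarely a person (prior 0.1);
# a person-shaped glimpse is seen (likely 0.8 if it is a person, 0.3 if not).
print(round(posterior(0.1, 0.8, 0.3), 3))   # ~0.229: belief shifts, but stays uncertain
```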

As Dawkins so beautifully describes in Unweaving the Rainbow, we are all the most improbable outcomes of the lotteries of birth and survival, yet the process has to deliver something with the general sorts of characteristics we see. Bipedalism is an efficient way of delivering appendages that can make and use complex tools, but trunks and beaks can work too. As evolution does its semi-random walk through possibility space, it is far more likely in any specific line to go backwards towards the simplicity whence it emerged than it is to explore more deeply into the energetically costly realms of complexity. Only in a very small subset of domains is the exploration of the more complex more stable in the long term.

I am very clear from my explorations over the last 42 years that competitive markets based on exchange values do not provide an environment of long term security for self aware entities such as ourselves and any emergent AGI to live peacefully and cooperatively together.
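The "semi-random walk through possibility space" above can be caricatured in a few lines (a deliberately crude model of my own, with arbitrary step probabilities, not a biological simulation): give each lineage a complexity level, make downward steps more likely than upward ones because complexity is energetically costly, and watch most lineages stay near the simplicity they started from while only a small fraction ever wander far out.

```python
# Biased random walk over "complexity levels": down-steps more likely than up-steps.
import random

def walk(steps=1000, p_up=0.4):
    level = 0
    peak = 0
    for _ in range(steps):
        level += 1 if random.random() < p_up else -1
        level = max(level, 0)          # can't be simpler than the simplest
        peak = max(peak, level)
    return level, peak

results = [walk() for _ in range(5_000)]
still_simple = sum(1 for level, _ in results if level <= 5)
ever_complex = sum(1 for _, peak in results if peak >= 25)
print(still_simple, ever_complex)
# The large majority end near simplicity; only a small fraction ever reach level 25.
```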

I am forever grateful to the work of pioneers like Robert Axelrod and Elinor Ostrom who demonstrated clearly the sorts of strategic environments that allow cooperative entities to develop secure and operative trust relationships that last and deliver security for all, in as much as security is possible in any open system.
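For readers who have not met that work, the flavour of Axelrod's result is easy to reproduce (the sketch below uses the standard iterated prisoner's dilemma payoffs T=5, R=3, P=1, S=0; it is an illustration of the general idea, not a reconstruction of his tournaments): a simple reciprocating strategy like tit-for-tat sustains full cooperation with its own kind, while refusing to be endlessly exploited by an unconditional defector.

```python
# Iterated prisoner's dilemma with standard payoffs.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a; score_b += pay_b
        hist_a.append(move_a); hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (600, 600): sustained mutual cooperation
print(play(tit_for_tat, always_defect))    # (199, 204): exploited once, then not again
```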

I make the clear assertion (which is to me beyond any shadow of reasonable doubt) that the more people who are able to see beyond the implicit boundaries of markets and money, the greater the probability of survival for us all.

Those tools that served us so well in times of genuine scarcity have no valid place in the realms of real abundance that must be the logical outcome of the continued exponential growth of computational performance, and of the continued exploration of algorithms and tools that allow such systems to process matter and energy effectively at ever finer scales.

