David Brin on Posthuman AI

Preparing for our posthuman future of artificial intelligence

By David Brin
“Each generation imagines itself to be more intelligent than the one that went before it, and wiser than the one that comes after it.” – George Orwell
What will happen as we enter the era of human augmentation, artificial intelligence and government-by-algorithm?

I greatly admire David as a creative thinker.
It seems clear to me that his ideas around free information come very close to something important, yet he has not explicitly stated the most powerful context for them.

And several things seem to be missing from this article.

I align with the idea of open information in public spaces, but it needs to be combined with distributed networks of both information and trust at all levels, together with technology to automate sharing within such networks, and at certain nodes between them.

The continued reliance on the idea of money and exchange seems to me to be fundamentally flawed.
If anyone seriously thinks that money is a good measure of what is really important, then simply take away the things that have no monetary value and see what happens.
Take away air, rain, oceans, gravity, and see how easy it is to survive.

In an age when most things were genuinely scarce, money was arguably a reasonable tool. It has sort of worked most of the time.

And now we are in an age of exponentially expanding sets of fully automated systems capable of delivering an exponentially expanding set of goods and services in the same sort of universal abundance as air. Any such universally abundant thing must have, by definition, zero market value.
Money is not a useful tool (at any level) to manage such things.
Dividing by zero does not give sensible outcomes (as any computer programmer knows – or at least any who started in the 70s, and came through the 80s).
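
To make the division-by-zero point concrete, here is a minimal sketch in plain Python (the function and numbers are my own, purely illustrative): a naive "how much can this budget buy" calculation works while a good is scarce, and simply stops producing sensible answers once the price of an abundant good reaches zero.

```python
# Naive market logic: value signalled through price breaks down at zero price.

def units_affordable(budget: float, unit_price: float) -> float:
    """How many units can this budget buy at the given price?"""
    return budget / unit_price

print(units_affordable(100.0, 2.5))      # 40.0 -- sensible while the good is scarce

try:
    print(units_affordable(100.0, 0.0))  # a universally abundant good: price is zero
except ZeroDivisionError as error:
    print("No sensible answer:", error)

# Pushing abundance to the limit gives the same message from the other side:
print(100.0 / float("inf"))              # 0.0 -- infinite abundance, zero market value
```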

So in this context, the idea of a high Universal Basic Income seems to be a useful transition strategy towards a post-scarcity set of social systems.

The idea of being more intelligent than past generations has two aspects to it.
In one sense, the idea seems false – in that we all seem to have the same basic cognitive capacity that humans seem to have had for many thousands of years.
In another sense, the sorts of models, tools, habits and ways of thinking that make up our intellectual toolkit do seem to be undergoing exponential change. And not all people are equally exploratory or open to new heuristics, algorithms or paradigms (levels of abstraction).
So we see the distributions of particular modes of being moving out into areas of greater complexity with ever longer “tails” on the distributions, at the same time as the number of dimensions and levels of abstraction are also increasing.
Wolfram has continued the work of Turing and many others, demonstrating not only that computational space is infinite, but also that there are classes of maximal computational complexity in even the simplest of systems. Thus even if reality were fully lawful, if everything followed some unbreakable set of patterns, the outcomes could still be unpredictable.
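
As a concrete illustration of Wolfram's point, here is a short Python sketch (illustrative only; the width and step counts are arbitrary) of Rule 30, a one-dimensional cellular automaton whose update rule is a single line of logic, yet whose output is complex enough to have been used as a source of pseudo-randomness: the only way to know what it does is to run it.

```python
# Rule 30: new cell = left XOR (centre OR right).
# A fully lawful, deterministic rule whose behaviour is effectively
# unpredictable without performing the computation itself.

WIDTH, STEPS = 63, 30
row = [0] * WIDTH
row[WIDTH // 2] = 1  # start with a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    row = [
        (row[i - 1] if i > 0 else 0) ^ (row[i] | (row[i + 1] if i < WIDTH - 1 else 0))
        for i in range(WIDTH)
    ]
```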

It seems to me that reality is most likely to be some fundamental mix of the lawful and the random, which, summed over large numbers, leads to predictability in some contexts but not all.

If that latter conjecture is true (which seems to be a likely interpretation of the equations of QM), then many more aspects of reality become fundamentally uncertain.
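
The "summed over large numbers" part of that conjecture is easy to demonstrate with a few lines of Python (purely illustrative): each individual draw below is irreducibly random, yet the average over many draws is highly predictable.

```python
import random

random.seed(42)  # fixed seed only so the illustration is repeatable

# Any single draw from random.random() is unpredictable; the mean of many
# draws converges towards 0.5, a predictable aggregate built from random parts.
for n in (10, 1_000, 100_000):
    mean = sum(random.random() for _ in range(n)) / n
    print(f"mean of {n:>7} random draws = {mean:.4f} (expected 0.5)")
```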

In the context of all of the above, it seems possible to understand our experience of reality as that of software entities experiencing a software model of reality, running in brains in bodies that have been shaped by many levels of heuristics selected by evolution (the simple expedient of survival) in realms genetic, memetic and beyond.

So to me, the claim that we do not understand what our awareness is doesn't ring true. Certainly, our awareness is the result of vast amounts of processing and heuristics at many different levels (about 20 in someone near the limits of modern understanding), and it cannot possibly understand itself in detail, yet we can have a broad-brush understanding of the classes of systems at play.

Silicon-based life forms will have certain sorts of computational advantage over our sort of biological life, yet the heuristics we have give us approaches to many sorts of problems that raw computation doesn't beat.

I suspect that all of ethics is, at base, a set of evolutionary heuristics.

If one takes a game-theoretic view of the systems present, it is clear that raw cooperation is always vulnerable to cheating strategies, and requires secondary strategies to prevent invasion by cheats. The spaces of possible cheating and stabilising strategies both seem to be infinite, demanding eternal vigilance if one is to retain freedom. Recurse that through as many levels of abstraction as one is willing to put in the time and effort to explore; in my explorations it seems to hold at every level, so in a meta-mathematical, inductive sense I consider it proven, while acknowledging all the levels of uncertainty that attach to anything dealing with reality.
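
A toy sketch of that game-theoretic claim, in Python (the payoff numbers and strategy names are my own illustrative assumptions, using the standard iterated prisoner's dilemma): unconditional cooperation is invaded and exploited by a cheating strategy, while a simple retaliatory strategy such as tit-for-tat removes most of the benefit of cheating without abandoning cooperation.

```python
# Iterated prisoner's dilemma with the standard payoffs T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(opponent_history):   # raw cooperation
    return "C"

def always_defect(opponent_history):      # a simple cheating strategy
    return "D"

def tit_for_tat(opponent_history):        # a stabilising strategy: cooperate, then mirror
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(history_b), strategy_b(history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print("cooperator vs cheat:  ", play(always_cooperate, always_defect))  # (0, 500): cheating pays handsomely
print("tit-for-tat vs cheat: ", play(tit_for_tat, always_defect))       # (99, 104): cheating barely pays
print("tit-for-tat vs itself:", play(tit_for_tat, tit_for_tat))         # (300, 300): cooperation holds
```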

So there is nothing simple here, and there is the hint of something asymptotically approaching stability, security and freedom (acknowledging the fundamental issues of uncertainty that come with exploring infinite spaces of possibility).

Game theory is clear: in environments where the greatest threat comes from entities like ourselves, competitive modalities will have the greatest survival value, while in environments where the greatest threats to our existence come from outside factors, in-group cooperation can deliver the greatest probability of survival. All of ethics can be viewed as higher-level strategic heuristics in this sense.

Evolutionary time has certainly included many periods when in-group factors dominated survival, and for our species it seems that for most of our evolutionary history cooperation has been of greater benefit.
Thus we all contain many levels of both sets of systemic heuristics and their associated strategic systems.

And today we have the technology to deliver abundance to all in a way that will sustain cooperation at the highest levels, while still allowing for many levels of expression of competitive modalities, within the contexts of universal respect for life and liberty.
And there can be no hard or clear boundaries in such complex systems. There must be uncertainty and flexibility at all levels.

So to me, it is clear that much of what we currently call finance can be more accurately characterised as cheating strategies on the cooperative that is humanity. We must go beyond that, if we seek reasonable probabilities of survival.

So I am not in any sense advocating a system that promotes predatory or parasitic modes posing unacceptable risk to the lives of any individual. And I am clear that all individuals carry the potential for infinite modes of expression of the possibilities of being.
I agree with David in the sense that preaching won't work; we require active strategies to remove any benefit gained by the use of "cheating" strategies (at any level), plus a bit, but not too much. Ostrom's work is clear: it must always be in the transgressor's interest to return to a cooperative modality if stability is to be achieved.
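
A tiny sketch of that "remove the benefit plus a bit" rule, in Python (the numbers are my own illustrative assumptions, not anything from Ostrom's data): the sanction slightly exceeds the gain from cheating, so cheating never pays, yet it is mild enough that returning to cooperation remains the transgressor's best available option.

```python
def sanction(gain_from_cheating: float, margin: float = 0.1) -> float:
    """Remove the benefit gained, plus a bit (but not too much)."""
    return gain_from_cheating * (1.0 + margin)

cooperate_payoff = 3.0   # per-round payoff for playing fair
cheat_gain = 2.0         # extra payoff a cheat extracts in one round

net_if_cheating = cooperate_payoff + cheat_gain - sanction(cheat_gain)
print(f"cooperate: {cooperate_payoff}, cheat then face the sanction: {net_if_cheating}")
# 3.0 vs 2.8: cheating is never the better move, but the penalty is small enough
# that rejoining the cooperative at 3.0 per round stays in the transgressor's interest.
```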

Money and markets are not useful tools for highest-level decision-making when significant levels of abundance are present. A different set of values is required.

How we choose those values, and how we negotiate boundaries in specific contexts, will define us and our survival probabilities.

I suggest that an absolute minimum set of values for reasonable survival probabilities is a universal respect for individual life and the individual liberty of all entities capable of modeling themselves in their own model of reality, and of making strategic abstractions and distinctions and choices that influence the probability of their future behaviour.

It seems that the vast majority of our species fall into this set, and that we may soon be joined by many non-biological entities.

It seems to me that it is the freedom of the strategic trust networks historically associated with markets that is at the heart of freedom, not the markets themselves.
It is those networks, not the scarcity-based concept of markets, nor their value metric money, that are of value to freedom-loving entities.

[followed by]

I understand the reality of poverty, and the systemic causes of it (using markets to value things).

I give $10 a day to the UN WFP – which is something practical and immediate.
Most of my time and energy goes towards changing the ideas that deliver the systemic incentives we have that create poverty.

Cooperation is fundamental.
Trust is fundamental.
Effective strategies to prevent invasion and overwhelm by cheating strategies are fundamental.

One has to take a multi-level approach, with the major focus on long-game outcomes.

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) - see www.tedhowardnz.com/money