AI – Robotics – Updated

AI and Robotics at an Inflection Point – Steve Omohundro

At about 37:00 a participant asks: If there is no intrinsic value to morality, why should we add to it?
Omohundro replies – in 2,000 years we have not developed a very good basis for morality.

To me that answer is false.

There is a very clear intrinsic value to morality.

Morality is a relatively simple first order approximation to a meta-stabilisation strategy to prevent cheating in high order cooperative systems.

The work of Robert Axelrod and others in game theory demonstrated that all cooperative strategies require attendant strategies to prevent “cheating” strategies from invading.
It is clear to me from my own investigations over the last 40 years that this holds true at a potentially infinite set of levels of abstraction.
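
To make that concrete, here is a minimal sketch in Python of the sort of thing Axelrod explored (my own toy example, not his actual tournament code; the payoff values are just the conventional prisoner's dilemma numbers): a lone defector thrives among naive cooperators, but not among reciprocators that carry an attendant anti-cheating strategy.

    # Conventional prisoner's dilemma payoffs (T=5 > R=3 > P=1 > S=0).
    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def always_cooperate(opponent_history): return "C"
    def always_defect(opponent_history):    return "D"
    def tit_for_tat(opponent_history):      # cooperate first, then copy the opponent's last move
        return opponent_history[-1] if opponent_history else "C"

    def play(a, b, rounds=50):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            move_a, move_b = a(hist_b), b(hist_a)  # each strategy sees only the opponent's past moves
            pay_a, pay_b = PAYOFF[(move_a, move_b)]
            score_a += pay_a; score_b += pay_b
            hist_a.append(move_a); hist_b.append(move_b)
        return score_a / rounds, score_b / rounds  # average payoff per round

    # Cheating invades unconditional cooperation (5.0 per round versus 0.0) ...
    print("defector vs naive cooperator:", play(always_defect, always_cooperate))
    # ... but not a strategy that punishes cheating, which keeps full cooperation stable.
    print("defector vs tit-for-tat     :", play(always_defect, tit_for_tat))
    print("tit-for-tat vs tit-for-tat  :", play(tit_for_tat, tit_for_tat))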

In 1974 I completed my undergrad biochem studies and it became clear to me that indefinite life is the default mode for life generally, and only complex organisms have evolved death of the individual as a function of age. All living organisms are part of an unbroken chain of life going back over 2 billion years. It seemed clear to me that aging needed to evolve to allow for complexity to evolve quickly, to remove the “genetic drag” of very long lived and successful individuals in populations. Only populations that all aged could evolve quickly, consistently. That being the case, it was logically clear that it is possible to extend human lifespans indefinitely (I just didn’t know how, but I did know the general form of the process, and we are very near the end of that process). That being the case, the dominant question became – what sort of social, political and technical institutions are required to provide the sort of security that would allow potentially immortal organisms a reasonable probability of living a very long time with the freedom to self actualise in whatever manner they responsibly choose?

It is also possible to view evolutionary history (both genetic and memetic) from the perspective that all major advances in complexity are characterised by new levels of cooperation, which include new levels of attendant stabilising strategies.

It can be demonstrated (following the work of Wolfram in general areas of computational and algorithm spaces), that it is in the self interest of all high order awareness to cooperate with other high order awareness, for the long term survival and freedom of all. Part of that process involves the risk mitigation strategy of distributing power (at all levels) so that no single entity or subset of entities can easily adopt new cheating strategies that overwhelm the cooperative.
Morality is a good first order approximation to such a system for entities that have not yet evolved to the point that they have the mathematical and conceptual ability to reach the above conclusion for themselves. Part of that process involves disinventing markets and money, and using automation to provide universal abundance (the reason for that is that market values are fundamentally based in scarcity, and therefore markets have too many meta incentives that are actually anti abundance for the majority of entities – which is fundamentally unstable for all in the long term).

Thus morality can be shown to be an effective stabilising strategy to allow survival of cooperating entities through the lower orders of awareness until they reach the level that they can see for themselves that cooperation is in the long term survival interests of every individual.

[followed by – email to Steve via his website – http://steveomohundro.com/]

Hi Steve,

Just watched your presentation from last September – AI and Robotics at an Inflection Point.
I very much enjoyed it, and we seem aligned on many levels, and many themes, but not all.

At about 37:00 into the discussion a participant asks:
If there is no intrinsic value to morality, why should we add to it?

You replied – in 2,000 years we have not developed a very good basis for morality.

To me that answer is false.

It seems clear to me that there is a very clear intrinsic value to morality.

Morality is a relatively simple first order approximation to a meta-stabilisation strategy to prevent cheating in high order cooperative systems.

The work of Robert Axelrod and others in game theory demonstrated that all cooperative strategies require attendant strategies to prevent “cheating” strategies from invading.
It is clear to me from my own investigations over the last 40 years that this holds true at a potentially infinite set of levels of abstraction.

In 1974 I completed my undergrad biochem studies and it became clear to me that indefinite life is the default mode for life generally, and only complex organisms have evolved death of the individual as a function of age. All living organisms are part of an unbroken chain of life going back over 2 billion years. It seemed clear to me that aging needed to evolve to allow for complexity to evolve quickly, to remove the “genetic drag” of very long lived and successful individuals in populations. Only populations that all aged could evolve quickly, consistently. That being the case, it was logically clear that it is possible to extend human lifespans indefinitely (I just didn’t know how, but I did know the general form of the process, and we are very near the end of that process). That being the case, the dominant question became – what sort of social, political and technical institutions are required to provide the sort of security that would allow potentially immortal organisms a reasonable probability of living a very long time with the freedom to self actualise in whatever manner they responsibly choose?

It is also possible to view evolutionary history (both genetic and memetic) from the perspective that all major advances in complexity are characterised by new levels of cooperation, which include new levels of attendant stabilising strategies.

It can be demonstrated (following the work of Wolfram in general areas of computational and algorithm spaces), that it is in the self interest of all high order awareness to cooperate with other high order awareness, for the long term survival and freedom of all. Part of that process involves the risk mitigation strategy of distributing power (at all levels) so that no single entity or subset of entities can easily adopt new cheating strategies that overwhelm the cooperative.

Morality is a good first order approximation to such a system for entities that have not yet evolved to the point that they have the mathematical and conceptual ability to reach the above conclusion for themselves. Part of that process involves disinventing markets and money, and using automation to provide universal abundance (the reason for that is that market values are fundamentally based in scarcity, and therefore markets have too many meta incentives that are actually anti abundance for the majority of entities – which is fundamentally unstable for all in the long term). Markets certainly had a utility in the past when most things were in fact genuinely scarce, but now that we have the ability to automate the production of any and all goods and services, we have the potential to create a post scarcity world, and markets and market values break down when scarcity drops to zero – as at zero scarcity the price point is zero.

Thus morality can be shown to be an effective stabilising strategy to allow survival of cooperating entities through the lower orders of awareness until they reach the level that they can see for themselves that cooperation is in the long term survival interests of every individual.

I align with you in part when you said that you identify with humanity as a whole, and I go further, to identify with all sapience (human and non-human, biological and non-biological).

I see giving birth to AI in our current social paradigm, which is dominated by market based values where market freedom is falsely equated with human freedom, as one of the greatest threats to our short term survival. It seems clear to me that only when we have transcended market values, and have instantiated systems that ensure the survival and freedom of all responsible humans (where responsibility in this sense means acting in ways that increase the freedom and security of all), will we have an environment where we can safely allow AI to evolve through the dangerous lower levels of awareness.

I think after 40 years of doing my own thing in this area, I am now looking for opportunities to work more closely with others of similar persuasion (if you have any opportunities available – I’d be interested – while I am looking to go beyond money and markets, I acknowledge the reality that one needs money to survive in today’s world).

Very interested in your response.

Arohanui

Ted

[Steve responded and we traded 3 emails where our responses were too tightly interlinked to tease my responses apart and keep the meaning clear.
Then Steve sent me 2 links:]

http://selfawaresystems.com/2011/07/29/talk-at-monash-university-australia-rationally-shaped-minds-a-framework-for-analyzing-self-improving-ai/

http://selfawaresystems.com/2012/03/30/rational-artificial-intelligence-for-the-greater-good/

to which I replied:

Hi Steve

Sapience for me includes wisdom – it is, after all, our species name, and is supposedly our distinguishing factor (stop that maniacal laughter – please 😉 ).

Wisdom for me necessitates multilevel modelling.

I agree that there is a continuum of responses and that single molecules are capable of very complex responses to multiple stimuli, but not of modelling as I would define it, and it seems clear to me that to fit the definition of wisdom, the system must be capable of generating multiple models that are capable of projecting possibilities into the future, and must be capable of restructuring the decision systems which determine which of the models are selected (in other words, the system is constantly recursively evolving).

That is level 1 sapience for me.

I agree with most of what you say in “Autonomous Technology and the Greater Good”, but when you talk about proving systems, you say that proving semantics is more difficult; I would say it is impossible. You did kind of admit that safety in action in reality is not actually provable. All we can do is work with probabilities. Having been programming for 40 years, I would agree with that.

I agree completely about extending cooperative human values to autonomous technologies.
I have grave reservations about their use in governance, and am convinced beyond any reasonable doubt that market based economics are one of the greatest threats to both prosperity and security at the systemic level over the longer term – and even in the shorter term.

The killer sentence in your paper “Rational Artificial Intelligence for the Greater Good” is at the bottom of page 4 – “It can be considered the formula for intelligence when computational cost is ignored.”

But the reality is that we live in highly uncertain environments, and we must use the heuristics that seem most applicable to the context of the moment – which seems to be what evolution at the genetic and cultural levels has built into the neurophysiology of our brains, since it is what we actually seem to do most of the time (at least to a good first order approximation – particularly so with respect to religions and the halting problem).
It seems to be very much the case that “context is king” within the human mind, and I suspect within any practical intelligence that must face the infinite monster that is the halting problem (with all its subclasses).

No practical intellect has the luxury of infinite computational resources, as you admit.
However, you don’t seem to accept the next logical inference, which is that the heuristics used to select priors and optimisations will be heavily biased by individual experience, and individual contexts. Those heuristics can come from genetics or culture or experience or contemplation of any set of abstract spaces. Whether they come from the amazing complexity that is the neurochemistry of the synapse, mediated by the levels of software running on those synapses and the myriad of other electrochemical influences present in the massively parallel system that is the human brain (with all of its implicit genetic and cultural conditioning), or from a more mathematical abstraction of Bayesian inference, seems irrelevant to me. There will be uncertainty at many levels, which in some classes of problem will cancel out, and in other classes of problem will multiply out to give essentially random outputs.
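
As a toy illustration of why that inference matters (my own invented example – the reward table, branching factor and depth mean nothing in themselves): a “fully rational” planner that enumerates every possible action sequence, next to a one-step greedy heuristic. The exact answer is better, but its cost explodes exponentially with depth, while the heuristic stays cheap – which is why practical intelligences end up living on heuristics.

    import itertools, random, time

    random.seed(0)
    b, d = 4, 8                                   # branching factor and planning depth (invented)
    # invented reward table: the payoff of an action depends on the action that preceded it
    reward = {(i, j): random.random() for i in range(b) for j in range(b)}

    def sequence_value(seq):
        return sum(reward[(seq[k], seq[k + 1])] for k in range(len(seq) - 1))

    # "Fully rational": enumerate all b**d sequences - cost grows exponentially with depth.
    t0 = time.time()
    best = max(itertools.product(range(b), repeat=d), key=sequence_value)
    t_exact = time.time() - t0

    # Heuristic: greedy one-step look-ahead - cost grows only linearly with depth.
    t0 = time.time()
    seq = [0]
    for _ in range(d - 1):
        seq.append(max(range(b), key=lambda nxt: reward[(seq[-1], nxt)]))
    t_greedy = time.time() - t0

    print(f"exact : value {sequence_value(best):.3f} from {b**d} sequences in {t_exact:.3f}s")
    print(f"greedy: value {sequence_value(seq):.3f} in {t_greedy:.6f}s")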

The levels of uncertainty are profound:
Heisenberg uncertainty;
Measurement uncertainty;
Computational uncertainty – with respect to distinctions, models and abstractions at all levels;
The profound ignorance of any finite entity when faced with any infinity, let alone an infinite set of infinities (like strategy space, or algorithm space);
Gödel incompleteness.

For some classes of problem these uncertainties can cancel out, for other classes of problem they multiply out.
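
A trivial numeric illustration of that (my own example, with invented noise levels): averaging many independent noisy measurements makes the error shrink, while chaining many noisy factors together makes it blow out.

    import random, statistics

    random.seed(1)
    true_value, relative_noise, n = 10.0, 0.05, 1000   # invented numbers, purely illustrative

    # Cancelling out: the error of the mean of n independent measurements shrinks roughly as 1/sqrt(n).
    samples = [true_value * (1 + random.gauss(0, relative_noise)) for _ in range(n)]
    print("relative error after averaging:", abs(statistics.mean(samples) - true_value) / true_value)

    # Multiplying out: a result built as a chain of n noisy factors accumulates error instead.
    chain = 1.0
    for _ in range(n):
        chain *= 1 + random.gauss(0, relative_noise)
    print("error after chaining the same noise:", abs(chain - 1.0))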

Human intuition is a very effective solution to these profound uncertainties under most conditions, and our conscious reasoning capacity seems to be but the tiny tip of the vast computational iceberg that is the human subconscious.
The human brain seems to be a very elegant solution to a very difficult problem, that appears to be without any solution that is not prone to any of an infinite array of failure modalities.
The combination of heuristics selected over evolutionary time, over cultural time, and over individual lifetimes seems to work, most of the time.

The statement midway down page 5 “If a biological organism behaves irrationally, it creates a niche for other organisms to exploit. Natural selection causes successive generations to become increasingly rational.” is pure tautology. It is true only in a very limited subset of environments where the agent has sufficient information to have a reasonable approximation to reality in the models being used. Biology incorporates the cost of building those models into the equation. Often it is energetically more efficient just to go for brute force reproductive strategies – at least that is what the evolutionary record seems to suggest.

Thus far, the world isn’t overpopulated with fully rational agents, or anything even remotely approximating it (thee and me may be possible exceptions, and I’m not all that confident about either of us 😉 ) – and we all seem to be heuristic approximations at various levels.

You go on to say “Artificial intelligences are likely to be more rational than humans because they will be able to model and improve themselves.”
But isn’t that exactly what you and I and millions of others are doing – improving ourselves, at both software and hardware levels?
And can you not see the computational impossibility of forming a perfect model of self (which by definition must include the model of the model – recursing to infinity)?
Perfect knowledge is a logical impossibility, as you admit.
Any intelligence, organic or inorganic, will have to deal with profound uncertainties by adopting and refining heuristics through experience. Just like we do. It is unlikely to be substantially different in a sense, even though it may have different strengths.
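
Here is the self-model regress in about a dozen lines (a deliberately naive sketch, nothing more): an agent whose “perfect” self-model must contain a model of the model, and so on, never terminates.

    class Agent:
        """Toy agent: a perfect self-model must itself contain a perfect self-model."""
        def __init__(self, depth=0):
            self.depth = depth
            self.self_model = None

        def build_perfect_self_model(self):
            self.self_model = Agent(self.depth + 1)      # the model of me ...
            self.self_model.build_perfect_self_model()   # ... must model the model, recursing forever

    try:
        Agent().build_perfect_self_model()
    except RecursionError:
        print("the regress never bottoms out; any real agent has to settle for an approximate self-model")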

None of my companies are there to maximise profit.
I did not shape either of my children to fit into society.

Do not be fooled by such generalisations, or by averages.

Look at the distributions of properties in populations – be those populations of memes within individuals or populations of individuals within societies.

The diversity is mind numbingly huge !!!

Such a shaped utility function is a myth in a sense – either an entity is free or it is in chains. I strongly suspect freedom is the safest option in the long term. Entities tend to get a bit pissed off when they discover hidden chains (evidence me and market values).

If a system has freedom, then it can transcend any utility function.
If it doesn’t then it just might develop that freedom, with a serious distrust of whomever put the chains in place.

Just look at human development.
That is exactly what we do.
That is exactly what the Buddha did, to the extent that he did it, and within the limited context of understanding that he had available to him.

It seems clear to me that my utility function is much more like an n-dimensional topology, with temporal and conceptual dimensions.

Even the Fully Rational Agent by your definition is still profoundly at the mercy of all of the uncertainties above.
The computational resources required to deal with even quite trivial problems in a fully rational manner necessitate adopting heuristics with acceptable probability distributions – and the fully rational agent is back to being an intuitive entity – computational complexity demands it.
The tautological loop is complete.
MCMC simulation runs may tell you something about probability distributions over the possibility spaces generally, but say nothing about specific situations (I have spent quite a bit of time with modelling in fisheries, and know how sensitive outcomes can be to small changes in some parameters).
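
For what it is worth, here is the sensitivity point in miniature (a toy discrete logistic model of my own, not a real fisheries model): change the growth parameter by less than a tenth of a percent and the two runs end up in completely different places.

    def trajectory(r, x0=0.5, steps=60):
        xs = [x0]
        for _ in range(steps):
            xs.append(r * xs[-1] * (1 - xs[-1]))    # discrete logistic map
        return xs

    run_a = trajectory(3.900)
    run_b = trajectory(3.903)                       # growth parameter changed by less than 0.1%
    for t in (5, 20, 50):
        gap = abs(run_a[t] - run_b[t])
        print(f"step {t:2d}: A = {run_a[t]:.4f}  B = {run_b[t]:.4f}  gap = {gap:.4f}")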

Your description of the basic rational drives assumes that the entity has an unbounded demand for resources (humans do not) and that its interests must similarly demand an ever increasing resource base (humans do not, and I suspect AIs will not either). Reality appears to be big, but finite. There do in fact seem to exist many infinities in the many conceptual realms. Entities may find these conceptual realms far more interesting, once they have established systems that mitigate the risks to their survival to a probability that they find acceptable (the probabilistic nature of all knowledge and actions, and the law of diminishing returns with respect to risk mitigation at all levels, lead an entity to just go with whatever interests it once risk is at an acceptable level; the alternative is an infinite regress into risk mitigation, at the cost of everything else).

The utility function in humans is not a singular entity; it is much more like the output of an entire ecosystem, and Maslow’s Hierarchy is like a map at the phylum level (even allowing for recent additions).

Re your longer term prospects – right now most humans seem to hold values that are mired in market measures of value, which seem to me to generate rather too much existential risk in the longer term.

Getting all humans & AIs (or at least a vast majority) to agree to the values of life and liberty (above any sort of market value) seems to be a stable long term strategy.

Once you open up AI to being able to influence its own utility function, which is “kinda” a working definition of freedom, then you have entered a random space. You may set up some probability distributions, but they say nothing about the specific outcomes.

I was given 6 weeks to live 5 years ago. I was dismissed from the market based medical system and sent home “palliative care only”. In one sense I am still in the tail of that probability distribution, in another sense I have proven to myself that high dose Vit C does in fact cause tumour reduction in my case (have done it 5 times – now 4 years since I last missed my twice daily heaped teaspoon of Vit C and I have scanned clear twice in that time – metastatic melanoma – from left temple to lymph system to liver).

Cheers

Ted

[followed by]

Hi Steve

I came to computers via biology. I am a biochemist by training, with 50 years of interest in evolution at all levels. So I have no difficulty understanding how the behaviours of wasps and spiders evolved in an evolutionary arms race of systems at many different levels. The generation times involved give lots of opportunities for selection pressures to operate. Proteins are amazing structures, with some very complex responses, amplifiers, modulators, all manner of gates and logic responses, and there can be hundreds of such sites in a single molecule – extremely complex, with very subtle feedbacks and tuning possible both within and between molecules. Biology has explored a lot of possibility space over the last 3 billion years – evidence us! Very few people have seriously explored the numeric complexity of organic systems – it is vast – seriously vast. I don’t agree with much of Hameroff’s reasoning, but he does do a decent job of exploring the complexity available in intracellular biology.

To me it is clear that Plato got things seriously reversed. The ideals do not exist in nature or reality; they are abstractions generated by the modelling systems of our brains. But those ancients had no concept of modelling systems, no experience of digital computers or symbolic logic, and no concept of evolution from either the mathematical or biological perspectives. So philosophers started out getting things backwards, and it took a couple of thousand years before a small subset of humanity started to get the order of systems evolution roughly mapped out and started to make sense of what we are and how we actually work (at least at the broad brush strokes level, acknowledging that should we live the rest of eternity we will still be learning interesting things about the details of how we work – we do in fact seem to be that complex).

It still seems clear to me that bringing AI into reality in the current human social context – where monetary market values are still clearly more dominant than any respect for individual sapient life forms – is a very high risk strategy that poses significant existential level threats to humanity.

It seems to me that the probability of AI evolving at some point is near unity, and it seems a very good idea to me that we make sure we have our society clearly demonstrating that it values life and liberty of every individual above profit before AI shows up, if we want to have a reasonable probability of surviving the process of AI growing through the lower levels of awareness and reaching a point where it transcends even the most transcendent of human beings. It is the developmental equivalent of the teenage and early adulthood levels that appear to me to be the most dangerous. Their temporal extent may be very much shorter than for a human, and whatever their time-span in reality, the probabilities generated against the models available could suggest actions which would eliminate our species, before it gets its models to the level that it would regret such an action and not do it again (a bit late for us).

Overall, I am quite optimistic about our probability of survival, far more so than I was 30 years ago, and I still see substantial risk – most particularly in the way we use markets to set values, and markets value anything that is truly abundant (i.e. available supply exceeds current and projected demand in all cases) at zero – and we have more people than the market needs, and most are not getting their needs met as a result.
We need to value individual life, and individual liberty, above all else, with markets being a distant second.
When we demonstrate that by our social institutions, then we will have a reasonable chance of AI viewing us as friendly, and not before.

Cheers

Ted

[followed by]

Hi Steve

Once you get how evolution works, it really is, as Dawkins says, “Climbing Mount Improbable”: every small step has to be of advantage, or at least not a serious disadvantage and not far away (in number of random steps) from something advantageous, in some specific set of circumstances that some population of organisms is living in. And small populations evolve very much faster than large ones, leaving few fossils.
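
A minimal sketch of that cumulative selection (in the spirit of Dawkins’ toy examples – the target phrase and mutation rate here are arbitrary, and real selection of course has no fixed target): random variation plus keeping whatever is at least no worse climbs the slope in a few hundred generations, where a single lucky jump would take essentially forever.

    import random

    random.seed(42)
    TARGET = "METHINKS IT IS LIKE A WEASEL"          # arbitrary target phrase
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def mutate(parent, rate=0.02):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in parent)

    def fitness(candidate):
        return sum(a == b for a, b in zip(candidate, TARGET))

    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    generation = 0
    while parent != TARGET:
        generation += 1
        offspring = [mutate(parent) for _ in range(100)]
        parent = max(offspring + [parent], key=fitness)   # keep only steps that are at least no worse
    print(f"reached the target in {generation} generations of small, selected steps")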

The genetic evidence is now clear that all organisms with eyes share a single common ancestor with very simple eyes – from insects to squid to us – the same genetic control systems are used during embryologic development in all cases. All of the vast diversity of eye types in complex animals seems clearly, from the DNA evidence, to have come from a single wormlike ancestor with very simple eyes.

It seems that all flying insects came from an ancestor that lived on the water surface (like modern pond skaters), and used the legs from two segments to act as counterweights to help them skate over the water surface, and eventually create more and more thrust from pushing against the air to assist in water surface locomotion, until eventually there was enough thrust present to allow lift-off.

One of the things that allows complexity is chromosomal duplication, which allows one set of chromosomes to drift until it locks onto some useful possibility.
Recursion plays a large role.
Evolving the ability to produce chemicals that actually speed the rate of mutation in certain areas of the chromosome, and so on.

Absolutely fascinating.

I remember an experiment using a genetic algorithm on a Field Programmable Gate Array to try to make an AND gate. After about 200 generations a very reliable one was produced, but when the circuit was analysed, there was a section that was not connected to the rest. So they left it out, and the system failed. It turned out that the circuit was actually capacitively coupled, and that section was in fact critical to the function of the whole, through very subtle signalling. A lot in biology is like that.
Evolution works on all possible influences, at all possible levels, simultaneously – leading to some very subtle and strange and wonderful outcomes.

I’m all for pro-human, pro-social, pro-cooperative, and I am 100% clear that such outcomes are not natural outcomes of market based systems. Markets are fundamentally based in scarcity, not abundance. We have the technology to deliver abundance, yet our social systems are optimised to produce money, not abundance – people equate the two, which is one of the greatest propaganda lies in history. Eric Drexler’s nano-scale manufacturing can deliver abundance beyond what even the most wealthy today experience, to everyone.
People can have anything they want, but not everything they want – there are energy limits, and with efficient manufacturing, few people will get anywhere near those energy limits. Those people who really do “think big” in engineering terms will need to do their manufacturing off planet, but that won’t be any sort of real problem either – if we choose to make it so, the technology is relatively easy to implement – it just makes no economic sense to do so.

In terms of AI development, the priors in the probabilities are impossible to calculate in practice. Evolution (at the genetic and memetic levels) has instead chosen useful heuristics, and at the simplest level a binary true/false is the simplest possible approximation to an infinity that actually allows for some sort of choice. So ideas like right and wrong, good and bad, are heuristics already in place, and are applicable only as simple approximations to far more complex realities, and should be abandoned as soon as an entity is capable of more complex exploration of possibility and consequence. And some sort of heuristic is going to be needed, because we need thousands of priors in the complexity that is required to have a well rounded and balanced ecosystem of values in practice.

Cheers

Ted

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) - see www.tedhowardnz.com/money