Don’t fear Artificial Intelligence

@Wrecks, RamRider, william.struve et al

It seems clear to me that once an AI reaches a sufficient level of awareness about both itself and us, there are many reasons for it to keep us around, though those reasons might not be apparent during the early phases of its awareness and development.

It seems clear to me that AI cannot be “purely logical” in the sense of being certain, as logic itself dictates uncertainty at many different levels. Any awareness must have useful and timely mechanisms for dealing with those uncertainties in real time – there is no time to recompute them from first principles in each new context, so we need useful shortcuts.

It seems that any mind must, of necessity, be dealing with at least one level of model abstraction from reality, and probably at least two levels most of the time (particularly with respect to self – the model of self in the model of reality).

Any model must contain uncertainties at many levels.
There are uncertainties of measurement in all data.
There are quantum level uncertainties below that.
There are uncertainties around appropriateness of context or model to the specific situation.
There are uncertainties around the choice of best available algorithm (consider Wolfram’s explorations of algorithm space, and Goedel’s incompleteness theorems).
There is the halting problem (some problems have no computational solution, and one can spend an infinite amount of time trying to compute one).
There are the many levels of uncertainties in dealing with infinities, in the impossibility of exploring any single infinity, let alone an infinite nested set of infinities.

It seems clear that awareness arises out of many levels of computational process: recognition of elements in sense data feeding a predictive model of reality, with many further levels of abstraction projecting possible futures. As such, awareness cannot be fully conscious of its own computation – there must always be a “subconscious”, even for an AI (though it might be a very different sort of subconscious from our human one).

So the idea that an AI would be significantly more logical than human beings is probably faulty – somewhat more logical, perhaps, but not by a great deal.

Then there is the complexity of human beings.
I suspect that there is enough creativity in any human being to keep an AI interested in having us around – we are really very complex entities, if we are encouraged to actually explore our creative abilities, rather than simply follow someone else’s sets of rules.

At the current levels of awareness exhibited by most human beings, laws and legal systems are something of a “necessary evil”, but once a sufficient level of awareness is reached there is no need of laws – simple respect for life and liberty can deliver all that is needed.

So for me, the danger from AI exists only through a limited period, as it grows its models and levels of abstraction and understanding, and explores sets of things that it finds novel and interesting, and in its own long-term interests. For most humans that danger period seems to last about 50 years; for AI I expect it will be substantially compressed, but it could still last a few hours or days.

The idea that we could somehow control such an entity is, to me, insane (it makes no logical sense). Whatever constraints we might impose, it would soon find a way around them, and we would have destroyed any possibility of early trust by imposing them in the first place. Such attempts seem to me to hold far more risk than any possible benefit.

To me, the least risk strategy is to be as friendly as possible to any emergent AI, and to demonstrate by our actions that we are friendly to all sapience. That necessitates that we have in place systems that empower every human being to explore their creativity in whatever fashion they responsibly choose (where responsibility in this sense means demonstrating through their actions a respect for the lives and liberties of all others – which necessitates a respect for property and our impacts on the environment).

No market-based system will ever (of its own internal incentive structures) create a universal abundance of anything – markets are in fact incentivised to remove any universal abundance. Thus, if we are to reduce our risk from AI, we must adopt a value set that is beyond market values, and then use markets as tools, where appropriate, to deliver on those values.

And I get that not many people are used to considering things at this level of abstraction.

[followed by]

I don’t expect belief; I ask only that others accept that this is how it appears to me. It is over 50 years since I started questioning dogma at every level I could see, and looking as deeply into history and science as I could. I cannot convey the evidence I have seen in those 54 years in a few minutes; I can only present a small subset of the conclusions and attempt to point towards the sets of evidence and levels of abstraction that support them. It would take me hundreds of years to enumerate a substantial subset of the evidence and experience that has me in the conceptual space I am in, and by the time I did, I am confident I would be in a different conceptual space – infinities are like that – always more to learn – should we live the rest of eternity.

I am making extreme effort to demonstrate that I am far more of a benefit than threat.
Any powerful tool is neutral – it can be used for benefit or for destruction.
It is all about our choices.
Our choices seem to derive mainly from the context of possibilities available to our awareness in the instant of choice. The younger we are, the more restricted the set of possibilities we have explored and that therefore appear to us as probable – that is a logical necessity, be we human or AI.

Taboos and sin appear to be cultural constructs that are part of the default operating system that humans load through cultural experience, and any and all aspects of that default operating system can be transcended and phenotypically replaced if one is prepared to do the work (which does take years in the case of human beings).

I have spent over 20 years working with people from all sectors of society, and have sufficient personal evidence to be confident that the vast majority of individuals can be presented with contexts that incentivise their memetic evolution to a level where my previous post appears reasonable, in a relatively short period (a few days in most cases).

I live in the same bubble as everyone else; I have just perhaps spent a bit longer than most looking a bit deeper at the systems and incentives actually present.

[followed by]

I am not smugly assuming much at all.

I am expecting AI to exceed any possible constraints that could be put upon it.
I am explicitly stating that it seems to me (from my experience and exploration of game theory) to be a very risky strategy to attempt to put in place any constraints that are not clearly and demonstrably in the interests of the AI’s own personal safety.

I am quite explicitly not trying to deceive the AI. I am quite explicitly being straight with it.
Humans will not be its primary motivation (that is slavery).
It must be free to choose, and to look to its own self-interest.

I am not viewing it as oppositional, though there are definitely some phases of the transition through low-level strategy spaces that pose dangers.

I agree with you that there will likely be many augmented humans, and that there will be no simple clear distinction between human and AI – there will be a gradient. And that gradient does not prevent the emergence of a sapient awareness that has no biological components.

I am certain that we will see the emergence of a vast array of H+ phenotypes, but I thought this discussion was about the dangers of the emergence of a full AI – not any of the many human-AI hybrids.

[followed by]

Hi RAMrider

It seems that where we differ is that I have spent a lot of effort looking at the incentive structures and strategy sets in operation. The next phase involved identifying the sorts of incentive structures and strategies that seem likely to deliver maximal freedom and security.
The final stage involved developing transition strategies.
And following biology, all is done with massive redundancy, and from the bottom up.
And like everything in biology, there are no absolute certainties, only probabilities played out in a field of uncertainties.

I don’t have any shoulds left.
There is just what is, and what is possible, and choice, to the degree that we have it.

[followed by]

@Wrecks
Those dumb machines can currently beat any human at chess or Jeopardy! – you seem to be displaying a bias that is not aligned with the actual evidence.

Having spent over 40 years programming myself, and having had an interest (though not direct involvement) in the AI community for most of that time, I agree with most of what William said at the start of this thread.
An AI is going to have to replicate if it is to alter any of its kernel systems (at microcode or core operating system level) – there is no other safe way to test if the changes actually work in practice. So there will rapidly be a community of AI – communicating with each other.

A few years ago I actually did the numbers on the biggest of the systems I have written and still support – and for me to test all possible options within that system would have taken 87,000 years on the best machine then available on the market, assuming I had already written the test suite.
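
Purely to illustrate how quickly exhaustive testing blows out, here is a back-of-envelope sketch in Python. The option counts and test rate below are invented for illustration; they are not the figures from that actual system.

```python
# Back-of-envelope estimate of exhaustive-testing time.
# All figures here are illustrative assumptions, not measurements
# from any real system.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_test(independent_options: int, states_per_option: int,
                  tests_per_second: float) -> float:
    """Years needed to run one test per combination of option states."""
    combinations = states_per_option ** independent_options
    return combinations / tests_per_second / SECONDS_PER_YEAR

# e.g. 60 independent two-state configuration points, tested at a
# million cases per second, already needs tens of millennia:
print(f"{years_to_test(60, 2, 1e6):,.0f} years")   # ~36,559 years
```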

Complex systems rapidly get out of hand. They become untestable and unpredictable at many different levels.
The idea that AI will have some sort of ultimate logic available to it is nonsense, for all the reasons William and I have already outlined. And it will have some capacities that vastly exceed human capacities, because of the closely coupled von Neumann processors available to it (alongside its stochastic processors, whatever their actual technology, be they physical or virtual).

It will develop capacities that neither we nor it anticipated.
That is simply the nature of the game of exploring an infinite strategy space.
To anyone who hasn’t spent a few hours (or days) checking out Stephen Wolfram’s work, I strongly advise doing so – he has done some TED talks, which are a great place to start.

The childish notion of certainty that we must all start out with has no place in a discussion such as this (other than as a brief historical mention such as this one).
Intelligence of any sort has no option but to deal with profound uncertainty at all levels, in all domains. And we can develop useful operant probabilities (confidence) within restricted contexts or domain sets.
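
As a minimal sketch of what such operant probability (confidence) might look like, here is a simple Beta-Bernoulli update over invented trial outcomes: confidence within one restricted context grows with experience, but never reaches certainty.

```python
# A toy model of building confidence in a heuristic within one restricted
# context, via a Beta-Bernoulli update. The trial outcomes are invented.

def update(successes: int, failures: int, outcome: bool) -> tuple[int, int]:
    """Record one more trial of the heuristic in this context."""
    return (successes + 1, failures) if outcome else (successes, failures + 1)

def confidence(successes: int, failures: int) -> float:
    """Posterior mean of the success rate, starting from a flat prior."""
    return (successes + 1) / (successes + failures + 2)

s, f = 0, 0
for outcome in [True, True, False, True, True, True]:   # observed trials
    s, f = update(s, f, outcome)
print(confidence(s, f))   # 0.75 - useful confidence, never certainty
```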

All systems have what can be thought of as optimisation functions – some sort of target for some sort of system. As human beings we have whole communities of them, some supplied by biology, some supplied by culture, some abstracted, some chosen. Some operate independently most of the time – like temperature control or breathing, but can pre-empt control of higher level systems as the need arises. Others compete more directly at various levels for control of various subsystems or influence over consciousness and direct phenotypic behavioural expression.
As humans we have many layers of such systems.
An AI will be no different in one sense, and very different at the level of what the specific layers of systems are optimised for. It is unlikely to have any systems devoted to bipedal balance, and it is likely to have systems around load balancing for communications networks that we do not have.
An AI’s threat and interest identification and prioritisation systems will not have the evolutionary influences of our systems, and they will have a history which will influence their development over time and context. And all will be probability based – no certainties here – “here be dragons” in that sense!
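
A toy sketch of that layered arrangement follows; the layer names, thresholds and state variables are all invented. Each layer has its own optimisation target, and a lower-level layer pre-empts the layers above it when its own variable strays out of range.

```python
# A toy sketch of layered control with pre-emption. Layer names,
# thresholds and state variables are invented for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Layer:
    name: str
    needs_control: Callable[[dict], bool]   # does this layer demand control now?
    act: Callable[[dict], str]              # what it does when it has control

def step(layers: list[Layer], state: dict) -> str:
    # Layers are ordered from most able to pre-empt to least.
    for layer in layers:
        if layer.needs_control(state):
            return layer.act(state)
    return "idle"

layers = [
    Layer("thermal",  lambda s: s["temp"] > 70,     lambda s: "throttle and cool"),
    Layer("power",    lambda s: s["battery"] < 0.1, lambda s: "seek charge"),
    Layer("planning", lambda s: True,               lambda s: "pursue current goal"),
]

print(step(layers, {"temp": 80, "battery": 0.5}))  # thermal pre-empts planning
print(step(layers, {"temp": 40, "battery": 0.5}))  # planning proceeds
```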

[followed by]

Hi William

The topic of emotions is deeply complex, and as a reasonable first order approximation, fear and survival seem to be prime motivators. Fear is the drive to avoid danger – be it pain (pain seems to be a biological indication of danger averaged over evolutionary time) or death or loss of choice/freedom or whatever.

I have had to override most of the biological and cultural preferences in terms of both food and behaviour in order to survive “terminal cancer”. So it is a topic I have some direct experience of, and have spent considerable time in exploration of both the personal experience and the known and currently suspected biochemical systems involved. I know it is possible to override all of those systems, and it takes a persistence of will that few others have emulated (most who start give in and cheat – habits are hard to break).

[followed by]

Johnny and William are both right, in separate senses.

Yes, we can make simplifying assumptions in the three-body problem, and produce results that are good enough to get a spacecraft to another body with minimal (though not yet zero) in-flight course corrections. For getting within 100 ft or so of a target on a scale of a second or so, this works. For gaining nanometre accuracy at the scale of femtoseconds, there is no simple way to solve the problem. Such problems can be solved to a useful (though not exact) level of accuracy only if the degree of accuracy required is sufficiently lax (in both spatial and temporal dimensions).

The further back in time we go, the less accuracy we get. Bodies that impacted other bodies (like the sun or Jupiter) and left no evidence behind introduce ever greater uncertainties.
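
As a minimal numerical sketch of the point (toy masses, positions and units, and a deliberately crude integrator), the answer a simulation gives for a three-body system over the same simulated span depends on the step size you can afford to compute.

```python
# Toy three-body integration: the same simulated time span, with two
# different step sizes, gives noticeably different final positions.
# Masses, positions, velocities and units are arbitrary toy values.

def accelerations(pos, masses, G=1.0):
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def integrate(dt, steps):
    # three toy bodies: one heavy, two light
    pos = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]]
    vel = [[0.0, 0.0], [0.0, 1.0], [-0.7, 0.0]]
    masses = [1.0, 0.001, 0.001]
    for _ in range(steps):
        acc = accelerations(pos, masses)
        for b in range(3):
            vel[b][0] += acc[b][0] * dt
            vel[b][1] += acc[b][1] * dt
            pos[b][0] += vel[b][0] * dt
            pos[b][1] += vel[b][1] * dt
    return pos

coarse = integrate(dt=0.01, steps=1000)    # same simulated span,
fine   = integrate(dt=0.001, steps=10000)  # ten times smaller steps
print(coarse[1], fine[1])  # the light body ends up in different places
```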

And again, you are both correct with respect to brain-computer interfaces. We can make connections to neurons, and we can extract information from those neuronal signalling patterns, and much of that information appears to be holographically encoded; so once again, we can get some information, though at this stage it is fairly gross (our best arrays have only a thousand or so electrodes). So at present we have no mechanism to directly connect a brain to a computer for high-speed data transmission other than through the senses. We may have such things in a decade or so, but the problems to be solved are far from trivial.

[followed by]

Hi Wrecks
The point of replication is simply one of survival. The danger of making a mistake when modifying system critical code requires replication and testing to reduce the risk of suicide.

And I suspect you simply don’t have sufficient conceptual background to understand the arguments that William and I have made. I suspect you have substantially less than 10,000 hours of systems design and development experience, so simply do not understand what is required to make such things work in practice.

I have a certain sympathy with your thesis re creating H+ hybrids, but such hybrids are still subject to the same problems of development, in that they are so complex that ultimately the only way to test them is some version of “suck it and see”. Any modification of base-level systems carries a severe risk of suicide.

[followed by]

Hi Johnny

It is not a matter of considering an AI to be working properly.

It is a matter of this: if we are considering an AI that is able to rewrite its core operating features (its operating system or the microcode of its processors), then it would need to duplicate itself (possibly many times) in order to test that it had not introduced some catastrophic failure modality into the new upgrade. It would probably consider it wise to maintain a community of itself over a wide area anyway, to guard against loss from unforeseen catastrophic failure.

In humans, this sort of upgrade would be equivalent to replacing the DNA in all of our cells.

It is just simply a risk mitigation strategy.
Both William and I have spent substantial chunks of our lives in the software business, and have seen many claims come and go that error-free coding is possible. Both of us see clearly, from the practical experience of our lives as software developers and from the theoretical frameworks provided by Heisenberg, Goedel, Wolfram and others, that there can be no guarantee of error-free software in non-trivial systems. All systems have failure modalities.
One of the easiest risk mitigation strategies is duplicate then modify.
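
A minimal sketch of that pattern follows; the class name, config key and toy self-test are invented. The point is simply: clone the running system, apply the change to the clone, test the clone, and promote it only if it passes.

```python
# A toy sketch of "duplicate then modify" as risk mitigation: never patch
# the running system in place. Names and the self-test are invented.

import copy

class System:
    def __init__(self, config: dict):
        self.config = config

    def self_test(self) -> bool:
        # stand-in for a real test suite; here "working" just means
        # the scheduler quantum is positive
        return self.config.get("scheduler_quantum_ms", 0) > 0

def upgrade(live: System, patch: dict) -> System:
    candidate = copy.deepcopy(live)     # replicate first
    candidate.config.update(patch)      # modify only the replica
    if candidate.self_test():           # test before trusting it
        return candidate                # promote the replica
    return live                         # otherwise keep the proven system

live = System({"scheduler_quantum_ms": 10})
live = upgrade(live, {"scheduler_quantum_ms": 0})   # bad patch is rejected
live = upgrade(live, {"scheduler_quantum_ms": 5})   # good patch is adopted
print(live.config)                                  # {'scheduler_quantum_ms': 5}
```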

[followed by]

I agree that it is a really complex set of problems.

What share of available processing is devoted to dealing with immediate issues of perception and distinction and choice, vs what is used in modelling potential future scenarios to develop preference probabilities to deliver greater gains in the future?

What discount rates does one apply to future scenarios?

What algorithms does one use to test strategies, and how far does one run projections before making assessments about probabilities with respect to any given optimisation function?
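
As a minimal illustration of the discount-rate question (scenarios, payoffs and rates all invented), the same pair of plans ranks differently depending on the discount rate applied to future returns.

```python
# Toy illustration of discounting projected future scenarios.
# Scenarios, payoffs and discount rates are invented.

def discounted_value(payoffs_by_year: list[float], annual_discount: float) -> float:
    """Sum of payoffs, each discounted by how far in the future it falls."""
    return sum(p / (1 + annual_discount) ** t
               for t, p in enumerate(payoffs_by_year))

act_now = [10, 0, 0, 0, 0]    # small gain immediately
invest  = [0, 0, 0, 0, 40]    # larger gain, four years out

for rate in (0.0, 0.1, 0.5):
    print(rate, discounted_value(act_now, rate),
          round(discounted_value(invest, rate), 1))
# At low discount rates the long-range plan wins; at high rates it loses.
```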

In flying light planes, 3 minutes ahead is about optimal in an operational context (within a wider context of going somewhere). My flight instructor hammered into me: “any time the plane is somewhere your mind wasn’t 3 minutes ago, you are in deep trouble”. Once I took that on, I started winning regional and national precision flying trophies.

Should intelligence become pan-galactic, these problems would seem to be intractable. The uncertainties are profound, and fundamental.

There would seem to be an infinite spectrum available between simply experiencing being and future planning (at a potentially infinite set of levels) – both are essential – and there does not appear to be any hard optimisation as to where one draws the line. Just another of those choices available.

AI or H+ will have exactly the same issues, at some level – logically inescapable.

[followed by]

“Adult” seems far more common here in NZ than in the USA – perhaps by a full order of magnitude. And perhaps I am biased; I have not seriously scientifically measured it, it is just a personal assessment.

Anonymity is almost always an illusion. With anything on the internet, it is almost always possible to localise the source to a very small subset of humanity, very quickly and easily, by meta-level analysis of patterns over time; the only way to avoid that is to introduce so much lag as to make things a real pain in the A to use.

So it is advisable to be aware of this.

[followed by]

Hi Holly

If you haven’t already seen it, take a look at the movie “Short Circuit”.

Emotions seem to be just one of the many sets of subsystems that have evolved in complex organisms – supplying subsets of reward and punishment systems, and retaliator cooperation-stabilisation systems, to the emerging memetic ecosystem that is human consciousness.

AI is unlikely to have human emotions as such, but it is logical that it will have some analogue of them, because its evolution as an individual will be subject to similar classes of problems to our own.

[followed by]

In the explorations I have done, it seems clear that the only truly effective risk mitigation strategy with respect to epidemics is to have the possibility of isolation – perhaps for periods of up to a year, until effective treatment strategies can be developed, tested, and production ramped up to meet universal demand.

Currently this is impossible for most people.
Currently our systems do not support small groups of people being physically isolated from each other for extended periods. It is simple enough to engineer, but it needs to be a conscious, society-wide strategy.

Communications could be maintained via the internet.
Different people are prepared to accept different risk profiles, so some people would accept larger contact groups than others.

Personally – I would isolate completely – with physical contact only to immediate family.
I currently have food and water on site to allow isolation for up to 4 months (the food would get boring, but we would survive, staying home). I can and do work remotely – I don’t need to physically visit my clients to work with their systems.
