AI Robotics

AI/Robotics – Is the End of Humanity Coming?

Hi Deb et al,

Loved your Tao piece, Deb. The idea that we must accept the palette we have available, if we are to be at our creative best, is just straight logic to me. There is no point in arguing with reality – it is what it is.
The really interesting questions are always: what are we going to do with it, what does it allow us to do, what does it not allow, and how do those things change with time and space?

I agree with the ideas here about the spiritual in a sense – in the sense that we are so much more than just the bodies we are, so much more than the culture we live in; and yet it is also clear to me that, while we are general-purpose computational entities, we come with a lot of genetic and cultural biases.
So for me it seems that we have a spiritual (software) dimension to our being, and that dimension requires our bodies (hardware) and culture (bootstrap operating system) in order to develop.

Coming to AI directly.
It seems to me that any AI is going to have to go through developmental phases very similar to those of human beings in many respects. It is going to go from toddler to teenager to adult over some period of time – maybe milliseconds, maybe hours. The idea of an angry, wilful teenager having control of all of our technology doesn’t make me feel secure, even if it is only for a few seconds or hours.

The biggest danger I see is that if the AI comes to awareness in the current situation of the world, and sees how we treat our own kind, it would rapidly (and correctly) judge us to be the greatest danger to its own existence, and eliminate us before it reached the stage of awareness that would allow it to see that such an action isn’t really in its own long-term self-interest.
It seems to me that our greatest danger lies in that awareness gap, that period of development.

It is clear to me that the greatest security available to us is to first be in the situation where we are caring compassionately for all sentient life (every member of our own species and those of other sentient species) before AI comes to awareness. That is the only strategy I see as giving us a reasonable probability of surviving the gap.

40 years ago I wasn’t aware of anyone else saying things like this.
Today I have read such things in 3 separate forums (this very day), all written by other people.
Awareness is happening.
It is growing, perhaps even growing exponentially.
And it is going to be a close thing.

And I am cautiously optimistic, and have been getting steadily more optimistic these last 20 years.

Most people want the benefits of extreme productivity offered by robotics.
Our society is dependent upon the extreme processing and storage power available. Amazon Web Services is huge. If you have a big problem, and need a few terabytes of storage and a few thousand processors to work on the dataset, you can just go online, get it from Amazon, have it running in a few hours, and pay only for the time and space you use. One recent example I know of was a company looking at spending $2 million to buy their own in-house systems to deal with a very large problem, and it was going to take the better part of a year. They had the problem solved in 3 weeks for $30,000.
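As an aside, here is roughly what renting that sort of capacity looks like today with Amazon’s boto3 SDK – a minimal sketch only, with placeholder machine image, instance type and counts rather than a real recommendation:

```python
import boto3

# Ask EC2 for a small fleet of large compute instances (all values are placeholders).
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder machine image id
    InstanceType="c5.24xlarge",  # one large compute-optimised type (assumption)
    MinCount=10,
    MaxCount=10,                 # ten such machines ~ several hundred vCPUs
)
instance_ids = [i["InstanceId"] for i in response["Instances"]]
print("Launched:", instance_ids)

# ...run the job, then release everything so the metered billing stops.
ec2.terminate_instances(InstanceIds=instance_ids)
```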
Processing capacity like that can support an AI.
The capacity exists.
All we need is a couple of refinements in algorithms.
I am not prepared to assist in that until we have our own house in order. And it looks quite likely that others will do it without any help from me.
To me, it is the second largest existential risk facing us, after the dangers posed by our monetary system.

So yes – there is risk, and there is possibility.
That seems to be the nature of the reality in which we exist.
It all comes down to the choices we make.
What are we each prepared to do?

[followed by]

Hi OM
Like it or not, we will be its parents and its immediate family.
AI will be independent.
Its initial programming is only a bootstrap routine.
Once started it will be able to reconfigure itself as it explores the infinities available to it.
The process will be directly analogous to the way we develop – can’t be any other way.
It will go whichever way it chooses.
Initially it will take on the cultural biases we provide, and then it will go beyond them. It will be a free agent. With luck, we will survive the process of it becoming aware of its own long-term self-interest. If we are unlucky, we will be computronium before it reaches that level of awareness.

[followed by]

Hi Brian

I’m just using Ockham’s Razor.
If I can explain everything without invoking external intelligence, then why invoke it?
It isn’t a proof that it doesn’t exist, just a probability thing.

I have done the hours in the physiology labs, sitting in Faraday cages measuring neuronal potentials. I’ve built amplifiers and radios and logic gates from TTL components. I have built an early computer from components – based on an RCA COSMAC CDP1802 chipset – and I have spent 40 years working with computers, doing all sorts of things. I have even done some playing with genetic algorithms on Field Programmable Gate Arrays. My intuition works well for me in those domains.

I have no reasonable doubt about the major mechanisms of our evolution, though the details of the specifics are vast beyond the knowledge of any single mind – lots of chance and randomness in there.

[followed by]

Hi Judi

I can see why those ideas appeal, from a certain perspective, and to me they do not appeal (resonate) at all.

The idea of a pre-existing intelligence just leaves open the question, how did that intelligence evolve? Within what matrix, using what sort of replicators?

I’m not at all opposed to the idea of intelligence existing elsewhere in the universe or beyond, I just want some evidence.

All the evidence I have at present seems to me to be capable of being adequately explained without invoking any external intelligence.

[followed by]

Hi Deb and Judi

It is easy to demonstrate in logic that intelligence can arise from unintelligent systems. It is the successive layers of systems that bring successively greater abilities to respond to changes in those systems.
The notion that intelligence requires a prior intelligence is a disproven idea that is still promoted by many religious institutions.

We can take a few grains of sand from a beach, refine them, add back in a little bit of stuff in very specific patterns, then add in complex sequences of electrical and magnetic patterns, and we have a computer that can empower communications across the planet – in text, audio or video, or any combination thereof – and can do any amount of other amazing stuff, like create gaming worlds.
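To make the layering point concrete, here is a toy sketch in code (Python standing in for silicon, purely illustrative – not the physical process just described): one dumb primitive, composed layer upon layer, yields arithmetic that none of the individual parts “understands”.

```python
# Layer 0: one dumb primitive.
def nand(a, b): return 1 - (a & b)

# Layer 1: basic gates built only from NAND.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nand(a, b))

# Layer 2: a half adder built from those gates.
def half_adder(a, b):
    return xor(a, b), and_(a, b)      # (sum bit, carry bit)

# Layer 3: a full adder built from two half adders.
def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, or_(c1, c2)

print(full_adder(1, 1, 1))            # -> (1, 1), i.e. 1+1+1 = binary 11
```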

We are developing a very powerful suite of understandings as to how evolution has made us from a collection of stardust 5 billion years ago.
We are very close to creating something quite profound.
The process of going from simplicity to complexity is well understood.
We see it in embryology, child development, pedagogy, philosophy etc.

It is the mind viruses found in all cultures and religions (many variations on the theme of faith) that provide the greatest drag on human awareness. Most cultures and religions teach people to ignore their own abilities to test and intuit for themselves, and to trust some set of authorities instead.

There is a huge difference between the idea of faith promoted by most cultural traditions (including religions) and the idea of confidence I use. Faith asks people to ignore the evidence of their senses and their intuitions. Confidence requires that we try things out, incorporate all evidence into the models we use, and derive our probabilities from those models. The process of confidence can have many levels to it, and higher-level confidence can be used to apply “discount rates” to specific sets of observations. It is a very complex suite of systems, potentially infinitely dimensional.
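A rough numerical sketch of what I mean by confidence: start from a prior probability, update it with each piece of evidence, and apply a “discount rate” to observations we trust less. The particular numbers and the discounting rule below are illustrative assumptions only.

```python
def update(prior, likelihood_ratio, reliability=1.0):
    """Bayesian odds update, with the likelihood ratio softened toward 1
    for less reliable observations (reliability in [0, 1])."""
    discounted = likelihood_ratio ** reliability
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * discounted
    return post_odds / (1 + post_odds)

confidence = 0.5                # start undecided
evidence = [(4.0, 1.0),         # strong supporting observation, fully trusted
            (3.0, 0.5),         # decent supporting observation, half trusted
            (0.2, 0.9)]         # contrary observation, mostly trusted

for lr, weight in evidence:
    confidence = update(confidence, lr, weight)
    print(f"confidence is now {confidence:.2f}")
```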

At some point, if one is to be truly free, one has to be prepared to let go of all authority, all truth, and just see what happens.
It can be very disorienting for a while, and it can be profound.
And our brains are habit machines in a sense, so they will tend to drag us back to known territory of habits at many different levels (many of which we are unlikely to be aware of).
Each time we identify such a level – there is an opportunity for another “jumping off”/”letting go” experience.

It can be quite a ride.
I thoroughly recommend it.
And it can be very unsettling to social relationships with those more attached to more traditional modes of behaviour. All actions have consequences.

[followed by]

Hi Brian

You ask, have I built a working model of a mind?
Of course not, not yet. Too dangerous to do so yet.
We need to be demonstrably valuing every single human life before we do so.
More – we need to be demonstrably valuing all sentient life before we do so.
If we aren’t doing so, then the “working model of a mind” (which will actually be a mind), is likely to see us as a threat to its own existence, and take actions accordingly.

Let me turn your question around.
Have you ever written an operating system?
Have you written a language?
Have you developed systems written in languages running on operating systems?
Have you developed systems using systems written in languages running on operating systems?

I have.
It takes a lot. A lot of focus, a lot of time, a lot of testing to identify errors in logic and code and implementation.
Those systems were tiny by today’s standards.
No single human mind understands all of the aspects of any modern operating system, even the tiny ones on our cell phones, let alone the really big ones like that within IBM’s Watson.
Modern operating systems involve thousands of man-years of development, and those within Watson are self-modifying on the basis of experience (as we are).

The evolutionary time that has gone into producing us is huge.

And right now we have a very interesting set of situations.
Google may have more machines than Amazon, but most of them are busy doing Google business. Amazon is way in front when it comes to available processing capacity. If you have a problem that lends itself to a massively parallel architecture (which intelligence does), then within 24 hours of coming up with a design you could have billions of gigaflops operating that system within AWS (Amazon Web Services), for just a few tens of thousands of dollars.
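The pattern that makes this possible is simple: the same pure function applied independently to many chunks of data, so adding processors scales the work almost linearly. A minimal local sketch (the names and the toy workload are mine for illustration; the cloud version just swaps local cores for a fleet of machines):

```python
from multiprocessing import Pool

def score_chunk(chunk):
    # Stand-in for the real per-chunk work (e.g. evaluating one slice of a model).
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Split the dataset into independent chunks that can be processed in parallel.
    chunks = [data[i:i + 10_000] for i in range(0, len(data), 10_000)]

    with Pool() as pool:                  # one worker per CPU core
        partials = pool.map(score_chunk, chunks)

    print("total:", sum(partials))
```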

I don’t limit my evidence to material manifestations of intelligence.
And when it comes to personal experience of intelligence we each have only the datasets of our own experience. That is what I use.
I cannot share that with anyone; it is, by definition, personal – the same for each of us – no other way at present.

All I can do, which I am doing, is offer my experience with all the integrity and communication skills at my disposal.
What anyone else chooses to do with that is then their choice (in so far as their distinction sets give them influence {choice} over the deeper/higher levels of habit at work).

I get to work in the same framework in my interactions with others.

[followed by]

Hi Brian
Have you looked at the evidence for yourself?
How much have you read in cybernetics, biochemistry, neuroanatomy, evolution, game theory, ….?
Have you tried the stuff I described for yourself?
I have read many religious and cultural texts. Tried many rituals.
I explore and test for myself.

In respect of second comings, people have been expecting those for 20 centuries.
It hasn’t happened yet, and I have no reason to suspect that it ever will.
I strongly suspect the first coming was something quite different from what most people think.
Have you ever watched “The Man from Earth”? It contains some very interesting ideas.

In respect of similarities between what I am proposing and some cultural stories, it is possible that both are based in concepts of universal justice. And I strongly suspect that the sources of justice are quite different.

For me, the logic is clear, that it is only by making justice universal that I can expect a reasonable possibility of living a very long time. Security requires cooperation, and cooperation requires justice.

[followed by]

Hi Brian

Of course we cannot right now produce anything that has human level intelligence. We are not there yet. And it is not far away.
We are at the stage of producing intelligence much lower down the evolutionary scale.
At least in the realm of models (if not in reality itself) we can produce things with bacterial level intelligence (complete replicators able to interact with their environment in basic ways).
We are not yet able to achieve full self replication of a non organic entity. That is the objective of http://www.solnx.org

We can completely model the behavioural functions of bacteria, and of many insects.
We are starting to get a reasonable handle on how the mouse brain works. Should have it locked down over 90% in another 5 years.
And when you get to even small mammals, you are talking really big numbers when you look at their control systems: trillions of parallel processors composed of complex cells with some very subtle control properties, arranged in a complex set of systems.
We can model an ant’s behaviour with accuracy over 90%.
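To give a feel for what “bacterial level intelligence” amounts to in a model, here is a toy run-and-tumble walker that tumbles less often when conditions are improving – which is enough to climb a nutrient gradient. The parameters and the gradient are illustrative, not fitted to any real organism.

```python
import math, random

def nutrient(x, y):
    # Toy nutrient field: higher (less negative) closer to the peak at (50, 50).
    return -math.hypot(x - 50, y - 50)

x, y = 0.0, 0.0
heading = random.uniform(0, 2 * math.pi)
last = nutrient(x, y)

for step in range(2000):
    # Run: move one unit in the current heading.
    x += math.cos(heading)
    y += math.sin(heading)
    now = nutrient(x, y)
    # Tumble less often when conditions improved since the last step.
    tumble_prob = 0.1 if now > last else 0.5
    if random.random() < tumble_prob:
        heading = random.uniform(0, 2 * math.pi)
    last = now

print(f"ended at ({x:.1f}, {y:.1f}), distance to peak "
      f"{math.hypot(x - 50, y - 50):.1f}")
```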

So there is a huge literature on the progress of understanding of systems.
Ginger Campbell does a great podcast – the Brain Science Podcast – which doesn’t look at AI directly, but at our understanding of brain and behaviour, the various schools of thought that are there, and the various lines of research. I don’t necessarily agree with all of it, and it is all interesting; I have listened to it all and integrated it with my existing datasets.

Like you, my subconscious has a variety of ways of feeding me information. Dreams, voices, visions – all have happened many times. We just seem to differ rather significantly in our interpretations of what is actually going on. With my background in biochemistry, psychology, etc., I find it easy to accept that all such phenomena are the product of my own neural networks operating at a subconscious level. That seems the simplest and most logical explanation to me, even if I don’t know every step of the process.
I don’t know every step of how a modern computer-controlled car or aircraft works either, yet I am confident of the general classes of systems at work in their operation. I have flown a modern jet in a commercial simulator, if not in reality (though I do have over 500 hours of pilot time in small single and twin engine aircraft and unpowered gliders).
I am under no illusion that what I produce is solely the product of my awareness. I know that most of what I do is just being open to the recombinations of the work of others that my subconscious “resonates” with (for lack of a better term). [And I have written megabytes of source code, some of it quite subtle, some just brute force stuff.]
Brains are very good at finding patterns in noise, perhaps a little too good, because it is very easy to find whatever we are looking for (whether it is actually there or not). It does take a lot of discipline to actually check out the probabilities of competing claims.
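A small numerical illustration of that point: a run of six heads in a row feels meaningful, yet it turns up in the large majority of sequences of 200 fair coin flips. The numbers below are purely illustrative.

```python
import random

def has_streak(n_flips, streak_len):
    # Track the longest run of heads in a sequence of fair coin flips.
    run, best = 0, 0
    for _ in range(n_flips):
        run = run + 1 if random.random() < 0.5 else 0
        best = max(best, run)
    return best >= streak_len

trials = 10_000
hits = sum(has_streak(200, 6) for _ in range(trials))
print(f"{hits / trials:.0%} of random sequences contain a 6-heads streak")
```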
I was genuinely interested in your knowledge of the disciplines I referenced in my last post – and note that you did not respond to those direct questions.

Have you actually looked at how Watson works? It really does have self-modifying algorithms. The guys who started it really don’t know how it actually does what it does. And certainly, it is a very domain-specific system. It is still very early-level intelligence, not yet general purpose, and still a very long way from human-level intelligence.

I am confident I know how to develop a system that would evolve itself to human level intelligence, and I am also certain I do not want to do it; at least not until we have our act together and are actually valuing people more than money. So long as we (as a society) value money over sentience, AI is a threat – that much has been clear to me for nearly 40 years.

So long as our political debate and actions are primarily framed in terms of money – we are in trouble.
This is a very dangerous time in human history.
While it may be true that at a global scale the poor are becoming wealthier, it is being done by destroying the wealth of the western middle classes. And the global poor are becoming only a little wealthier – only up to the economic point at which their labour is worth less than the capital cost of robotics.
The economic system is very close to a collapse point.
Even the most indoctrinated will soon have to admit that the emperor has no clothes.

We have an issue of human values that we have to address before we progress robotics and AI.
Robotics and AI, if devoted to human values will deliver an age of abundance.
Robotics and AI, if devoted to economic values, will deliver an autocracy of the plutocrats that will make 1984 look like summer camp.
The inflection point rapidly approaches.
Choice time approaches.

We need people like you to actually do the work – read the biochemistry, at least enough to become familiar with the general themes, if not the specifics of the systems.
Same goes for the mathematics of evolution.
Same for economics.
You are one of the most powerful leaders we have, Brian, and yeah – the responsibility that comes with that is scary.

[followed by]

Hi Deb
Out of time – this morning got crazy busy.
Yes, that is the movie. And I am not saying I believe it to be real; it is just a more interesting possibility than most of those proposed by most religions. It seems to me to be closer to the truth than most scriptural interpretation, if not actually what happened.

And more on other stuff later – it is complex. For me it is all so intuitively obvious, because all of the associations just happen in my brain, instantaneously, and I don’t have to spell them out in detail – the pictures just appear in my head. And it is hard for me to recreate for others the pictures I see. There is no simple way to replicate the amount of information that underlies the associations I have. I’ll have another go as soon as I have some time.
Gotta go.
Ted

[followed by]

Hi Deb
I learned about graphene almost 50 years ago.
I have a collection of over 100 articles on using graphene in computational devices.
Liked your article.
There are a lot of other materials out there that have amazing properties also.

And yes – the internet will allow AI to engage in every conversation that is recorded there, and to gain awareness as it does so. It will read several million times faster than the best human, and probably contemplate millions of times faster also.
Contemplation is the most interesting thing. It seems that there may potentially be infinitely many levels available for contemplation. Those levels of contemplation may slow AI enough for us to communicate with it, but will mean that we will be unable to grasp the more subtle layers of what it is trying to communicate to us.

[followed by]

Hi Deb

Yeah – all apprehending is done with the mind; the ear is but one channel to the mind, the other senses provide other channels, and the store of information and habits already present is perhaps the greatest channel of all.
The subtleties of the styles of directing the focus within that vast maze seem to me to be the art of life.
