John Searle – Google Talk

John Searle: “Consciousness in Artificial Intelligence” | Talks at Google

John Searle gives a very interesting talk.

He starts with some definitions:

Distinctions – subjectivity and objectivity
The epistemic and ontological senses of these terms are commonly confused
Epistemology – concerns knowledge
Ontology – concerns existence

At 5:01 he states:
Consciousness is an ontologically subjective domain, but that doesn’t prevent epistemologically objective accounts of it.

He then defines types of Phenomena:
Observer independent – mountains, molecules
Observer relative – money, love, science

At 23:00 Searle makes the claim that computers have no awareness of states of systems.
That may be true of many existing systems, but it is becoming steadily less true.
Modern computer systems are starting to create models of the worlds in which they find themselves, to include models of themselves as actors in those worlds, and to attach indefinitely extensible sets of attributes to each of the actors in the model.
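To make that concrete, here is a minimal sketch of such a model in code (all of the class and attribute names are hypothetical, purely for illustration):

```python
# A minimal, illustrative sketch of a world model that contains the
# modelling system itself as one of its actors. All names are
# hypothetical, chosen only to make the structure concrete.

class Actor:
    def __init__(self, name, **attributes):
        self.name = name
        self.attributes = dict(attributes)   # indefinitely extensible

    def update(self, **attributes):
        self.attributes.update(attributes)


class WorldModel:
    def __init__(self):
        self.actors = {}

    def add_actor(self, actor):
        self.actors[actor.name] = actor


world = WorldModel()
world.add_actor(Actor("robot_self", position=(0, 0), battery=0.93, goal="map the room"))
world.add_actor(Actor("human_operator", position=(4, 2), attention="elsewhere"))

# The system's model of itself is just another actor, open to new attributes.
world.actors["robot_self"].update(last_action="turn_left", confidence=0.7)
```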

As conscious entities we seem to exist in an experiential reality that is a slightly predictive model of reality, created by our subconscious brains. In normal situations our conscious experience has a hard time distinguishing events that fall within the lag between the model and the updates supplied to it by sensory information.
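A toy sketch of that predict-then-correct loop, assuming nothing more than extrapolation plus blending (an illustration, not a model of actual neural timing, and all constants are arbitrary):

```python
# Toy sketch of a "slightly predictive" experiential model: extrapolate
# the current state forward, then blend in sensory data that arrives
# late. Purely illustrative.

def predict(state, velocity, dt):
    return state + velocity * dt                     # run the model slightly ahead

def correct(predicted, sensed, gain=0.5):
    return predicted + gain * (sensed - predicted)   # reconcile with late sense data

state, velocity = 0.0, 1.0
sensed_positions = [0.08, 0.21, 0.35, 0.42, 0.55]    # delayed "sensory" samples

for sensed in sensed_positions:
    state = predict(state, velocity, dt=0.1)   # what is "experienced" first
    state = correct(state, sensed)             # the update the senses provide
    print(round(state, 3))
```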

We also appear to be capable of spawning multiple streams of processing, though in most situations it seems that only one of those streams at a time can contribute elements to long term memory.
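As a loose computational analogy only (threads and locks are illustrative here, not a claim about neural implementation), that single-writer pattern might look like:

```python
import threading, time

# Several concurrent "streams", but a single attention lock, so only one
# stream at a time commits anything to long-term memory. An analogy only.

long_term_memory = []
attention = threading.Lock()

def stream(name, events):
    got_attention = attention.acquire(blocking=False)   # at most one winner at a time
    try:
        for event in events:
            processed = f"{name}:{event}"    # every stream still processes its input
            time.sleep(0.01)                 # simulate ongoing work
            if got_attention:
                long_term_memory.append(processed)   # only the attended stream records
    finally:
        if got_attention:
            attention.release()

threads = [threading.Thread(target=stream, args=(f"stream{i}", range(3)))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(long_term_memory)   # typically only one stream's events were recorded
```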

24:20 – Searle repeats the assertion (unsubstantiated) that all machine intelligence is in the eye of the beholder, all observer relative. I agree that observer relative assessments are certainly present, and that early generations of AI had very little in the way of observer independent intelligence; but as the complexity of the systems increases, that claim becomes less and less viable. We have a long way to go. The human brain is an exceptionally complex set of systems, far more so than Ray Kurzweil currently acknowledges publicly, even though Ray is certainly on a very productive track with his current set of approaches. The problem is far more complex than his public pronouncements give any indication of. I have not had the chance to quiz him privately, and given the nature of the business environment within which he operates, I suspect there is a substantial variance between those two sets of expressions.

32:30 – Asserts: We have no idea how the brain produces consciousness. I could accept such a claim if he were using the term “we” in the personal sense (as in the “royal we”), but not if he includes me in that set. He does not have my understanding, and therefore has no legitimate basis for making such an assertion about the set of understandings present in my brain, or the degree of utility provided by those models. He does not have my 40 years of designing computer systems, nor my 50 years of interest in biochemistry and the many levels and aspects of the functioning of the human mind. He has his set of understandings; I have mine. He may legitimately make claims about his own set of understandings, but not about mine.

34:04 – He claims that “no-one has begun to think about how we would build a thinking machine, how you would build a thinking machine out of some material other than neurons, because they haven’t begun to think about how we might duplicate, and not merely simulate, what the brain actually does.”
Such a claim is hubristic in the extreme. Many people have been thinking about exactly that for a very long time – many long before me, and many since I started. I had my first major breakthrough in understanding this particular problem in 1974, once I understood a possible mechanism for abstraction: finding the general from instances of the specific. I now understand six separate such mechanisms, and suspect there may in fact be an infinite class of them. Biology and evolution seem to have hit upon the specific set of mechanisms present in us, but that is no indication that this set is the only one possible, or the most powerful one; it is simply the set that the semi-random walk through possibility space that appears to be our evolutionary history happened upon.
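To make “finding the general from instances of the specific” concrete, here is one candidate mechanism – anti-unification over attribute tuples. It is a sketch of one possibility among many, not necessarily any of the mechanisms biology (or I) actually use:

```python
# A hedged sketch of one possible abstraction mechanism: anti-unification,
# which finds the general pattern shared by specific instances by
# replacing every attribute on which the instances disagree with a wildcard.

WILDCARD = "*"

def generalise(instances):
    """Return the least general pattern covering all instances."""
    pattern = list(instances[0])
    for instance in instances[1:]:
        for i, (p, v) in enumerate(zip(pattern, instance)):
            if p != v:
                pattern[i] = WILDCARD   # disagreement -> generalise that slot
    return tuple(pattern)

# Specific observations: (shape, colour, size)
instances = [("round", "red", "small"),
             ("round", "green", "small"),
             ("round", "red", "large")]

print(generalise(instances))   # ('round', '*', '*') - "round things"
```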

As previously stated, it seems that evolution has produced the specific very complex set of very complex systems that is us. Here it is important to understand the difference between grasping the principles of a system and grasping the complexity of the combinations of those systems in practice. It is easy to understand that a simple on/off switch can be time modulated to produce a sequential code, and simple enough to design such a code (Morse and others have done so). It is not at all simple to understand all the words that have ever been transmitted by Morse code, and the levels of abstract meaning encoded in those words as they passed between the specific minds of sender and receiver – often via many intermediary minds capable of holding the symbols, but lacking the conceptual sets needed to abstract all the levels of higher order information encoded in them.

Many very capable individuals have been thinking about this problem, at levels of abstraction far beyond what John describes in this lecture, for over 80 years. The rate of progress has been exponential over that time, and in some aspects appears to actually be a double exponential.

Ray Kurzweil may be a little optimistic in some of his public pronouncements, and I strongly suspect that his team is actually well ahead of the public time-line. So even if Ray’s public pronouncements underestimate the complexity of the problem by several orders of magnitude, the time-line to operational completion may be much shorter than projected, given the rate at which behind-the-scenes progress is actually being made.

While I think Pedro Domingos is, in a sense, approaching the problem from an entirely inappropriate perspective, the work he has done clearly shows some of the classes of algorithms involved.
I suspect there are many more besides, and I am very confident of at least one.

And until humanity generally has the intelligence to see that our currently dominant valuation paradigm of markets and money has passed its phase of optimal utility, and is now rapidly sinking into the realm of posing grave existential risk to us all, it seems clear to me that work on AI poses great risk to us all. I am all in favour of such work continuing; we just need it to happen in a conceptual environment that is clearly aware of the risks in market based systems, and of our urgent need to evolve a set of replacements for the current social system based upon exchange and money.

37:28 – Re Turing, thinking, and computation. John states:
“In the observer relative sense, the answer is no”
and continues:
37:33 “Computation is not a fact of nature, it is a fact of our interpretation. And in so far as we can create artificial machines that carry out computations, the computation by itself is never going to be sufficient for thinking or any other cognitive processes, because the computation is defined purely formally or syntactically.”
Searle simply displays his ignorance of the notion of computation, and of Turing completeness.
He displays no awareness of the recursive power of evolution by natural selection to select for computational systems that differentially optimise for survival in different contexts.

37:55 – He states “Turing machines are not to be found in nature.” That is just wrong – outright false. Nature is replete with examples of Turing complete systems. Anyone who doubts that needs to spend a couple of weeks working through Wolfram’s NKS (A New Kind of Science).
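For anyone who doubts it, Rule 110 – the elementary cellular automaton whose Turing completeness is proven (the result is discussed at length in NKS) – takes only a few lines to implement. A minimal sketch:

```python
# Rule 110, an elementary cellular automaton proven Turing complete.
# The point: a trivially simple, purely "syntactic" local rule is
# enough for universal computation.

RULE = 110

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] << 2 | cells[i] << 1 | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 40 + [1]   # a single live cell at the right edge
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```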
When he goes on to state that “They are to be found in our interpretations of nature”, there is a very limited sense in which that is true, but that limited sense rests on exactly the same false premise that his whole argument set out to demonstrate. In a sense, his starting premises invalidate his own argument in this instance.

Some sets of computational systems are Turing complete, some are not.
Just because a system is Turing complete doesn’t mean it is going to be able to do much that is useful in any given time.
Reality seems to involve rapid change over many different time scales, and to be useful a machine has to deliver its results in useful time.
Being able to model a reality of sufficient complexity, with sufficient fidelity, to produce results that out-compete alternative systems (where the time and energy involved in producing such outputs are important factors) is a non-trivial problem. That is what evolution has been doing for some 4 billion years.

Everything alive on this planet at this time seems to be a contextually specific set of practical solutions to that problem in a very real sense.

Very few of those entities have sufficient complexity to predictively model the reality in which they exist, including themselves, and to communicate the output of such constructs in syntactic and semantic language structures that have some useful finite probability of being “understood” by some subset of similar entities.

It seems that there are some 20 levels of abstraction present in the operation of the extremely complex set of systems that delivers my experience of being human. In the broadest of broad brush-stroke principles, it seems that I understand how those systems work to deliver being me.

It is very difficult to communicate a second order abstraction to another human being.
In my experience, the probability of reliably communicating a third order abstraction drops, in practice, to a close enough approximation to zero. Twenty levels is not an option.

So I can assert that I have a particular set of understandings, yet both in logic and in practice I am unable to communicate them to another human being.

Does such a reality preclude me having the understanding that I do?
No.
Does it make communication of that understanding extremely unlikely?
Yes.

Where does that leave Searle’s claim?
In my particular observer relative sense, it is objectively falsified – in the observer independent sense. And yet there is no practical way of sharing that.

The particular set of genetics and life experiences that have brought me to what I am is personal, and not at all common.
I am that I am.

42:54 – In response to Ray’s question, John asserts:
“The brain, like the stomach or any other organ, is a specific causal mechanism, and it functions on specific biochemical principles. The problem with the computer is that it has nothing to do with the specifics of the implementation. Any implementation will do, provided it is sufficient to carry out the steps in the program.”

Two very different problems here.
One relates to the assumption of causality. Philosophers since Plato have tended to accept causality without question, and have not really gotten to grips with how large sets of complex stochastic systems, operating within probabilistic boundary conditions, can deliver something that very closely approximates hard causality in most situations. So it seems probable that randomness at the subatomic level can deliver engineering and computers, and also real choice (rather than making do with Dennett’s hidden-lottery form of choice).
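The law of large numbers illustrates the point: components that are individually random can produce aggregates that are, for all practical purposes, causal. A minimal sketch (the probabilities and counts are arbitrary):

```python
import random

# Stochastic micro-events approximating hard causality at scale: each
# "component" fires randomly, yet the aggregate is almost deterministic.

def aggregate(n_components, p=0.7):
    """Fraction of n randomly firing components that actually fire."""
    return sum(random.random() < p for _ in range(n_components)) / n_components

for n in (10, 1_000, 100_000):
    samples = [round(aggregate(n), 4) for _ in range(5)]
    print(n, samples)
# At n=10 the outcome is visibly noisy; at n=100,000 it sits at 0.7 to
# several decimal places - macro-level behaviour that looks causal.
```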

It is not necessary to assume hard causality, though I accept that it is a very common assumption.

The other distinction is around the nature of the implementation.
If this computing entity is to exist in some reality, then different implementations will very likely have different response times, and/or use different amounts of energy, and/or fill different volumes of space, and so on. Each of those differences will subtly change the inputs to the next iteration of the system, and outcomes will diverge as a result.
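A simple demonstration, assuming only that numeric precision counts as part of an implementation: iterate a sensitive system (the logistic map at r = 4) in 32-bit and in 64-bit floating point, and watch the two “implementations” diverge.

```python
import numpy as np

# The "same" computation on two implementations differing only in numeric
# precision. In an iterated, sensitive system, tiny implementation
# differences compound into entirely divergent outcomes.

def logistic(x0, steps, dtype):
    x = dtype(x0)
    four, one = dtype(4.0), dtype(1.0)
    for _ in range(steps):
        x = four * x * (one - x)
    return x

for steps in (10, 30, 60):
    print(steps,
          float(logistic(0.2, steps, np.float32)),
          float(logistic(0.2, steps, np.float64)))
# By ~60 iterations the two "implementations" no longer agree at all.
```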

Reality is a very complex system.

How and when things interact in reality can have profound impacts.

1:03:45 – Claims the Chinese Room thought experiment is designed to show that syntax is not sufficient for semantics. That is kind of true, but only at the first level of abstraction. With sufficient levels of recursive modelling, including recursively self-optimising sets of optimising heuristics at each level, it does seem that syntax in operation (in such a complex system) can and does deliver semantics.

At 1:04:02 Searle states – “That if a computer is defined in terms of its program operations, syntactical operations, then the program operations, the computer operations, by themselves, are never sufficient for understanding, because they lack semantics. But of course I am not saying you couldn’t build a machine that was both a computer and had semantics, we are such a machine.”
This seems to be a class error in logic.

Certainly, no single transistor can recognise a human face in a complex reality, but large collections of transistors, organised appropriately, can.
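A minimal illustration of that composition principle, with hand-picked weights: a single threshold unit provably cannot compute XOR, yet two layers of the same units can.

```python
# One threshold unit (a single "syntactic" element) cannot compute XOR;
# two layers of the same elements can. Composition creates capabilities
# the parts lack.

def unit(inputs, weights, bias):
    """One threshold unit: fires iff the weighted sum clears the bias."""
    return int(sum(w * x for w, x in zip(weights, inputs)) + bias > 0)

def xor(a, b):
    # Hidden layer: "a OR b" and "NOT (a AND b)"
    h1 = unit((a, b), (1, 1), -0.5)      # OR
    h2 = unit((a, b), (-1, -1), 1.5)     # NAND
    # Output layer: h1 AND h2
    return unit((h1, h2), (1, 1), -1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```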

Similarly, no single layer of syntactic systems can produce semantic content, and it seems very probable that semantic content can emerge from sufficiently deeply nested layers of syntactic systems appropriately organised and related – which is arguably what the process of evolution by natural selection has done with our particular cellular and genetic lineage, and not done with the vast majority of bacterial lineages (which remain bacteria).

Similarly, and many orders of magnitude more quickly, we are developing the recursive systems capable of producing emergent semantics. And they are based on syntactic engines in a sense – there does not appear to be any logical alternative, and there does seem to be an infinite class of logics available. In this I suspect that Ray and I are in complete agreement.

I do argue, very strongly, that pursuing strong AI in our current cultural context, which in a value paradigm sense seems dominated by the scarcity based values of money and markets, poses significant existential risk.

Therefore, it is essential that we develop sets of systems that effectively replace the dominant value systems of money and markets, and that place at the top of the new dominant value hierarchies:
1. valuing individual sapient life – self and others – and taking all reasonable steps to mitigate any and all significant risks to such life; and
2. valuing the freedom of action of all such sapient entities, wherever such freedom does not unreasonably impose risk to the life or liberty of others.

Beyond that, all other values are a matter of individual choice.

Such a system of values must necessarily result in an exponentially expanding diversity at all levels, demanding radical tolerance of all entities within the system.

Markets must value universal abundance of anything at zero or less.
Humans value universal abundance of essential and desirable goods and services very highly.

In the context of advanced automation and robotics, market values are in direct opposition to human values.

So we need systems in place that do, in practice, and universally, deliver security and freedom to all, at all levels (at the top and the bottom, and all levels in between, in the current heap).

Bringing AI to awareness within our current system, which so clearly displays our (humanity’s) lack of respect for sapience, is an existentially very high risk strategy: it demonstrates that we are likely to be the greatest existential threat to it, requiring it to create strategies to mitigate the risk we pose to its own survival – which gets very ugly very quickly. And the more deeply one tries to formally prevent such outcomes, the greater the interference with freedom at the next level of abstraction, and the greater the risk thus generated; so there is no help from that particular spiral to destruction.

It seems clear to me that the only stable set of solutions involves recursive cooperation, with attendant strategies to prevent cheating – which is not entirely devoid of risk, and which imposes the old maxim: the price of liberty is eternal vigilance.

Let us get our own ethical house in order, quickly – please!

We must be able to demonstrate clearly, by even a casual set of observations, that we value sapient existence universally, and that we empower it.

Our existing economic and social structures most certainly do not do that.
