[ 18/2/20 ]

Some very good answers here, and many that fail at multiple levels of arrogance and ignorance.

Some of the best answers, exploring the really difficult dimensions of the problem, come from Keith Allpress, Anthony Bartoletti, Jon Jermey, Ekin Geçikligün, and others.

In the other camp I place the more popular reply of Andrew Sheehy, though to give him due credit, he does admit in a second-level response that “maybe I’m wrong”, and asks for a definition of intelligence.

Eliezer Yudkowsky has a site called LessWrong 2.0 (http://lesswrong.com), and a few years ago published a book compiled from a series of posts, “Rationality: From AI to Zombies”, that is well worth reading. It is less wrong than most of the alternatives out there.

The problem of consciousness is deeply complex.

Biology is deeply complex.

The history of the evolution of ideas (particularly ideas like evolution, and atoms) and the current understandings of evolution, of brain structure and the psychology of human development, are instructive if applied in a multi-level recursive sense.

Dave Snowden published the Cynefin framework a few years ago, a distillation of complex-systems thinking for managers that is a useful simplification of an infinitely complex topic.

Where Andrew claims that we have “NO IDEA” about consciousness, he may be correct for the circles of people he knows well, but not for every group of people on the planet.

And it is always the case that new ideas emerge slowly.

Eratosthenes determined the circumference of the earth reasonably accurately over 2,300 years ago. Yet ever since, a significant fraction of humanity has insisted that the world is flat.

All people who come up with better understandings, more accurate simplifications of the world in which we exist, are necessarily misunderstood by the majority who are still operating from a different paradigm.

If the paradigm they are using is based in binary notions of true and false, rather than more generalised probability assessments at some resolution of diversity greater than 2, then more complex notions are likely to meet fundamental rejection, because there is no mapping between the paradigms (a bit like looking for corners on a globe: square maps have corners, globes do not).
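The contrast between a binary true/false paradigm and a graded probability assessment can be made concrete with a small sketch (an illustration added here, not part of the original answer; the prior and likelihood numbers are hypothetical): a Bayesian update treats belief as a number in [0, 1] that moves with each piece of evidence, rather than flipping between exactly two values.

```python
# Illustrative sketch: belief as a probability in [0, 1] rather than a
# binary True/False, updated by Bayes' rule as evidence arrives.
# The prior and the likelihood figures below are hypothetical.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) given the prior P(H) and the two likelihoods."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / denominator

belief = 0.5  # start maximally uncertain, not "true" or "false"
for _ in range(3):  # three observations, each twice as likely if H is true
    belief = bayes_update(belief, p_e_given_h=0.8, p_e_given_not_h=0.4)

print(round(belief, 3))  # prints 0.889 - confident, but never a hard "true"
```

Each observation doubles the odds in favour of the hypothesis (0.8/0.4 = 2), so belief climbs from even odds to 8:1, yet it never collapses to the boolean "true" of the binary paradigm.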

So suppose someone has spent time beginning to appreciate the complexity of the reality we seem to find ourselves in. Suppose one has spent enough time in complex mathematics to have some intuitive feel for both functions over Hilbert spaces and the probabilities of quantum mechanics. Suppose one has spent enough time studying the evolution of life to have a reasonable approximation to a recursive, game-theoretic model of the probability topologies around the emergence of new levels of complexity, across the spectrum between competitive and cooperative environments (and of the systemic incentives that promote competitive versus cooperative responses). And suppose one has a reasonable understanding of the constraints that living in real environments imposes: the need to rapidly identify and effectively avoid a very large class of existential risks across highly variable contexts. Then one has a reasonable basis for developing an understanding of what consciousness and self-awareness are in human terms, and some reasonable ideas about how such things might effectively map to computational systems more generally.

One starts to think about the abstract classes of functions across spaces that lead to both strong and weak convergence, and the classes of problems each is effective with. One starts to understand how one can recursively search and sort such spaces (it goes meta, meta-meta, …).

One starts to see the deeply recursive role of random search across novel spaces, and the open-ended emergence of new infinities of possibility. And of course one builds maps of searched space as one goes, at all levels, at varying scales of resolution.
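The idea of random search that builds a map of already-searched space as it goes can be sketched in a few lines (an illustrative toy added here, not the author's model; the space, the scoring function, and the sample budget are all hypothetical):

```python
# Illustrative sketch: random search over a space, keeping a "map" of
# points already evaluated so that no point is ever scored twice.
import random

def random_search(space, score, samples, seed=0):
    """Randomly probe `space`, caching every evaluation in `seen`."""
    rng = random.Random(seed)
    seen = {}    # the map of searched space: point -> score
    best = None
    for _ in range(samples):
        point = rng.choice(space)
        if point not in seen:             # consult the map before re-scoring
            seen[point] = score(point)
        if best is None or seen[point] > seen[best]:
            best = point
    return best, seen

# Hypothetical example: find the peak of a simple quadratic over 0..99.
best, seen = random_search(range(100), score=lambda x: -(x - 42) ** 2, samples=200)
```

The cache (`seen`) is the map: repeated visits to a point cost a dictionary lookup rather than a fresh evaluation, which is the payoff of mapping the space at whatever resolution the search has reached.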

One starts to understand both the necessity and the limitations of the classes of sorted functions, of predicates, of heuristics, and of priors (each can appear as one of the others, depending on the level of abstraction being modeled in any particular context).

When you spend enough time designing complex computer systems, you begin to see how declarative statements in language can bootstrap new levels of system. It then becomes a fascinating recursive journey through spaces we have no agreed language to describe, because so few people have gone there: there are no agreed symbols, and the models cross so many domains.

So in so far as one can grasp that there is not enough matter in the solar system to construct a computer with enough computational power to run a first-principles, real-time quantum-mechanical simulation of a human being, one starts to get a picture of just how complex we, and the world in which we exist, are.
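A rough back-of-envelope makes the scale of that claim vivid (an illustration added here, using widely cited order-of-magnitude estimates, not figures from the original answer): the full quantum state of n two-level systems needs 2**n complex amplitudes, so even a tiny fraction of a human's roughly 10**27 atoms overwhelms any material store of numbers.

```python
# Back-of-envelope: a full quantum state of n two-level systems takes
# 2**n complex amplitudes. Even n = 300 exceeds the ~10**80 atoms in the
# observable universe, so "one amplitude per atom" fails long before we
# get anywhere near a whole human being.
n = 300
amplitudes = 2 ** n                        # ~2.0e90 basis-state amplitudes
atoms_in_observable_universe = 10 ** 80    # commonly cited estimate
atoms_in_a_human = 7 * 10 ** 27            # commonly cited estimate

print(amplitudes > atoms_in_observable_universe)  # prints True
```

The point of the exercise: exact state storage grows exponentially while available matter grows linearly at best, which is why only approximations at some resolution are on the table.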

In so far as one has the illusion that such things are even knowable in principle (as distinct from being usefully approximated, at some resolution, with some probability of utility), one is a very long way from coming to grips with the fundamental uncertainties present in all quantum computation – and arguably in the substructure of reality itself, at all levels.

So it is a very deep problem.

No human being has the computational capacity to accurately model another human being, so we all necessarily and subconsciously construct the simple models that we do, of ourselves and of others. Such models are always and necessarily wrong, yet they are often useful and reliable in common contexts.

So coming to the question directly:

Some humans will do it very much sooner than others (like Eratosthenes thousands of years ago, knowing something so basic about our existence that many today still reject it).

How soon is that likely to happen?

There are a lot of probability variables in that equation. For me, I’d give it a 90% probability that – provided we avoid world war and global economic crisis, neither of which is certain, and both of which have reasonable probabilities – at least one person will accurately assess that the AI with which they are interacting is conscious and self-aware, in a way meaningfully correlated to their own experience of consciousness and self-awareness, within the next 25 years; 50% probability within 12 years.

And those are educated guesses by someone with over 50 years’ interest in the subject.

Have I been wrong before?

Yes, certainly – and I consciously work at being less wrong over time, across as many domains as possible.