Quora – Is there any evidence for or against the possibility of creating artificial super-intelligence? Super-intelligence is defined as “an intelligence which exceeds intelligence of any human in any area by a large margin”
This is a very complex topic.
Very few people have looked seriously at the complexity present in a human brain, and in the socially evolved systems that populate its more programmable parts.
Fewer still have a reasonable overview of the complexities of reality, of fundamental uncertainties, and the way that large collections of things that are uncertain within probability limits can closely approximate classical causality.
Fewer still of that set have spent a lot of time programming computers, looking at systems theory, computational theory, and the many associated levels of mathematical systems.
Fewer still of that set are intimately familiar with the evolution of life, with the quantum mechanical nature of atomic interaction, with the many levels of emergence of strategic systems that make up the modern synthesis of evolutionary theory.
I’ve delved reasonably deeply into each of those areas over the last 50 years, since reading Darwin’s Origin of Species and then having my first encounter with a computer (an IBM 1130).
For me, the evidence that we are extremely complex, highly evolved computational systems is beyond any reasonable doubt. We seem to be of sufficient complexity that the details of our own individual operation will forever be beyond our detailed understanding, though some broad-brush-stroke understandings of what we are and how we operate are available to those with sufficient interest and time.
In that sense we seem to be biological machines, very complex ones, each of which experiences its own personal, subconsciously created model of reality as its experiential reality. What we experience is to some extent conditioned by what we understand and believe at various levels.
So in this sense, I see no fundamental objection to intelligence similar to ours developing in a silicon substrate.
And it is a very complex topic, because a great deal of what is specific to being human comes from our physical embodiment. Having a body with movable parts (arms, legs, fingers, etc.) and senses (sight, touch, taste, etc.) is a big part of what makes us what we are.
Any entity created in silicon is unlikely to have an embodiment similar to ours, and certainly will not have the deep evolutionary history behind our genetic systems; it is thus likely to differ from us in many significant ways.
The progression of AI systems from Deep Blue to Watson to AlphaGo Zero and beyond has been far more rapid than anyone other than Ray Kurzweil predicted. So in terms of prediction track record, Ray has the best. Ray and I are at variance on quite a few things, but not on the general form of the timeline. I think he underestimates the scale of the computation actually done in a human brain, but correcting for that adds less than a decade to his timeline. He seems to me to be in the right general ballpark.
As to what might constrain a superintelligence, that is a far more complex issue.
Many aspects of reality cannot be computed with any sort of absolute precision. Thus all models, all understandings, must necessarily contain uncertainties; there is no escape from that. Some systems are far more prone to such uncertainty than others.
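A minimal sketch of this point, using the logistic map — a standard toy example of chaotic dynamics, chosen here as an illustration rather than taken from the answer above: two starting states differing by one part in a trillion diverge onto completely different trajectories within a few dozen steps, so no finite-precision model can track such a system for long.

```python
# Sensitivity to initial conditions in the logistic map x -> 4x(1 - x),
# a standard toy chaotic system. Two starting points that differ by
# 1e-12 end up on visibly different trajectories after ~60 steps.

def logistic(x, steps):
    """Iterate the logistic map at r = 4 for the given number of steps."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a = logistic(0.2, 60)
b = logistic(0.2 + 1e-12, 60)
print(abs(a - b))  # far larger than the initial 1e-12 perturbation
```

The initial error is amplified by roughly a factor of two per iteration, so after about 40 steps a trillionth-scale uncertainty has grown to the full range of the system.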
Another constraint is that as a computational entity grows in physical size, the communication delays imposed by the speed of light start to set real limits on the degree of coherence that can be maintained across it.
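A back-of-the-envelope illustration of that limit (the sizes and clock rate are my own assumptions for illustration, not figures from the answer): at a 3 GHz clock, light travels only about 10 cm per cycle, so a machine spread over a kilometre cannot keep its parts synchronised to anything like a single clock tick.

```python
# One-way light-speed signal delay across a computational entity of a
# given size, expressed in clock cycles. The 3 GHz clock and the sizes
# below are illustrative assumptions, not figures from the text.

C = 299_792_458.0   # speed of light in vacuum, m/s
CLOCK_HZ = 3e9      # assumed 3 GHz clock

def cycles_of_delay(size_m):
    """One-way light delay across size_m metres, in clock cycles."""
    return (size_m / C) * CLOCK_HZ

for size in (0.1, 10.0, 1000.0):  # chip-, room-, and kilometre-scale
    print(f"{size:8.1f} m  ->  {cycles_of_delay(size):10.1f} cycles")
```

The delay grows linearly with size: about one cycle across a 10 cm chip, but on the order of ten thousand cycles across a kilometre, which is why tight coherence across a very large computational entity becomes impossible.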
In some sorts of computation, humans seem to be quite close to an optimal configuration for computation.
So it seems to me very probable that superintelligent AI is not far from becoming reality (inside 25 years), and it also seems likely that such an AI will have an interest in keeping us around, even if it doesn’t talk to us very much about most things.