[ 11/1/21 ]
What do you think you mean by the term intelligence?
We now have vast evidence sets suggesting that human experience can only ever be of a subconsciously assembled model of reality, never reality itself. The notion of a fact is already strongly suspect – it can only ever really be something more like a “contextually reliable approximation” to whatever “objective reality” might actually be.
When one looks deeply at the science of our brains, and at the science and logic of evolution, then the picture that emerges is deeply complex.
We seem to be deeply tuned by evolution to create “contextually survivable approximations using least possible energy and time”.
When you look at the patterns present in human brains, over 90% of the activity is internally generated, with no direct relationship to any external stimulus. So the idea that our brains are created by experience has to be replaced by one in which our experience of reality is, for the most part, correlated to already existing internal systems and patterns.
The assumptions of Turing and Church on the sets of computable functions do not seem to be closely related to the context of our existence or the evolutionary pressures that seem to have produced us.
One needs to spend some time in the theory of search with fully loaded processors to appreciate the power and necessity of random search (across all possible dimensions in all possible “spaces”).
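One way to see the point about random search is a toy sketch (my own illustration, not from the text): with a fixed evaluation budget in a high-dimensional space where only a few dimensions actually matter, uniform random sampling places the full budget's worth of distinct values along every axis, whereas a grid would place only a handful per axis. The objective function and dimensions below are assumptions chosen purely for illustration.

```python
import random

def random_search(objective, dims, budget, seed=0):
    """Sample `budget` points uniformly from [0, 1]^dims; return the best found."""
    rng = random.Random(seed)
    best_point, best_value = None, float("inf")
    for _ in range(budget):
        point = [rng.random() for _ in range(dims)]
        value = objective(point)
        if value < best_value:
            best_point, best_value = point, value
    return best_point, best_value

# Hypothetical objective: only the first coordinate matters, so the
# landscape has low "effective" dimensionality despite living in 20-D.
def objective(point):
    return (point[0] - 0.7) ** 2

point, value = random_search(objective, dims=20, budget=100)
```

With 100 random samples, the first coordinate gets 100 distinct trial values, so the best result lands close to the optimum at 0.7 even though the search never "knows" which dimension matters.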
When one has spent enough time searching dimensions of strategic spaces for which there are no agreed referents, then communication of anything one finds “interesting” can be extremely difficult.
All human beings seem to have roughly the same computational capacity.
What that capacity gets applied to seems able to be strongly influenced by sets of genetic, environmental and chance factors.
Most people seem to have strong valences for social agreement, and some of us do not.
The further and longer one strays from social agreement, the more difficult communication becomes. Such ventures tend to be quite lonely (of logical necessity).
The space of possible risk outside of socially agreed “spaces” seems to be sufficiently large and dangerous that societies require some significant fraction of outliers doing “random search” in order to have a reasonable probability of long term survival.
In terms of long term survival, if there is one simple message that is clear from all of my explorations of intelligence and risk, it is that long term survival of complex systems capable of generating or comprehending sets of symbols such as this is dependent upon sustaining levels of cooperation. Competition tends to drive systems to some set of minima on the available “complexity landscape”.
So to me, the common definitions of intelligence that rely on notions like facts, reason and logic seem overly simplistic. The details of most of reality are complex and contain multiple sets of fundamental uncertainties, which means that all any of us ever have is some sort of “contextually useful approximation” – and often contexts change without us noticing.
The modern tendency to oversimplify complexity is dangerous at multiple levels, and the whole “Trump” phenomenon seems to be one of the lesser expressions of such dangers.