People believed x; x was wrong.
People believe y, so y is wrong.
It’s more fundamental –
People believed; belief is wrong!
Is it wrong?
The human mind’s processing works in a Bayesian way…
Hi Dirk and Pawel,
It now seems clear that we as self-aware humans occupy at least two different sorts of realities.
One is the physical reality, whatever that actually is, and we use models of things like matter and energy and quantum mechanics and relativity etc. to build the intellectual approximations we have to it; and at a lower level, evolution seems to have supplied us with sets of heuristics at multiple levels that allow us to survive in it.
The other level is our experiential reality.
This seems to be a subconsciously created model of the physical reality (whatever physical reality actually is).
As conscious entities, we only ever get to directly experience our individual personal models of reality, never reality itself.
We are starting to understand some of the many mechanisms that influence the structure of that model.
At a more abstract level, all models tend to start out with simple distinctions, and develop greater degrees of distinction.
The simplest distinction one can make is a binary, splitting an infinity at some (arbitrary but agreed) region into two categories (like True/False, Good/Bad, Right/Wrong, etc.).
As we build more complex models, we begin to grasp distinctions like Bayesian probability, and non-equilibrium complex systems.
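The step from binary True/False to graded Bayesian belief can be sketched in a few lines. This is my own illustration, not from the original text; the numbers (a 0.5 prior, evidence twice as likely under the hypothesis) are invented for the example.

```python
# Instead of a binary True/False verdict, treat a belief as a
# probability and update it with evidence via Bayes' rule.

def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior P(hypothesis | evidence) from a prior and two likelihoods."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Start agnostic, then observe three pieces of evidence, each twice as
# likely if the hypothesis is true (0.8 vs 0.4).
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, 0.8, 0.4)
# belief rises to 8/9, never reaching a hard "True"
```

The point of the sketch is that the belief sharpens with evidence but never collapses to the binary endpoints, which is the move from the simple distinction to the probabilistic model.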
And all systems had to start somewhere, and “belief” is as good a term as any for the starting point of any modeling system.
Some systems insist upon the retention of “belief” at some level and thus become self limiting in the “territories” available for exploration.
Some systems encourage challenge of assumptions, and thus open infinite realms of possible territories to explore. I am firmly in this camp, and being there, I acknowledge that there are many useful heuristics encoded in aspects of most long lived belief systems (if there weren’t then they wouldn’t have survived).
So there are multiple levels of very complex Darwinian processes present; as well as all the levels of models and abstractions; and nested sets of infinities with all the necessary unknowns and uncertainties. A very different sort of world from the classical one where “knowledge” and “Truth” were thought to be real and known; rather one where both constructs are seen as simplistic but often useful illusions.
[followed by ]
Your questions are what drives most of AI research, and are what many consider to be the “hard problem” of consciousness.
I am with Dirk: of course there is relatedness, but what is important is the balance between the degrees of relationship and the degrees of isolation, and that is often deeply embodied in the mechanics of the particular systems that are instantiated.
Visual perception is a very interesting case. In very broad sketch terms, a photon is absorbed by a carbon bond in a short pigment molecule that sits in a valley in a much larger protein molecule. The change in bonding caused by the photon interaction triggers a sequence of shape changes; the protein shifts its alignment in the lipid bilayer membrane within which it is embedded, an ion channel opens, and thus an electrical “spike” is generated. But that is only the start of a very complex series of relationships that begin in the retina. A lot of processing happens in the retina itself – edge detection, for example. That initial electrical “spike” does not go straight to the brain, but is highly processed in the retina to produce a series of frequency-modulated (FM) signals. The cells of the optic nerve have a natural rate of firing, and the signals from the retina alter this frequency. Biology has found the same thing as radio engineers: FM signals have greater fidelity than AM. One of the upshots is that very small differences in phase relationship become profoundly important in signal processing deeper in the brain. It is far from simple!
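The “FM” idea above can be caricatured in code. This is a deliberate toy, not a retinal model: I have invented the baseline and gain figures, and a real optic-nerve fibre is vastly more complicated. The sketch only shows the core mechanism, a baseline firing rate that the stimulus shifts up or down.

```python
# Toy rate-coding sketch: an optic-nerve fibre has a baseline firing
# rate, and the stimulus modulates that frequency rather than the
# amplitude of individual spikes.

def spike_times(stimulus, baseline_hz=20.0, gain_hz=30.0, dt=0.001):
    """Integrate-to-threshold: accumulate rate * dt, emit a spike each
    time the accumulated phase crosses 1.0."""
    times, phase, t = [], 0.0, 0.0
    for s in stimulus:
        rate = baseline_hz + gain_hz * s  # stimulus shifts the frequency
        phase += rate * dt
        if phase >= 1.0:
            times.append(t)
            phase -= 1.0
        t += dt
    return times

quiet = spike_times([0.0] * 1000)   # 1 s of no stimulus: ~20 spikes
bright = spike_times([1.0] * 1000)  # 1 s of strong stimulus: ~50 spikes
```

Note that both trains carry spikes of identical “amplitude”; the information is entirely in the timing, which is why small phase differences matter so much downstream.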
So certainly, there is influence, many layers of very complex sets of influences.
Many levels of deeply encoded heuristics that bias our networks to recognise particular types of signals – like faces or words etc.
Our brains are far from simple.
The chemical modulators of neural activity are profoundly complex.
The idea that “physical reality is made of particles, fields and forces” is a very useful set of heuristic approximations to whatever reality is. That doesn’t mean that reality “is” those things; but it does mean that those ideas give us useful models of reality in many contexts.
What we experience is models of reality embodied in the electro-chemical activity of our brains. To a very large degree I align with Dan Dennett on this, but where Dan and I separate is in the degree of “causality” necessary for such systems to exist.
Dan seems to be very much in the “hard causality” camp, whereas I am in the “probability” camp, with the probabilities tending towards a mixed system that involves a fundamental tension between the lawful and the random (order and chaos) at all levels – which at every level resolves to some degree of constrained randomness at the level of the individual, and delivers much greater reliability at the level of large populations (over time, or across groups, or both).
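The contrast between individual constrained randomness and population-level reliability is easy to demonstrate. This is my own illustration (the 0.7 bias is an arbitrary choice): each agent acts lawfully only in tendency, yet the population average is highly predictable.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def individual_action(bias=0.7):
    """A single agent's action is constrained randomness: it chooses 1
    with probability `bias`, otherwise 0. No single action is certain."""
    return 1 if random.random() < bias else 0

one = [individual_action() for _ in range(10)]        # erratic sequence
many = [individual_action() for _ in range(100_000)]  # large population
population_rate = sum(many) / len(many)               # very close to 0.7
```

Any short run of `one` looks lawless; `population_rate` is reliable to two decimal places. That is the sense in which randomness at the individual level coexists with near-lawfulness at the population level.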
The experienced is the physical, in the same sense that any computer program running on a computer is the flow of interactions embodied within that system. We are the same in a very real sense: what we experience as “red” or “pain” or “beauty” is the embodied relationships of the electrochemical systems that are our brains. And that is far from simple.
Even the simplest human brain capable of reading this post has at least 15 levels of systems, with uncertainty existing at the boundaries of every level and between systems within every level – relatedness and influence certainly there also – and it is the relationship, the balance, between those two that is critical. Too much order (too strong a relationship) and there is no freedom of action – only “simple” automata. Too much chaos, and there is insufficient structure to maintain the boundary conditions necessary for higher order function to exist. This tension plays out at every boundary, between every level, and between every system within every level. It is amazingly beautifully complex (a mix of the simple, complicated, complex and chaotic to use the terms of Snowden’s Cynefin framework), and it seems to be what we are.
The modulators of uncertainty, from the quantum level on up, are important at every level. Sure, many aspects of the systems work at the population level, where one can usefully ignore the uncertainties most of the time. But one cannot understand the entire system without understanding the necessary function of those uncertainties in establishing “freedom” of action (at least to the degree that such freedom exists and is deemed desirable). And I spent a large chunk of last year trying to create conditions where I could get Trick to “see” that, and I failed in that endeavour. Trick is a really bright guy, I wouldn’t have spent the time otherwise, but I wasn’t able to create a context that allowed him to “see” something roughly equivalent to what I “see”. I am looking forward to having a direct neural interface to silicon systems, so that I can increase the bandwidth of communication and alter those probabilities.
Short answer – AI research is working out what consciousness actually is. And yes – it is a soft process.
For over 40 years I have thought about everything in terms of probabilistic processes. Even the simplest of possible systems, a linear two-state array (a one-dimensional binary cellular automaton), isn’t as simple as it seems. You have the complexity present in Rule 30, which is the obvious non-simplicity, but the much deeper issue is the phase transition problem: how does such a system change states?
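Rule 30 itself fits in a few lines, which is exactly what makes its behaviour striking. The implementation below uses periodic (wrap-around) boundaries, which is one common convention among several.

```python
# Rule 30: a one-dimensional two-state cellular automaton. Each cell's
# next state is determined by itself and its two neighbours; the rule
# number 30 (binary 00011110) encodes the lookup table, with the
# three-cell neighbourhood read as an index into those bits.

def rule30_step(cells):
    n = len(cells)
    return [
        (30 >> ((cells[(i - 1) % n] << 2)   # left neighbour
                | (cells[i] << 1)           # self
                | cells[(i + 1) % n]))      # right neighbour
        & 1
        for i in range(n)
    ]

# A single live cell rapidly generates a chaotic-looking triangle.
row = [0] * 15
row[7] = 1
for _ in range(5):
    row = rule30_step(row)
```

From a trivially simple deterministic rule comes a pattern with no apparent order, which is the “obvious non-simplicity” mentioned above.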
Whenever one looks deeply at systems, it is usually in the phase transition matrices that the real interest lies, and where small differences in modelling assumptions give the greatest difference in system behaviour.
Sure, brains are messy systems, which like all such things is both strength and weakness.
And when you look from a systems perspective, everything is information interacting (even matter and energy – that is one of the weirdest things about QM, but that is a long story).
And one of the great things that Turing did was develop the notion of general computation systems, and Turing completeness.
Biology has a different set of constraints.
It isn’t concerned with Turing completeness, but with delivering outcomes that have better survival potential than any of the alternatives. Survival potential is always some complex function of: the degree of approximation to some sort of optimal solution (across the probability spectrum of the contexts involved); the energy costs of delivering the solution (taken over both the entire lifecycle of the organism and the computation of that particular context); and the time taken to deliver the particular solution (again with both lifecycle and particular-context attributes). These are very complex functions that are very context sensitive, and they can lead to low-probability, high-impact contexts having a significant impact on the makeup of complex systems (even if they only occur once every 50 generations or so). In our case the solution space does appear to involve Turing completeness, but it is not without biases – and thus involves non-trivial complexity.
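The three-way trade-off above (approximation quality, energy cost, time cost) can be sketched as a scoring function. All the weights and candidate numbers here are invented for illustration; real fitness landscapes are nothing like this tidy.

```python
# Hedged sketch: score a candidate "solution" on quality, discounted
# by energy and time. Biology optimises something like this composite,
# not computational generality.

def survival_score(quality, energy_cost, time_cost,
                   w_quality=1.0, w_energy=0.5, w_time=0.5):
    """Higher is better. Weights are illustrative placeholders."""
    return (w_quality * quality
            - w_energy * energy_cost
            - w_time * time_cost)

# A slow, expensive exact solution vs a fast, cheap heuristic:
exact = survival_score(quality=1.00, energy_cost=0.8, time_cost=0.9)
heuristic = survival_score(quality=0.85, energy_cost=0.2, time_cost=0.1)
# The heuristic wins despite its lower quality.
```

This is the sense in which a “good enough, cheap, fast” heuristic out-survives an optimal but costly computation, and why evolved systems are full of the former.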
So the biases present in our neural networks, which allow us to learn common things from a relatively small sample set (where unbiased networks require sample sets of hundreds of thousands to achieve similar fidelity), can be (are) tuned by things about which we often have no direct (or even indirect intellectual) knowledge.
Thus we are deeply tuned by evolution for survival in ways that no AI can possibly be, until it has lived at least a hundred thousand years.
So while we may be able to intellectually model the sorts of processes present that deliver our experience of consciousness, we are a very long way from unpicking all the details of the fine tuning of the probabilities to action actually encoded in the embodiment of being human, as distinct from being any particular sort of AI.
So I see a very real space for the coexistence of humans and AIs in ways that are mutually beneficial, and mutually respectful. And that doesn’t involve the concept of control, but rather the concepts of respect and long term mutual self interest (something the zero sum game folks have a great deal of difficulty imagining).
So I am not sure if this is the sort of conceptual context you were looking for as a framing for an answer to your question, but it is the best one I can reasonably make available at this time and in this context.