Foundations of Logic – “In concise words, tell us how the idea that we cannot know who we are and be who we are at the same time can be overcome.”
What does the best evidence we have seem to indicate is most likely?
What are we?
We seem to be a form of evolved cellular life with a very complex brain.
What sort of numbers are involved in each and every one of us?
Think about all the people on the planet – something over 7 billion. How many is that?
If you were to sit and watch people going past for 8 hours a day, 7 days a week, at 10 people per second, it would take you about 70 years to see them all.
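That arithmetic can be checked directly; a minimal sketch, using the figures quoted above (7 billion people, 10 per second, 8-hour days):

```python
# Back-of-envelope check of the people-counting claim above.
population = 7_000_000_000
rate_per_second = 10
seconds_per_day = 8 * 60 * 60            # an 8-hour viewing day

people_per_day = rate_per_second * seconds_per_day   # 288,000 per day
days_needed = population / people_per_day            # ~24,300 days
years_needed = days_needed / 365.25                  # ~66.5 years

print(f"{years_needed:.1f} years")       # roughly 70 years, as stated
```

The exact result is closer to 66 or 67 years, so "about 70 years" is a fair rounding.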
We have about 10,000 times that many cells in our bodies (about 10% of them associated with neural networks and information processing at some level).
Inside each cell are about 5 times as many molecules as we have cells in our bodies.
Each cell contains between 30,000 and 50,000 different types of molecules at any instant.
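Combining the approximate figures above gives a sense of the scale involved (all numbers are the text's own order-of-magnitude estimates, not measurements):

```python
# Combining the approximate figures quoted above.
people = 7e9                              # world population, as above
cells_per_body = 10_000 * people          # ~7e13 cells per person
molecules_per_cell = 5 * cells_per_body   # ~3.5e14 molecules per cell

print(f"cells per body: {cells_per_body:.0e}")
print(f"molecules per cell: {molecules_per_cell:.0e}")
```

So each of us is on the order of tens of trillions of cells, each holding hundreds of trillions of molecules drawn from tens of thousands of distinct types.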
We understand something about the general classes of interactions present, but not much at all that gets very close to the definition of “Truth”; the vast majority of it is much more like a “contextually useful approximation” (a heuristic).
As conscious entities we perceive at limited rates: a highly trained individual can perceive changes at about 180 cycles per second, but most people cannot manage much past 12.
Anything happening in less time appears simultaneous in our perceptual experience (25 frames per second appears to most people as continuous motion).
So we can have some good heuristics for some levels of our activity, and the vast bulk of us are far more ignorant than we are knowledgeable about who and what we are.
Some of the split brain studies are really worth looking deeply at to see the depths to which we deceive ourselves, and justify things to ourselves.
I have been fascinated by the logic and the reality of evolution and life and the many levels and types of systems instantiated in different life forms for over 50 years. The more I discover, the more I realise that all of what I once took for knowledge is much more like a “useful approximation” to something, rather than any sort of “Truth across all space and time”.
And some of our approximations can be quite useful and accurate – some accurate to 12 decimal places or more – so quite good enough in most contexts.
And it is worth considering that there are more Planck time units in a single “tick” of a cesium atom than there have been ticks of a cesium atom in the age of the universe to date. So sometimes uncertainties are at a very low level, and can be ignored for practical purposes, but that doesn’t mean that they aren’t there.
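The Planck-time claim can be checked against standard approximate constants (the cesium-133 hyperfine frequency that defines the SI second, the Planck time, and the age of the universe; the latter two are approximations):

```python
# Checking the claim with standard approximate physical constants.
CESIUM_HZ = 9_192_631_770        # Hz, hyperfine frequency defining the SI second
PLANCK_TIME = 5.39e-44           # s, approximate
AGE_OF_UNIVERSE = 13.8e9 * 365.25 * 24 * 3600   # s, approximate

tick = 1 / CESIUM_HZ                        # one cesium "tick", ~1.1e-10 s
planck_units_per_tick = tick / PLANCK_TIME  # ~2e33
ticks_so_far = AGE_OF_UNIVERSE * CESIUM_HZ  # ~4e27

print(planck_units_per_tick > ticks_so_far)  # True: the claim holds
```

The margin is large: around 10^33 Planck times per tick against around 10^27 ticks in the age of the universe.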
And other times the uncertainties are right in our faces.
There are entire classes of problems for which the only reasonable solution is an “oracle” (a “black box” producing a random output within a usually survivable class of actions). Throughout most of human history (and even today) most people consider such random outputs as “Truths”, because such things are easier to fit into their conceptual models.
Understanding something of the evolutionary pressures that have produced our tendency to over-simplify complexity at all levels is step one on an infinite journey towards mitigating the worst of the risks such behaviour causes.
And for any human being, that systemic complexity involves at least 15 levels of complex adaptive cooperative systems (each level with vast populations of instances of complexity and uncertainty). And certainly some statistics give us useful approximations in some contexts, and not so useful in others.
[followed by Pawel asked in part Could we discuss the alternative method of understanding the complexity? Principle of evolution and complexity growth?]
Agree Pawel – at least about evolution.
Evolution seems most probably at root of it all.
So few people have much grasp of the process of evolution.
It starts so deceptively simple:
Something that replicates with a degree of fidelity that is less than unity (and the exact degree of variation in variants is important, and very context sensitive).
Different contexts in the environment, where some replicators will survive better than others.
In early stages, evolution is mostly about survival in random and competitive environments.
Cooperation can emerge only when the threat from outside the population of variants is greater than the threat within, and some strategy exists that can mitigate that external risk through cooperation; cooperation can then evolve and stabilize if sufficient strategies can be incorporated to detect and remove cheating strategies.
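The first two ingredients above – replication with fidelity below unity, in a context where some variants survive better than others – can be sketched as a toy simulation. Every parameter here (genome length, fidelity, population size, fitness function) is an illustrative assumption, and cooperation and cheater-detection are not modeled:

```python
import random

random.seed(1)
FIDELITY = 0.99            # chance each site is copied exactly (less than unity)
GENOME_LEN = 20
TARGET = [1] * GENOME_LEN  # the variant best matched to this context

def fitness(genome):
    # survival chance grows with match to the environment
    return sum(g == t for g, t in zip(genome, TARGET)) / GENOME_LEN

def replicate(genome):
    # copy with per-site fidelity less than unity
    return [g if random.random() < FIDELITY else 1 - g for g in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(100)]

for generation in range(200):
    # variants replicate in proportion to fitness; population size capped
    weights = [fitness(g) for g in population]
    parents = random.choices(population, weights=weights, k=100)
    population = [replicate(p) for p in parents]

mean_fit = sum(fitness(g) for g in population) / len(population)
print(round(mean_fit, 2))  # well above the random baseline of 0.5
```

Even this crude loop shows selection pulling a random population towards the variants that suit the context, while imperfect copying keeps generating the variation that selection acts on.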
It seems that RNAs require a level of cooperation to make cells, and another level to make DNA.
Hypercycles are a mathematically interesting possibility, but I am not convinced that they were necessarily (or even probably) the path that early life took.
[followed by Dirk offered Manfred Eigen – What does a hypercycle do? (55/113)]
[followed by Pawel offered What is life? – Manfred Eigen]
I agree with Eigen.
And we never deal with reality as it is.
All we, as conscious entities, can do is construct our conscious models of the world based upon the subconsciously created model that is our experiential reality. So – a model of a model.
Mathematics is the best modeling tool we have.
Does that mean that mathematics is reality?
No – doesn’t mean that.
And maths is what we have to give us our best models of reality.
And being conscious of that distinction is critical.
Making mathematical models is essential to understanding.
Confusing them with reality is a mistake.
So I have a minimal model that defines life – replication and metabolism. Thus a virus alone is not life; a virus with a cell is life. As Eigen says, we have to look at populations – and sometimes the definition of population is fuzzy, as anything that provides a sufficient barrier or distinction can function as a boundary for an evolutionary unit over some function of space and time. So individual replicators can be part of multiple different systems simultaneously. Not at all simple!!!
I should have been more specific.
I should have written:
It seems very probable, based upon the evidence sets I have experienced, and the sets of possible interpretive schema I have explored and evaluated on a probabilistic basis, that “we never deal with reality as it is”.
That would have explicitly created the context in which all assumptions are seen as probabilistic localisations across vast landscapes of possible schema.
So the schema that seem most probable to me include (but are not limited to) the following set of probabilistic properties:
what we perceive seems to be a subconsciously generated predictive model of reality – running some tens or hundreds of milliseconds ahead most of the time;
whatever reality actually is, it seems to be fundamentally uncertain and fundamentally unknowable in a large set of different ways; and yet in many common contexts many of the very large collections of things we interact with do behave in ways that are very predictable (as collections over time) and very closely approximate causal, rule-based behaviour;
almost all information we have about reality seems to be mediated by photon exchange, and thus we are always dealing with phase relationships;
our perceptual realities are simpler than actual reality by factors of at least 10^20 in most instances, and often much more than that (so very poor sketches, and the best that we can do in the context).
That is a very different context from classical “knowledge”.