Foundations of Logic [Continued]

Hi Andrei Mirovan

Consider this view for a moment.
Prior to the formal discovery of the simplest instance of possible logics (what we may call classical logic), the terms “know” and “knowledge” referred to a practical capacity to identify relationships that appeared to have some level of regularity or consistency in experience.

Then along came a group of philosophers who decided to define this everyday term to mean something within the world of logic they had discovered, based on the notion (now demonstrably flawed beyond any trace of reasonable doubt) that all of reality and all understanding must be based on this “classical logic”.

Now there remain sets of people who from time to time make claims about the supremacy of the structures of classical logic, both in how humans make sense of the world and in the behaviour of this matrix we find ourselves embedded in that we call reality. I make the strong claim that, while classical logic is certainly applicable to many aspects of reality, and it does give us many useful tools, the claim that it is at the root of both reality and our understanding of reality has been falsified in many different realms of experiment, repeated many times.

It seems clear that all of what passes for knowledge is based in probabilities at many different levels.
Any attempt by philosophers to claim that the true meaning of the terms “know” and “knowledge” belongs to the realm of classical logic is a hubristic claim based in ignorance.
The terms belong to common usage, and as such must include the probabilistic and utilitarian aspects.

One can of course redefine any term to mean anything within any narrow domains (something lawyers and judges specialise in, if someone pays them enough money); but doing so does not alter the common and fundamental nature of these terms.

So while I think we agree that knowledge of reality in the strict sense of knowledge defined in classical logic is not available to humanity with 100% confidence; I think we may be poles apart in terms of what that means for humanity generally.

To me, being really clear about that simply removes much of the false certainty hubristically claimed by many throughout history, whose claims to truth can be much more accurately described as claims to power and prestige.

Truth and knowledge belong in the common domain, and they need to be there in the softer probabilistic sense.
Claims for the harder classical sense need to be dismissed as clearly refuted (in terms of their absolute applicability to reality in all cases, as distinct from the sense of their being generally useful in most cases).

[followed by]

Hi Andrei,

1 – My understanding of the probable evolution of the notion of truth in language is conjectural, based on my 50 years of interest in all aspects of life and evolution and behaviour and psychology and linguistic evolution and AI and the evolution of consciousness. I am very confident of the general shape of the schema, if not so confident of any of the specifics.

2 – Yes, certainly, the relaxed version is open to all manner of issues.
We each need to be conscious of all of the likely issues, and mitigate them to levels that appear appropriate (Yudkowsky’s Rationality: From AI to Zombies is a reasonable catalogue of many of those errors).
The great power of the approach is its contextual utility – things that work in practice in times short enough to be useful in survival contexts.
The great danger is transposition to contexts that appear similar but differ in critical ways that invalidate the heuristic in use.
It’s called life.

3 – Truth claims appear to me to be applicable to any level of representation of any thing or relationship (at any level of relationship or abstraction – conscious or subconscious). Thus one can make truth claims about proposed facts or relationships.

One of the hardest ideas for many people to get is that our experiential reality appears to be a subconsciously created model of the reality beyond our senses, and is subject to many different levels of heuristic embodied in our sensory and neural apparatus.
Thus any recalled experience already contains at least two levels of implicit truth claim – one in respect of the experience and the memory of that experience, and one in respect of the correspondence between reality and the subconscious model of reality that we experienced (and each of those may have subcomponents).

These two claims are quite independent of any higher order claims we might make about higher conscious level abstractions we might develop.

So it is very easy for a seemingly simple statement to already embody 3 or more levels of truth claim (all probability based).

The way deep neural networks learn through reinforcement learning is quite remarkable – and beyond this post or forum.

[followed by]

It seems that reality is whatever it is – that I accept.
It also seems that we’re part of that in an existential sense.

And within that, in the experiential sense, we get to experience our subconsciously created models, and never reality itself.

So yes, certainly, it does appear to be the case that no map is ever the territory, and that all truth claims of reality are necessarily probabilistic in this sense, however tight might be the correlation in reality.

As one of my hacker buddies from the 80s used to say – it’s hard to beat the refresh rate on reality.

[followed by]

Hi John,

I say the minimum level of abstraction for words is two.
The subconscious abstraction of model from reality (often a very complex multi-levelled set of abstractions in itself – but for simplicity’s sake let’s just call it 1, because in some instances it might be).
The further layer of association of symbol to model (the level of language – quite distinct from the level of the model that is our experiential reality – and again this would often involve multiple levels of abstraction, but for simplicity’s sake – let’s say 1).

And I guess it does very much depend on what one defines as abstraction (as a programmer I count any instance of a hierarchy of classes).

[followed by on another and related thread about negative certitudes]

From my perspective – it is easy to dive down any rabbit hole of uncertainty essentially forever, expanding realms. We each of us seem to be sufficiently complex that we can explore aspects of our selves indefinitely.

Evolution seems to have given us a lot of heuristics, that allow us to build our simplistic models of reality.

Logic seems to work well with those simple models.

Most people haven’t really grasped the degree to which uncertainty and simplification invade our experiential reality, and the conscious conceptual models we make of that.

That we understand as much as we do is little short of miraculous.

Logic is a tool, a useful tool.

And evolution seems to work with actions in reality.
Our conceptual understanding is important in as much as it impacts how we act in reality.

It is actions that matter, actions within timeframes that work in reality.
We cannot act within such timeframes and simultaneously deal with all of the complexities that are present.

[followed by]

Hi Andrei,

As I have stated many times, I have very high degrees of confidence in some things, and nothing beyond the possibility of doubt, and many things I rarely operationally doubt.

And I find the abstract realms of logic, systems and mathematics interesting and useful, but not necessarily accurate models of reality, more like useful approximations in some sets of contexts.

And they are the best modelling tools we have, and we’d be foolish not to use them.
And I simply urge caution at the boundaries – don’t be too confident that any “simple” or “elegant” model necessarily captures all the essential elements of complexity that are actually present.

[followed by]

If you want me to say that any model of reality accurately – 100% captures all of the complexity present – I think that is highly improbable, and will likely always remain highly improbable.

So I really don’t know how to say it any more clearly than that.

In a very real sense, the very notion of “objective truth” – taken to the nth degree, has a “Santa Claus” flavour to it – an entirely mythical simplification of something.

Useful heuristics for use in dealing with reality – those I have large collections of.

[followed by]

Hi Andrei,

No, actually it is very much a matter of science and computation and evolution of awareness. And very fundamental to each of them.

If the uncertainty principle tells us that there are actually limits of accuracy beyond which we may not go, then every model we make must contain at least that degree of uncertainty, and in practice a whole lot more due to many levels of measurement errors in all parameters measured.

A knowledge of systems space and algorithm space and computational systems more generally that are either unpredictable, or not predictable by any method faster than letting them do what they do; adds another layer of uncertainty. Things like rule 30, and fractals, and chaos, and irrational numbers.
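As a concrete illustration of that kind of unpredictability, here is a minimal sketch of Wolfram’s Rule 30 in Python (the rendering details are mine; the update rule itself is the standard one): for systems like this there appears to be no shortcut faster than simply letting them run.

```python
# A minimal sketch of Wolfram's Rule 30 cellular automaton, illustrating
# "not predictable by any method faster than letting it do what it does".
# Each cell's next state depends only on itself and its two neighbours.

def rule30_step(cells):
    """Apply one step of Rule 30 to a tuple of 0/1 cells (edges padded with 0)."""
    padded = (0,) + cells + (0,)
    out = []
    for i in range(1, len(padded) - 1):
        left, centre, right = padded[i - 1], padded[i], padded[i + 1]
        # Rule 30: new cell = left XOR (centre OR right)
        out.append(left ^ (centre | right))
    return tuple(out)

# Start from a single live cell and run a few generations.
row = (0, 0, 0, 1, 0, 0, 0)
for _ in range(4):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

Despite the rule being three lines of logic, the centre column of Rule 30 passes standard randomness tests – which is the sense in which such systems add a layer of in-principle uncertainty.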

So there seems to be a demand from reality for many levels of fundamental uncertainty and un-knowability.

Thus the very idea of “Objective knowledge” in the hard formulation seems to have been falsified beyond any shadow of reasonable doubt.

Thus we are left only with the softer form, the useful approximations that are useful to us within certain contexts, or at certain scales, like “flat earth” working for a carpenter, or “round earth” working for a sailor, or “relativistic space-time” being good enough for a GPS system designer. Each approximation useful in context.

And for me, that is all reality seems to allow, ever.
Hard “objective knowledge” seems to be forever forbidden – like reaching the end of a rainbow, thus giving the very concept a “Santa Claus” like quality.

It really does seem to be something quite fundamental, at any level one approaches it, even Goedel found something analogous in the most abstract of realms.

[followed by]

Hi Andrei,

I’m all about soft objective knowledge – firmly based in probabilities – with fundamental uncertainties – that is me (at least to a useful approximation 😉 ).

[followed by]

Hi Andrei,

Certainly – agree with Popper about the discovery aspect of knowledge, which aligns with the Buddhist idea of a path worth travelling growing longer by at least twice the distance a master travels it.

Wolfram’s explorations of theorem space seem to reveal similar trends.

I suspect that is a big part of why it took evolution some 4 billion years to produce us.

We are not very probable – yet here we are.

[followed by]

Hi Andrei and Pranav,

I’m not sure if this will achieve communication, nothing else has.

In terms of reality, the very notion of truth does not seem to apply.
It seems that at the finest scale, uncertainty is required.
So the idea that there must be “TRUTH” in order to have a probability doesn’t actually hold.
Probabilities may themselves be derived from probability distributions.

So one can build a probabilistic understanding about a reality that involves probabilities.
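One way to make that concrete is the standard Bayesian coin-flip model, sketched here in Python (the specific numbers are illustrative only): the probability of an event is itself held as a distribution, never a point value.

```python
# A sketch of "probabilities about probabilities": instead of a fixed
# probability p for an event, we hold a Beta(a, b) distribution over p
# and update it as evidence arrives (the standard Beta-Binomial model).

def update(a, b, heads, tails):
    """Conjugate update: observed evidence reshapes the distribution over p."""
    return a + heads, b + tails

def mean(a, b):
    """Expected value of p under Beta(a, b)."""
    return a / (a + b)

# Start maximally uncertain: Beta(1, 1) is uniform over [0, 1].
a, b = 1, 1
print(mean(a, b))   # 0.5, but with maximal uncertainty around it

# Observe 7 heads and 3 tails: the distribution over p tightens and shifts.
a, b = update(a, b, 7, 3)
print(mean(a, b))   # ~0.667: growing confidence, never certainty
```

However much evidence arrives, the result is always a distribution over p – confidence narrows, but a point-valued “TRUTH” is never reached.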

[followed by]

Hi Pranav,

I entirely agree with you that the concept of truth is parsimonious, a “least cost” option in evolutionary circumstances, and therefore likely to be strongly selected for, and therefore common, but that fact doesn’t make it 100% accurate. You have just beautifully illustrated my argument in a sense.

My statement that //In terms of reality the very notion of truth does not seem to apply.// does not say anything about reality with 100% confidence. It does say something about the nature of our understanding of reality, which is a different domain in a sense.

One can of course construct meta statements about the truth of probability statements, but none of them will tell you anything specific about reality with 100% accuracy, which was the original definition of truth we started out talking about. So in making such a claim you have changed domains – and are no longer talking about reality but about conceptual understandings of reality.

Yes – certainly, the idea of truth is much easier than probability.
It is much easier to believe that one can know reality than to do the years of study required to build an understanding of quantum uncertainty. In the everyday realm – of things of a size the unaided human eye can see, and that human consciousness can recognise at native speeds (> 10^-2 s) – such certitude is a useful approximation to something; it works in practice; but does that in any way make it *TRUE*?

[followed by]

Hi Andrei,

I go one step further, in stating that on the basis of the conclusion that Quantum uncertainty seems probable, and on the basis that reality seems to use irrational numbers, then it seems highly improbable that we can know anything about reality with *absolute* certainty.

To very useful approximations – yep – certainly that, but absolute – no – not that, that seems very improbable.

[followed by]

Hi Andrei,

I think you misunderstand what Popper was saying, or Popper was in error.
If you can give me a specific reference in Popper’s writing on this topic, I may be able to give a more confident answer.

In the general terms outlined, I can agree with what would seem to be a reasonable claim for Popper to make: that the ordinary mode of human “soft” assertions based on simple assumption sets tends to deliver less probable knowledge than a set of probabilities derived from a more rigorous and recursive exploration into the many levels of errors and uncertainties that seem actually to be present – at the level of reality, at the level of our sensing of reality, at the level of the model our subconscious creates from that, at the level of our conscious experience of that model, and at the more abstract levels of our interpretation of that conscious-level experience.
Without extensive investigations into all levels of that structure, into the systems and uncertainties present at each level, all sorts of biases show up.
In that sense, I can agree with Popper, if that is actually what Popper meant (which I am uncertain about without explicit reference).

How one gets to a provisional understanding of confidence is important.
Many of the initial levels seem to be instantiated by systemic constructs delivered by genetic evolution.
Many more seem to be instantiated by constructs delivered by cultural (mimetic) evolution, and we are each (at least theoretically) capable of recursively instantiating levels beyond those (though few do).

How one approaches instantiating levels of systems within oneself, the level of distinction one builds of systems both within and without, how one critiques and evaluates, the levels of systems one adds to the mix and the levels of confidence one instantiates, can form profoundly complex systems, with multidimensional probability landscapes.

So yes – there is a very real sense in which each of us, as self aware individuals, at least to the degree of self awareness that we have, must take a level of ownership and responsibility for the levels of probability present. To do anything less is dangerous at many levels, including but not limited to the sort of levels that Aleister Crowley played with.
It is a seriously – non-trivial problem space; many agents, many games, many strategies and levels of strategies, many levels of game space.

The probabilities are those available to me, from my investigations, my experiences, my intellectual efforts (which in many cases are based upon the efforts and experiences of others, across deep time and deep complexity).

So yes – there is an inescapable aspect of “seem” or “uncertainty”, instantiated at many levels; and if free will has any meaning it would seem to involve some sort of ownership and choice in the midst of that uncertainty – something essentially personal in a deep sense.

[followed by]

Hi Pranav,
I know what you wrote made sense to you, you would not have written it otherwise, and it appears so far from my understanding that I am not sure how to bridge that gap in any reasonable time.

I am clear that science has demonstrated, beyond any shadow of reasonable doubt, that “the search for an absolute or for perfection” cannot succeed in any absolute sense. We seem to exist in a reality that has both fundamental uncertainty (in terms of quantum uncertainty) and to contain classes of systems that are not even theoretically predictable.

In terms of point 2, it is several decades since I let emotional systems entirely determine anything, though I do take notice of their input as an important aspect of my assessment of anything.

Point 3. In my understanding I did not say that reality is unknowable. What I tried to quite explicitly say is that there are limits to the degree to which we can define anything in space and time, and those limits impose necessary boundaries in the correspondence between reality and understanding. So we can be very confident within certain limits, but beyond those limits the confidence degrades to zero. So some things are available with very high degrees of confidence, and others not. Understanding those limits and why they are there is a form of knowledge, a form of useful patterns, and it is a very different form from the classical notion of any sort of absolute correspondence. In that sense of an absolute correspondence, it seems to be unavailable, but can be very closely approximated in some contexts (not so closely in others).

Point 4 – Language does not require formal truth, all that is required is sufficiently accurate correspondence – something “near enough to be useful”.
I understand that for many people, the concept of “truth” defines the experiential reality. I am not in that set of people.
I have been operating on the basis of probabilities for over 50 years.

Point 5 – I understand that many people do in fact live in experiential worlds that are truth based. I am not one of them. Altering the definition of truth as you suggest would destroy the entire schema I was attempting to construct. And in practice what you suggest has a certain utility and does seem to be what most people do.

[followed by]

Hi Pranav Parijat,

All I have is probabilities, some of them are close enough to one to be one in practice in most contexts, but nothing is beyond question if the context seems to require it.

For me, that is kind of definitional.
If anyone accepted a *TRUTH* 100%, then it becomes by definition something that cannot be questioned (the 100% takes it out of the questionable category).
Thus I find the notion of 100% *TRUTH* dangerous, as it closes off possibilities.

And certainly, there are many contexts where one needs to close down low probability “possibility spaces” because of urgency and necessity; and I find that is best done on a probability basis rather than by using the notion of absolute truth.

Thus I can see the evolutionary utility of “absolute truth” as a notion in cultures, because of the simplicity it delivers in a profoundly complex and dangerous world. But when one has sufficient security and tools to start exploring beyond culture, then one needs to move from certainty to uncertainty, to levels of confidence. This is what science actually requires scientists to do, even if many don’t understand what they are doing, and are simply going through the motions of doing the probability calculations as what needs to be done to get a paper published (I know tenured professors who are like that).

I often find myself in a profoundly uncertain space, as I hear words from others and rather than localising to a single interpretation, my mind delivers clouds of possible interpretations with approximately equal probabilities; whereas I can see from the speaker’s body language that they have only one meaning available to them – I just have no real idea what it might be; or perhaps more correctly, I can see what that is but it is falsified beyond any reasonable doubt in my world, and I have no useful translation matrix to deliver an interpretation close enough to something reasonable that I can work with it. Often there simply is no usefully short way to break through the *TRUTHS* that are present to expose the possibilities beyond them, because breaking such *TRUTHS* is always a profoundly emotionally unsettling experience, as it requires reconstruction of entire “landscapes”, and that takes time.

For some the experience can go over the boundary of acceptable risk and place them in a state of profound anxiety from which there is no simple recovery path. Thus I try and avoid pushing anyone over that boundary, and just leave hints that people can follow as and when they feel comfortable. I suffer from vertigo, and know how useless it is, when climbing mountains, to have someone who has never had it shouting at me to hurry up. Doesn’t work. I’m best just left alone to sort it out in my own time, as I manage to calm down the overexcited regions of my brain and restore some sort of equilibrium to the system as a whole, and I find that is best done with all motor function suspended.

[followed by]

Hi Pranav Parijat,

When you are willing, just try out, without necessarily believing it, that what I wrote might work for me, and consider what sort of world that might be, one without any *Truth*, only useful approximations, and contextually useful tools, and things of that kind. Nothing in relation to the world pointed to by my experiential reality that is solid. All of it subject to question, to uncertainty, to boundaries of probability within sets of contexts (where even the identification of likely context is a probabilistic determination).

Perhaps the classical notion of *Truth* has such a hold on your mind that it may not be challenged, even for an instant.

For me the idea of *TRUTH* in respect of reality – as in 100% correlation between my understanding of any aspect of reality and reality itself, has the same sort of probability of existence as Santa Claus. I have had that understanding for about 50 years. I have explored a lot of territories (physical, logical, strategic, intellectual) using that paradigm over that time.

I get that the sorts of relationships I see are not often seen, and that few people experience existence as I do, and that communication on subjects like this (as in a conceptual system present in one mind being duplicated in another mind), if it happens at all, is rare. And I do sometimes feel the need to try, to the best of my limited abilities.

My objective is not *TRUTH*.

My objectives are survival and freedom – mine and everyone else’s.
And the very notion of *TRUTH* seems to me (very clearly) to be a threat to both of those values I hold most dear.

[followed by]

Hi Andrei,

I think Page 28 of Popper points the way towards much of what I am saying, without explicitly providing clear reference to the evolutionary and systemic mechanisms that to me are clearly present, and without going so far as to question the notion of truth itself.

On pages 228-9 he makes the distinction between those who use probability to determine truth and fallibilists, who seek to determine error.
I have a certain sympathy for his position, in that one can gain far greater confidence that something is false, than one can gain that it is correct. And there remain uncertainties at many levels of distinction, measurement, understanding and right on down to quantum uncertainty.

What is important to me are two values – survival and freedom, my own and everyone else’s (and everyone else gets included because they are critical to my own, considered in the longest possible terms – thousands, billions of years).

Even the laws of thermodynamics can only have “not yet falsified” levels of confidence, and the fact that they have withstood billions of tests imbues them with a level of confidence that would require strong evidence to warrant any challenge.

In terms of survival and freedom, I am looking for what works, in practice – not any sort of mythic purity.

I have some idea of how complex and messy the world can be, we can be.
Not only is there Heisenberg uncertainty, but all the uncertainty of chaotic, fractal, and complex systems, and the irrational numbers and human neural networks and biochemistry etc. In dealing with irrational numbers, computation cannot deliver absolute certainty, only successively better approximations. An irrational number’s expansion may be computed indefinitely without ever terminating or repeating – that is a defining characteristic. How often do you see pi and e in equations – to name just a couple.
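The point about successive approximations can be made concrete with the classic Nilakantha series for pi, sketched here in Python (the stopping points are arbitrary): each extra term tightens the estimate, but no finite computation ever delivers pi exactly.

```python
# A sketch of "successively better approximations, never the number itself":
# the Nilakantha series for pi. Every additional term shrinks the error,
# but no finite number of terms ever produces pi exactly.
import math

def pi_approx(terms):
    """Sum the first `terms` terms of the Nilakantha series for pi."""
    total = 3.0
    sign = 1
    for n in range(2, 2 + 2 * terms, 2):
        total += sign * 4.0 / (n * (n + 1) * (n + 2))
        sign = -sign
    return total

# Watch the approximation tighten as more terms are added.
for k in (1, 10, 100, 1000):
    print(k, pi_approx(k), abs(math.pi - pi_approx(k)))
```

The error after any finite number of terms is bounded but never zero – exactly the sense in which computation can only close in on an irrational number.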

So in aiming to survive, and to maximise whatever approximations to liberty are available to me, then I must select heuristics that work effectively in the contexts I find myself in.
And I find myself in profoundly complex contexts – cosmological, geological, biological, economic, cultural, philosophic, conceptual, computational, strategic – games upon games, games within games. I have identified some 20 levels of complex systems in action that seem to be currently relevant to our survival.

There are several profound fronts that must be approached simultaneously.
Awareness of the traps of *TRUTH* is one.
Awareness of the traps of nihilism and relativism is another.
Awareness of our individual creativity and the responsibility that comes with that.

The failure to appreciate the role of cooperation in evolution, with the resulting myopic focus on markets and competition.

All fallibilist assertions in a sense, yet all also pointing deeply to a different sort of reality.

It is not that I see probability pointing to truth.

What I see is probability pointing to a reality that is fundamentally probabilistic – which is something profoundly more complex than simply the idea of uncertainty in relation to truth, but rather inverts the entire premise, and points to confidence in falsification of the very idea of truth in any sort of absolute sense in respect of reality, leaving only the heuristic sense, of what seems to work in particular contexts.

It seems in a very real sense that evolution has constructed us in this fashion.
We are each the instantiation of some variation on themes of what has worked in history, at least well enough to survive to date.

What right, what hubris, have we to expect anything more?

Like Popper I am interested in relevant heuristics – things that work, in practice, in useful times.

In a sense you can say that I am making the strong claim that the Tarskian sense of True is not applicable to reality, however useful it is in the design of formal languages and models that deliver many of our best approximations to understanding whatever reality is.
And I get that is an idea that will be difficult for many to get any grasp of for long.

In a sense, one could say that I promote a meta truth, which goes something like, it seems that reality does not allow itself to be defined absolutely, and requires some sort of balance between order and chaos at all levels (both physical and abstract), which seems to demand of us successive levels of both humility and acceptance of diversity in ways that may not be at all comfortable. Some approximation to this approach seems to be required to deliver reasonable probabilities of both survival and freedom.

While I understand and appreciate the distinction that Popper builds between verisimilitude and probability, I use probability across all domains and all distinctions, such that the very notion of truth becomes dispersed and unattainable past certain limits, and outside certain contexts, and no amount of computational ability can escape those boundaries.

I agree with the difference Popper distinguishes (within his assumption sets), and add another, which fundamentally undermines some of his implicit assumptions.

Probability, confidence, is an aspect of every dimension of understanding, in my world (including this one – I’m quite confident of it).

[followed by]

Hi Andrei,

Point 1 – Evolutionary approach – not sure that many of the aspects I use were available to Popper at the time, and he was in the general systemic space, so yes – some sort of approximation.

Point 2 – Why is it easier to prove something false than to prove it true? Simple. Suppose there is a very large population x that one can sample, containing a much smaller subset y of counterexamples to some general conjecture about x. Finding a single member of y falsifies the conjecture after sampling only part of x, whereas to establish that no counterexample exists one would have to sample the entire population of x – y might contain only one individual, and it might be the last one you sample.
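That asymmetry is easy to demonstrate in a few lines of Python (the population and the property tested are purely illustrative): falsification can stop at the first counterexample, while verification can never stop early.

```python
# A sketch of the asymmetry between falsifying and verifying a universal
# conjecture: one counterexample settles it after a partial search, but
# confirmation requires examining every member of the population.

def falsify(population, conjecture):
    """Return (verdict, items_examined). Stops at the first counterexample."""
    examined = 0
    for item in population:
        examined += 1
        if not conjecture(item):
            return False, examined     # a single failure settles it
    return True, examined              # only exhaustion can verify

# Conjecture: "every number below 1,000,000 is less than 500".
verdict, examined = falsify(range(1_000_000), lambda n: n < 500)
print(verdict, examined)   # False after examining just 501 of a million items

# Conjecture: "every number below 1,000 is non-negative" -- true,
# but confirming it costs a full sweep of all 1,000 items.
verdict, examined = falsify(range(1_000), lambda n: n >= 0)
print(verdict, examined)   # True, 1000
```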

Point 3 – I mean Freedom in both senses, freedom of will and freedom of action, in as much as either may be approximated, acknowledging the impact of “influence” (in the probabilistic sense, as distinct from strict cause in the binary sense) up and down the many levels of our being. I do not believe it is possible to be entirely free of influence, and I do believe it is possible to be influential to a very high degree, to the degree that one identifies and allows for all the levels and types of influence present – or perhaps more correctly to the degree that one approximates such allowances.

Point 4 – verisimilitude – what is it closer to? That is the question.
Here is where Popper and I seem to part ways.
The idea of Truth that Popper seems to hold in this context is some perfect correlation to the state of reality.
The evidence we seem to have from experiment seems to indicate that at the quantum level reality does not allow such knowledge, ever. It seems to actively prohibit it.
Thus the idea of being closer to something I agree with. But the something is not any sort of absolute knowledge of the state of reality, but the best possible approximation that reality allows. It has a fundamental fuzziness to it. It doesn’t fit well with the classical notion of *Truth* as being some sort of singular perfect “thing”.
What reality seems to deliver to us is fundamental uncertainty, within certain limits. When we aggregate those units over time and space they populate the probability distributions in ways that give us great confidence about their behaviour over those larger aggregates. The smallest unit of time a human can perceive is some 10^40 of those fundamental units – so on the sorts of scales that humans can natively perceive, many things can be very predictable indeed – those probability distributions are effectively solids at that time scale. So computers and engineering work in practice (at least to the degrees that they do). Hence to our native perceptions of time and space, classical notions of *Truth* are a very good approximation to something – but not so much at the finer scales.
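The “some 10^40” figure above is back-of-envelope arithmetic, and can be checked directly – taking the Planck time as the fundamental unit and roughly 10^-2 s as the fastest humanly perceivable interval, both of which are my assumptions for this sketch:

```python
# Back-of-envelope check of the "~10^40 fundamental units" figure:
# ratio of the shortest humanly perceivable interval to the Planck time.

PLANCK_TIME = 5.39e-44      # seconds (approximate CODATA value)
HUMAN_PERCEPTION = 1e-2     # seconds, rough fastest perceivable interval

ratio = HUMAN_PERCEPTION / PLANCK_TIME
print(f"{ratio:.2e} Planck times per perceivable moment")
```

The result comes out near 10^41, within an order of magnitude of the figure quoted above – either way, the aggregation argument stands: over that many fundamental units, probability distributions are effectively solid.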

I agree with Popper in the sense that the idea of *Truth* provided a useful tool, a useful approximation to something, in the conditions of our past.

But the conditions of our present are changing on double exponentials.

Many of the heuristics that worked in our past no longer work as we cross critical thresholds.

The idea of *Truth* is one that is failing in critical areas.
The idea of markets as useful measures of value is another, one that fails in the presence of fully automated production.

We are in times of profound change, in every dimension; physical and systemic (intellectual).

Some of the ideas that served our ancestors well now pose existential level risk to us – *Truth* and *market values* are two such.

Point 5 – A game may be generally thought of as any level of interaction between agents within certain rule sets that may involve differential utility or reward in some dimension. The utilities of strategies are typically some complex function of the rule sets present. The sets of possible strategies appear to be infinite, though typically one only encounters low order instances.

Point 6:
a) probabilities. I have stated repeatedly that all aspects of my understanding are probability-based – everything. Any aspect you can conceive of (and perhaps some you haven’t yet conceived of the possibility of). If you can conceive of any level of measure of anything, any metric at all, then in my understanding it will have a probabilistic instantiation (confidence limits on all dimensions of measure).
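As a minimal illustration of what “confidence limits on all dimensions of measure” can look like in the simplest case, here is a sketch assuming a Wald (normal-approximation) interval on an observed proportion – the function name and the 95% z-value are my choices, not anything from the text:

```python
import math

def estimate_with_confidence(successes, trials, z=1.96):
    """Point estimate of a probability plus an approximate 95%
    normal-approximation (Wald) confidence interval.
    A sketch only; other interval constructions exist."""
    p = successes / trials
    half_width = z * math.sqrt(p * (1 - p) / trials)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# 47 successes in 100 trials: the estimate carries its uncertainty with it.
p, low, high = estimate_with_confidence(47, 100)
print(f"estimate {p:.2f}, ~95% CI ({low:.3f}, {high:.3f})")
```

The point of the sketch is simply that every measure comes as a value *plus* bounds, never as a bare certainty.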

b) the North Star analogy of *Truth*. I actually like the north star analogy of truth. Classically people thought of Polaris as a fixed star, a constant pointer to north. Now we know that it is moving, and that the light we see from it today tells us where it was 433 years ago, not where it is now. Rather than being a single point, it is a multiple star system, with one star over 5 times the mass of our sun, and another half as big again as our sun. But to the naked eye, it is just a point of light.
So the idea that Polaris is any sort of fixed ideal is entirely mythic, unreal; yet it is a useful approximation to something as a practical heuristic.
In exactly that sense I can agree that the classical notion of *Truth* was a useful approximation to something for practical purposes of the time, but that “the times they are a-changin’” – exponentially – and we need to update our understandings accordingly if we are to have useful heuristics at the boundary conditions we are now exploring.
Just like thinking of the Pole Star as fixed was a useful tool, so too the notion of *Truth* was a useful tool, but now we have great confidence that it cannot be that simple – it is actually far more complex, if one is pushing that boundary.
If all you want to do is go sailing – fine – it works as a useful heuristic (an approximation to something that is useful in practice in that context).
If however, you are interested in exploring the boundaries of understanding, the boundaries of intelligence, the limits on survival of complex systems; then something else is required.

Points 7 & 8 – You actually need to hold on to the idea, which I attempt to state explicitly in every piece that I write, that all of my statements are probabilistic, in every dimension of measure.
In that sense, I am far more aligned with Popper than with most other philosophers.

And I can see how what I am saying is almost impossible to interpret from within a classical box. One has to be willing and able to step beyond the implicit constructs, to go beyond the nine dots to draw the 4 lines; but the dots of *Truth* are such an implicit aspect of culture that they form an invisible boundary – they appear as the matrix of being, rather than as a proximal construct.

[followed by]

Hi Andrei,

As I said, it is very difficult, almost impossible, to break out of the box of *Truth*.
One needs experience of reality.
One must be prepared to give weight to that experience over one’s most cherished dogma (logic/interpretive schema).
It is not easy.

Someone like me, who has done that, is left with probabilities (uncertainties) on all things – all observations, thoughts, conjectures (having some understanding of the processes by which they are generated) – and with some understanding and experience of the world of the very tiny.
It is an uncomfortable journey, having all certainty removed, being left with profound uncertainty; yet it seems to offer greater security in the long run than the false hope given by the delusion of *truth*.

Reality doesn’t seem to obey philosophy’s rules in all cases.
Dan Dennett seems to be wrong, much as I like and respect Dan.

Deviant – yes – that I am.

Every new idea must be, by definition.
Does that necessarily make the notion less useful, or a less accurate approximation to our reality?
No – doesn’t mean that – necessarily.
I could be wrong. I have to admit of that probability. But in this case, I have tested it so deeply, and it seems to work, seems to pass all tests, even the tests of Wolfram’s logical systems.

Seems to me to be a close approximation to the best we can do.

Seems to require of us a little humility, a little respect, a lot of acceptance of diversity (in every dimension we can distinguish).

[followed by In a related thread]

Kind of agree and kind of disagree,
The degree of correlation is the issue.
Take the term snake as an example. For most people “snake” is a quite low resolution but very useful approximation to reality. What it means is something hard to see that can be dangerous (if venomous or very large). For most people “snake” doesn’t contain much information about evolutionary history, embryological development, context sensitive behavioural strategies, biochemistry, anatomy etc; and even for those for whom it does, the accuracy of the correlation must necessarily be low. Our brains simply do not have the computational grunt to handle the level of detail involved.

Of course evolution has to select for Turing machines that on average over time compute solutions to real world problems within the time and energy available – so there have to be sets of heuristic shortcuts embodied in those systems at all levels. That is what it means to be a human being in a very real sense.

So there must be degrees of usefulness in our understandings, but the idea that those degrees could ever form a 1:1 correspondence with reality seems highly improbable to me; useful heuristic in a particular context – yes certainly that. But more than that – no, that seems very unlikely.

At the deeper level, there are indications that at depth it is simply not possible to say with certainty to the last degree, what is. Even if we could actually measure something with total accuracy (which seems unlikely) by the time we assembled that information into something our consciousness could perceive, it wouldn’t be that any more, it would be in some other state in many subtle aspects. So I have several levels of confidence, from both logic and experiment, that 1:1 accuracy is unlikely, ever.

And it seems that beyond that, reality seems to have limits of knowability that Basudeba has outlined clearly in previous posts. And in those aspects I agree with him.

And we agree on substantial parts – clearly.

For all those reasons, I prefer something that embodies the sense of “useful approximation” rather than hinting at anything more substantial than that. For me, Heuristic is a term that “fits the bill”.

[followed by]

Basudeba Mishra
For me it is a simple numbers game.
The smallest thing that our eyes can resolve, the smallest grain of dust, contains more atoms than any human brain is capable of consciously apprehending, yet we resolve it as a single bit of a more complex picture.

I can look at a TV screen and see thousands of little coloured dots flicking on and off, but I cannot both do that and see the picture they form. I can adopt one or other mode of interpretation (or other possible modes, but I won’t go into them here – 2 will do for illustrative purposes). Which one is correct?

Which is more useful?
Depends on context.
Am I looking at the screen for recreation, or as a technician looking for a pixel fault in a display?

Interpretation, utility, meaning, depends on context.
Evolution is ultimately the context of survival.
Will this experiment with big brained hominids work, or will they fall foul of sets of heuristics that worked for their ancestors, but are no longer appropriate in their exponentially changing reality?

The idea of *Truth* seems to me to be such a notion.
A useful heuristic that worked in many social contexts until quite recently; but now, with quantum mechanics chipping away at one side, with AI and virtual reality approaching from a completely different paradigm space, and with complexity theory, maximal computational complexity, and notions like density matrices, there seem to be a great many limits on our ability to “know” anything at any level beyond some sort of “useful approximation” relevant to the context.

I’m all for science, logic, computation, abstraction; but it has to come with a sort of humility that accepts a sort of fundamental ignorance that no AGI (Artificial General Intelligence) is going to be able to penetrate, for levels of non-computable complexity do in fact seem to exist as real and fundamental aspects of being. Even such a simple thing as an irrational number like Pi – whose decimal expansion never terminates or repeats – may be approximated to any degree of accuracy required, but never stated totally.

[followed by]

Hi Basudeba Mishra,
I don’t believe I am mixing the two ideas of number, and I am very familiar with both.
If I say I see one speck of dust, what does that mean?
It is a reasonably accurate articulation of the fact that I managed to resolve some speck of something very near the limits of the resolution of my eyes.
That is a piece of information. 1 bit of information in at least two different senses.
It says something about something, but how much?

My scientific training is telling me that this simple dust mote will contain some 10^10 molecules at a minimum, and each of those molecules will contain atoms, and each of them quarks, gluons, photons etc. The number of values involved in assigning momenta to all of those constituent parts would be more than my brain could handle in a decade, let alone all of the other properties involved.

Another example I often use. If we could somehow take a snapshot of all the atoms in our bodies, and blow it up to a size we can see, and we had been looking at it since the universe began some 14 billion years ago, at 100 atoms per second, we would still have seen only about one part in a hundred million of them.
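A quick back-of-envelope check of that snapshot example, assuming a round figure of about 7×10^27 atoms in a human body (an assumed figure, not one from the text):

```python
# Back-of-envelope check of the atom-snapshot example.
# atoms_in_body (~7e27) is an assumed round figure.
SECONDS_PER_YEAR = 3.156e7
atoms_in_body = 7e27
viewing_time_s = 14e9 * SECONDS_PER_YEAR   # age of the universe in seconds
atoms_seen = 100 * viewing_time_s          # at 100 atoms per second
fraction = atoms_seen / atoms_in_body
print(f"fraction seen: {fraction:.1e}")    # on the order of one part in 10^8
```

Even at a hundred atoms a second for the whole age of the universe, the viewer has barely started.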

If we could somehow take a movie of a single active enzyme site within one of our cells, for just one second of real time, then slow it down enough so that we could actually see the movement of the water molecules (and not simply blurs), then it would take some 30,000 years to watch that one second of video.
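The implied slow-down factor is easy to check: stretching one second of real time into some 30,000 years of viewing is a factor of about 10^12, which is at least consistent with water-molecule motions happening on roughly the picosecond scale (the picosecond framing is my gloss, not the text’s):

```python
# Implied slow-down factor for the enzyme-site movie example:
# one second of real time stretched into ~30,000 years of viewing.
SECONDS_PER_YEAR = 3.156e7
slowdown = 30_000 * SECONDS_PER_YEAR / 1.0   # viewing seconds per real second
print(f"slow-down factor: {slowdown:.1e}")   # about 1e12
```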

We are complex.
Hugely, vastly complex.

Yes our brains are amazing, and they are the result of many levels of complexity, and cannot possibly model such things in anything like real time.

And within certain scales of space and time, they seem to be very effective heuristic machines that let us accomplish amazing things.
I have had a strong interest in neural function for about 50 years. I have learned a little bit in that time. I make no claim to full knowledge, but I have some very useful approximations to things that give me a reasonably good “broad brush” sketch of how the many levels of those hardware and software systems work to deliver the consciousness I experience – and it is very much a cartoonish sort of a sketch, not a high resolution electron micrograph sort of image.

So for me, to know something means simply to have some sort of approximation that is useful in context.

That seems to be how evolution works. All that evolution requires to work, in a very real sense.

Why would we have the hubris to expect anything better than that – ever?

[followed by and in a related thread]

Hi Frank,
I’m not so sure that there are absolutes in respect of reality.

Just think about the equations we use to describe things: most of them have Pi or “e” in them somewhere, both of which are irrational numbers – ie numbers whose decimal expansions continue forever without terminating or repeating – ie only ever available as approximations, to a certain degree of utility.

I have the first billion digits of pi on one of my NAS boxes, and have written some programs to do statistical analysis on them, and am happy that the sequence is a reasonable approximation to a random sequence (though of course it is not truly random – every digit is fully determined).
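That kind of analysis can be sketched in a few lines. The following is only a sketch of the sort of check described – the Machin-formula computation of the digits and the chi-square threshold of 16.92 (the 95% critical value for 9 degrees of freedom) are my choices, not necessarily the author’s actual programs:

```python
from collections import Counter

def arctan_inv(x, prec):
    """Integer arctan(1/x), scaled by 10**prec (Gregory series)."""
    term = 10 ** prec // x
    total, x2, n, sign = term, x * x, 3, -1
    while term:
        term //= x2
        total += sign * (term // n)
        sign, n = -sign, n + 2
    return total

def pi_digits(ndigits):
    """First ndigits decimal digits of pi (including the leading 3),
    via Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    prec = ndigits + 10                      # guard digits for truncation error
    scaled = 16 * arctan_inv(5, prec) - 4 * arctan_inv(239, prec)
    return str(scaled)[:ndigits]

# Chi-square test of digit frequencies against a uniform distribution.
digits = pi_digits(1000)
counts = Counter(digits)
expected = len(digits) / 10
chi2 = sum((counts[str(d)] - expected) ** 2 / expected for d in range(10))
print(f"chi-square statistic: {chi2:.2f}")
# A value below 16.92 is consistent with uniform digit frequencies at 95%.
```

On the first thousand digits the statistic comes out well under the critical value – the digits look random by this test even though the sequence is completely determined, which is exactly the point being made.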

When you actually contemplate that the set of irrational numbers (unknowable numbers in a sense – the sense of knowing them with 100% accuracy) is a greater infinity than the set of rational numbers (uncountable, where the rationals are merely countable); and that reality seems to be fundamentally reliant on such irrational numbers; it is really hard to hang on to any notion of “absolutes”. One does in fact seem to be forced into accepting useful approximations – and without doubt many of those approximations are very useful indeed – they give us these computers we are all using, and the GPS network, etc – very useful approximations, but *ABSOLUTES* – no – I think not.

[followed by]

Hi Frank,

LOL.
I think if you lived in Kaikoura we might be close friends.

I don’t fit well into any classical set of definitions available to philosophy. I seem to be an eclectic mix of the many paradigms and systems that I have encountered, making use of all the many levels and types of processing natively available to human hardware.

I think we are very similar, yet I find this distinction.

Consider, that if reality itself forbids simultaneous absolute information about both momentum and position, then one can make a reasonable case that reality forbids “ABSOLUTE TRUTH” even in principle.
And all we can do is make a probabilistic case for that assertion, and it seems to me that such a probabilistic case has in fact been made – beyond all reasonable doubt.

And at the same time, we can certainly get very useful approximations to many sorts of things, approximations that are good to the level of one part in 10^20 or better, so for all *practical purposes* True. Computers wouldn’t work unless that were so.

I’m all for determining within reasonable bounds what the probabilities present seem to be in particular contexts, and therefore what confidence limits we can apply to those contexts.
And I am very conscious of the many classes of systems that do not allow of any sort of computation or prediction (chaotic or seriously complex systems).

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) - see www.tedhowardnz.com/money
This entry was posted in Philosophy, understanding.

Comment and critique welcome
