**Scott Brizel’s very good distinction between faith and belief, and skepticism**

That captures a lot, Scott, but it also misses some things.

It seems that there is an infinite set of possible logics, from the binary to the probabilistic.

It seems that all of the many levels of our perceptual and sense making systems are the result of the differential survival of variants across a spectrum of contexts.

Have you considered the possibility that all “Truths” are but heuristics – things useful but not necessarily universal?

There is no requirement for “reality” to obey any particular set of logics. And logic and mathematics are the best abstract modeling tools that we have. That doesn’t mean that any model we create is necessarily accurate in any particular context.

And one precondition of understanding this perspective is getting that our experiential reality is conditioned by the heuristics of our subconscious systems. Evolution doesn’t need to be perfect; it only requires that what survives is better at surviving in the contexts experienced, and that is a vastly different thing – highly dimensional.

[**followed by**]

Hi Scott,

I like Eric Weinstein’s 2 rules for intelligent discussion:

1. If a very smart person is saying something obvious, then he should be assumed to be saying something subtle until proven otherwise.

2. An intelligent person who is saying something wrong should be assumed to be saying something counter-intuitive, until proven otherwise.

Rachel Garden published an interesting paper in the International Journal of Theoretical Physics, Vol. 35, No. 5 (1996), on Logic, States and Quantum Probabilities.

That paper seems to me to point to a logical system different from classical logic. It seems possible that there may be an infinite class of such logics. And it also seems much deeper than that – something of an eternal tension between order and chaos at all levels.
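One simple way to make the idea of a family of logics concrete (this is my own generic illustration using t-norms from many-valued logic, not anything drawn from Garden's paper) is to note that once truth values are allowed to range over [0, 1] rather than {0, 1}, there are many inequivalent ways to define “and”, each agreeing with classical conjunction at the endpoints but diverging in between:

```python
# Sketch: three different conjunction operators (t-norms) over [0, 1].
# All coincide with classical "and" on {0, 1}, yet define genuinely
# different logics on intermediate truth values.

def and_classical(a, b):
    # Gödel / minimum t-norm; on {0, 1} this is just Boolean "and".
    return min(a, b)

def and_lukasiewicz(a, b):
    # Łukasiewicz t-norm: max(0, a + b - 1).
    return max(0.0, a + b - 1.0)

def and_product(a, b):
    # Product t-norm: matches the probability of independent events.
    return a * b

# All three agree on classical truth values...
for a in (0, 1):
    for b in (0, 1):
        assert and_classical(a, b) == and_lukasiewicz(a, b) == and_product(a, b)

# ...but diverge on intermediate values: min gives 0.7,
# Łukasiewicz roughly 0.4, product roughly 0.49.
print(and_classical(0.7, 0.7), and_lukasiewicz(0.7, 0.7), and_product(0.7, 0.7))
```

Each choice of t-norm yields a different consequence relation, which is one precise sense in which “an infinite class of logics” is not a loose metaphor.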

I was attempting to point out that our “tools of reasoning” seem to be deeply complex, and not necessarily logical in the classical sense. And that is not something which is easy to explain when no vocabulary exists to explain it, as it is by definition outside of the “box”.

[**followed by**]

Hi Scott,

I think you are very intelligent.

I thought there was a lot of merit in what you wrote.

I was trying to point to something both subtle and difficult about the influence of the heuristics embodied in the many levels of our being on the functioning of the systems that are us, and the very subtle ways that such things influence what we think are “reasonable” or “useful” or “probable” or “evidence” or “proof” or even “logical”.

It isn’t a matter of “short sighted”.

It seems very probable to me that the simplest level of human abstract thought in language requires 16 levels of cooperative complex adaptive systems, and each level has many instances of complex adaptive systems, each tuned by evolution through a combination of randomness and their particular life histories.

It seems very likely that such a process leads to some contexts where those systems deliver very reliable outputs, and some where the reliability is much lower.

By definition, those systems are much more complex than we can possibly deal with consciously in detail, so we require simplifying models to get any sort of a “sketch” of an understanding of what they (we) are, and of the sorts of errors and biases they might deliver to our perceptual reality (and thereby to any set of abstractions we may derive from that reality, to any level one wishes to abstract).

And it is a really context-sensitive set of “understandings” and “abstractions”. The analogy I often use is the idea from our history that the earth is flat and we are at the center of the universe. If the farthest one travels is 200 miles, and one is mostly building things out of lumber using a ruler (which is mostly what our ancestors did for thousands of years), then the idea that the earth is flat works. It is a useful heuristic. Carpenters today still use it in practice, as do map makers of cities.

The more accurately one measures things, or the further one travels, the greater the difference.

By the time one is sailing around the world, one needs the idea that the earth is roundish to have a reasonable probability of getting somewhere near where one intends to go.

By the time we get to GPS satellites for navigation, then we need to be thinking in terms of curved space-time and quantum mechanical interactions between particles to make the GPS technology work.

That is an example of three successive levels of approximation, each workable and accurate within the contexts and limits of measurement of their respective frames.

I strongly suspect that process is capable of infinite extension and recursion.

The understanding I have of the evolutionary process is a systemic one, that “sees” that all new levels of complexity result from the instantiation of new levels of cooperation, and that all new levels of cooperation require attendant sets of strategies to detect and mitigate invasion by cheating strategies (and that in itself can become a strategic ecosystem at every level).

So there is a very strong sense in which we seem to be very strongly aligned on the need for skepticism.

And, in the context of having explored several different types of logics, I am also taking that notion, and recursively applying it to the notion that one should not put undue reliance upon any particular type of logic in any particular context if it seems that context might have gone beyond the limits of its tested utility. And certainly, it is a useful approach in some contexts, just as the notion of the earth being flat is a useful approach in some contexts.

And one needs to also be alert to the possibility that we may have “traveled” a sufficient “distance” that the old “map making tools” that served us very well may no longer work as reliably as they did, due to “curvature” of the “systemic space” we inhabit (a bit like the way the mathematical notion of torsion applied in a curved space-time inherently breaks symmetry).