I have no problem with the postmodern rejection of *TRUTH*.
Evolution seems to work with probabilities and heuristics (things that are near enough to something to be useful in practice).
It seems clear that the classical notion of Absolute TRUTH has been falsified beyond any reasonable doubt.
So what is left?
Things that work well enough to be useful.
And it is the aspect of being useful that is important, and that many who go under the postmodern label seem to reject.
Traditions are here because in some sense they worked in the past.
Does that mean they will necessarily work in the future?
Does it mean they are likely to be useful in the future?
In the absence of evidence to the contrary – yes.
But when you have evidence to the contrary – then it is time to reconsider.
Contexts can change.
Heuristics that once worked can fail to work, because of some change that is important at some level.
I’m kind of with Jordan, that we need a level of respect for the deep lessons of the past, at the same time as we need to be open to the possibilities of the future.
We need both.
Nihilism is not an option with survival potential.
Personally – I like the idea of surviving.
[followed by who wins?]
If you are looking at the deepest systemic levels, and on the longest time frames you can imagine, then it becomes clear beyond any shadow of reasonable doubt that our survival as individuals is dependent upon non-naive cooperation at all levels, and comes with other things like social and ecological responsibility.
These necessary boundaries enhance the possibilities available to freedom, even as they seem to constrain it at lower levels. It is weird how that happens.
Accept the necessary responsibilities, and freedom happens.
Try and claim freedoms that are not systemically available and chaos ensues.
Finding just where those boundaries are, in the chaos of conflicting incentives from culture, economics, and various forms of dogma, is not a trivial exercise.
The classical notion of *TRUTH* seems to be a conceptual model with a one-to-one correspondence to reality.
The deepest problem with that notion is that Heisenberg uncertainty seems to be telling us that it is impossible to know both members of certain fundamental pairs of properties of reality past a certain limit. Thus one cannot know both position and momentum of anything *exactly*. That idea has passed many tests in reality, and thus seems to falsify the classical notion of *Truth* (beyond any shadow of reasonable doubt).
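The limit referred to above is usually written as the Heisenberg uncertainty relation, where the two members of the conjugate pair are position and momentum:

```latex
\Delta x \, \Delta p \ \ge \ \frac{\hbar}{2}
```

Here $\Delta x$ and $\Delta p$ are the standard deviations of position and momentum measurements, and $\hbar$ is the reduced Planck constant. However small, the product of the two uncertainties can never reach zero.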
What one seems to be left with is contextually relevant confidence (heuristics), things that work reliably enough to be useful.
In a very real sense, that is how evolution seems to have assembled the 20 or so levels of complex cooperative systems that seem to be present in all of us (writing as someone with over 50 years interest in all aspects of biology, biochemistry, systems, evolution and complexity – including the cultural).
It is thus clear to me that none of us experience reality directly, we only each ever get to experience the slightly predictive model of reality that our brains subconsciously assemble for us. Many of the objects of distinction present in those models are implicitly defined by the cultural and conceptual entities we have encountered in our existence to date, while others are the result of deep genetic influences, and yet others the result of our individual creative aspects.
We are deeply complex.
That deep complexity has led to a lot of errors in attempts to create neat conceptual models (*TRUTHS*) of what we are.
Accept that we are profoundly complex systems, with no neat or simple answers.
If you want a good introduction to that – try Wolfram’s “A new kind of Science”.
One of the best introductions I know of into the nature of infinite complexity comes from Zen, and roughly translates as “for the master, on a path worth taking, for every step on the path, the path grows two steps longer”.
The more deeply one considers that, the more interesting it gets.
I’ve been playing with it for decades.
I was not implying that all interpretations are equal, or equally uncertain.
I am stating that all interpretations of reality will contain uncertainties.
Agree that we need to use the best methods available to us to arrive at the interpretation that delivers the lowest uncertainty in the context.
And there can be all sorts of modifiers to that, like time pressure, etc.
So it can be a very complex multivariate probability landscape, where model fidelity is traded against things like energy cost, time required, ease of social agreement, etc at both personal and group levels.
We need to accept that all individuals have the interpretations that they do. That does not mean that any of us have to give all interpretations equal weighting, but we do need to show some respect, as many of the interpretations in use have dimensions to them that few are aware of. That aspect is something that Jordan highlights exceptionally well.
I’ll try and keep this smaller than a book.
What I see is a great deal of complexity, many levels of systems all interacting, all with their own sets of strategies, feedbacks and influences.
To me many of the postmodernists lack a sufficient depth of understanding of systems, particularly evolution and the structure and function of the human brain.
Popper proposed the idea that knowledge/intelligence might be something about comparing expectations to information and modifying actions accordingly. That seems to be a big part of how life works at many different levels, from the molecular on up.
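Popper's predict-compare-adjust idea can be sketched as a minimal feedback loop. This is an illustrative toy, not Popper's own formulation; the function name and learning rate are invented for the example:

```python
def popper_loop(observations, learning_rate=0.5):
    """Minimal predict-compare-adjust loop: hold an expectation,
    compare it with each new piece of information, and modify
    the expectation by a fraction of the error."""
    expectation = 0.0
    for observed in observations:
        error = observed - expectation        # compare expectation to information
        expectation += learning_rate * error  # modify accordingly
    return expectation

# Fed a stable regularity, the expectation converges toward it.
estimate = popper_loop([10.0] * 20)
print(round(estimate, 3))  # → 10.0
```

The same delta-rule shape shows up at many levels of living systems, from receptor adaptation to learned behaviour, which is part of why the Popperian framing is attractive.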
As the classical world of mythology encountered the classical world of science (both views based in the same true/false type of simple logic) something happened.
There is a real sense in which we must all as individuals go through a similar sort of process.
We need to start with the simplest of possible distinctions and logics.
Children tend to start with simple distinctions, like heavy/light, hot/cold, light/dark, etc, then build to more complex.
Similarly we must all start from simple binary distinctions at all levels, like true/false, right/wrong.
There isn’t really any other alternative.
There are many instances of such simple systems in reality, but not all.
Look at cosmology for an exemplar: the simplest element is hydrogen.
Most of the matter in the universe is hydrogen, most of it in its simplest form, but some of it in the more complex forms with 1 or 2 extra neutrons (deuterium and tritium).
It seems clear that initially it was almost all hydrogen, with a little helium and traces of lithium. Then stellar nucleosynthesis got underway, and we got all the other elements we see.
The same sort of thing seems to happen at every level of complexity, first it is mostly the simplest, then instances of greater complexity at that level, then the emergence of the next level.
As human beings we seem to embody about 20 levels of that recursive process.
In terms of understanding, many people are still at the simpler ends of the spectrum of whatever levels of understanding are present.
And to be clear, even the simplest person is complex beyond the ability of any other person to understand in detail.
All any of us can do is essentially make line sketches of ourselves and others.
So in terms of where this all sits in the spectrum of systems and processes present in our society, it seems that we are all fundamentally reliant on cooperative systems at many different levels, and we are all capable of both competitive and cooperative responses to any situation, and the probabilities are largely determined by context.
In the sense of each of us becoming profoundly aware of our cooperative reliance on each other, that seems to be largely a bottom up process.
In terms of the major existing social institutions, like markets, money, finance, politics, etc, they all seem to be reaching a point where the fundamental structures that made them work as well as they did are changing, and we need to develop new ways of doing the many very complex functions that those institutions and ideas once did for us.
So I am hardly a supporter of the “establishment” for its own sake, as I see the need for profound change in our systems.
At the same time, I also acknowledge the profound complexity present in those “establishment systems”, so it is not an option just to destroy them and start again, not many people would survive an approach like that (if any).
We need to develop replacement systems and test them alongside existing systems, which may create some tensions.
I hope this gives more of a flavour of my thinking.
Hi Graham McRae,
I find myself agreeing with aspects of both what you and Philip Clemence wrote.
It really is complex.
Our brains are the most complex things we know of in this universe.
So there is a very real sense in which we need to trust what those brains deliver, at least enough to investigate, rather than handing all of our trust over to any set of systems or dogma or conclusions – be they religious or scientific or logical or anything else.
Thus, like Philip, I retain quite a skepticism of scientific and logical claims, particularly when those claims have economic or political or philosophical implications. I usually like to go back to source papers, and review the source datasets in some cases, and run my own checks over the analytic and deductive processes used, if my intuitions give me cause to do so.
And I agree with you, that not all opinions are equal.
We must each develop our own sets of trust relationships across all domains we can distinguish.
While I tend to favour trusting the scientific community over other communities, I have seen many examples of science being captured for political, economic and dogmatic ends, so it is only a probabilistic thing.
Looking at risk mitigation strategies in the broadest possible strategic framework, there are two major sets of risks to freedom from tyranny – the tyranny of the majority and the tyrannies of minorities. The only generally effective strategy against those dual threats is for every individual to assume responsibility for the creation of their own trust networks, at every level. As Jordan Peterson says, we each have our own hero’s journey in a very real sense.
To the degree that we find individuals truthful in all they say (whether we agree with their truths or not) then we can establish a degree of trust in their words (independent of any trust we may have in the conceptual systems behind those words).
So it is a very complex, highly dimensional space of probabilities that we find ourselves in.
Being truthful lowers the dimensionality of the problem space.
Being able to detect untruthfulness increases the probability of utility from our conclusions.
Having good translation matrices between different domain spaces is a useful tool-set.
There are a great many different sets of assumptions out there in reality that different people use.
People can be truthful within their own domain space, and that can have utility for others, even if those others do not come from the same domain space, if there is a reasonably reliable translation matrix available.
When one accepts that as an operational conceptual space, then certain classes of problem that seem intractable from classical space do seem to resolve with useful probabilities.
Hi Graham McRae & Sebastian Bird,
I am not a strict determinist.
Strict determinism is not compatible with our current understanding of QM.
There does seem to be at the base of QM a demand for uncertainty.
Feynman famously used a "sum over histories" approach to deliver a mathematical solution to the "ping pong ball" example – not a deterministic but a probabilistic solution.
To me, it seems clear that the evidence does not support a hard determinist interpretation. Thus holding onto such a position is not a matter of evidence, but rather of dogma.
You were quite open about that, and for that I thank you.
Knowing that, I can create a translation matrix that allows communication to the degree that communication between such fundamentally divergent paradigms is possible.
In that sense, what Sebastian said seems very close to something (to me).
If I recall correctly, using QM first-principles calculations the computational complexity scales at roughly the 7th power of the number of bodies involved. Thus even if one converted all the matter in the observable universe into computronium, one couldn't do a first-principles numeric model of a human being without invoking simplifications.
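The scaling point can be made concrete with rough arithmetic. The N^7 exponent is the hedged figure from the paragraph above, and the particle count and operations ceiling below are illustrative round numbers, not precise physics:

```python
# Rough illustration of why polynomial scaling with a high exponent
# defeats brute force. Illustrative assumptions: ~1e28 particles in
# a human body, and a very generous ceiling of 1e120 elementary
# operations for a universe-sized computer running for cosmic time.

particles_in_human = 10**28
ops_ceiling = 10**120

cost = particles_in_human**7   # N^7 scaling from the text: 1e196 operations
print(cost > ops_ceiling)      # → True: the model is out of reach
```

Even shaving the exponent down by a couple of powers leaves the computation hopeless, which is why all practical models of complex systems are simplifications.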
Not all problems scale linearly with computational ability (in fact, in my world, none of the interesting ones do).
Some problems are really complex.
Some of those are really interesting.
One of the interesting problems happens when the games that one group plays change the structure of the board that most people are playing on (thus fundamentally altering the rule set). Quite a bit of that is happening right now, at many different levels.
Have you considered the issue that anything universally abundant has zero market value (if you doubt that consider air – arguably the single most important commodity for any human yet of zero market value in most contexts due to universal abundance).
Now consider fully automated processes.
Any fully automated process has zero marginal cost of production, and therefore the ability to deliver universal abundance.
Yet doing so removes profit and value.
Thus, in the presence of fully automated systems, market values are directly in opposition to the values of most individual humans.
Serious issue – approaching very rapidly.
Now consider the implications on existing social institutional structures.
Has issues certainly – life does.
Of available scenarios I have investigated – this seems to offer least existential risk and greatest degrees of freedom (across the spectrum).
I am much less concerned with what might be true, as what works in reality to optimise the probability of survival (mine and everyone else’s).
In an operational sense, that can mean using heuristics that are quite a long way from *TRUTH*, but are much easier to calculate, and return probabilities that are close enough to those produced by *TRUTH* as to be operationally indistinguishable.
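The trade-off described above, a cheap heuristic whose answers are operationally indistinguishable from the exact calculation, can be shown with a toy example. The "alpha max plus beta min" distance trick below is a real classic from signal processing; the function names and the 5% tolerance are chosen for illustration:

```python
import math

def exact_distance(x, y):
    """The *TRUE* Euclidean distance, requiring a square root."""
    return math.hypot(x, y)

def heuristic_distance(x, y):
    """Alpha-max-plus-beta-min approximation: no square root,
    far cheaper to compute, within a few percent of exact."""
    x, y = abs(x), abs(y)
    return max(x, y) * 0.96 + min(x, y) * 0.4

# For many operational purposes the two are interchangeable.
exact = exact_distance(3.0, 4.0)        # 5.0
approx = heuristic_distance(3.0, 4.0)   # 5.04
print(abs(approx - exact) / exact < 0.05)  # → True
```

Evolution appears to make exactly this kind of trade constantly: an answer that is slightly wrong but fast and cheap beats a perfect answer that arrives too late.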
That seems to be what evolution has done in us and our culture. It has embodied behavioural systems that are a functionally useful approximation to optimal, even though in a narrative sense they are far from accurate.
When you look deeply into the strategies of long term optimal outcomes in a cooperative environment then it looks very like the operational outcomes of Christian theology. It works, but for all the wrong reasons.
Evolution doesn’t care a rat’s ar*e about truth, only about survival – and survival usually has a least-cost aspect to it in terms of time and energy.
As to climate change as an exemplar, to me it is almost a trivial problem. With the double exponential on growth of computational ability and a 2 year doubling time on installed solar photovoltaic capacity, we are rapidly approaching the time when technical solutions to climate change will be simple to implement. If we stay with business-as-usual as of 2017 then it is a problem, but nothing in our society is static. Many of the key aspects are on exponential trajectories.
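The exponential-trajectory claim is just compound doubling. A sketch using the 2-year doubling time stated above (the 1000-fold target is an invented example figure, not a forecast):

```python
import math

def years_to_reach(start, target, doubling_time_years=2.0):
    """Years for an exponentially growing quantity to go from
    `start` to `target`, given a fixed doubling time."""
    return doubling_time_years * math.log2(target / start)

# A 1000-fold increase is about 10 doublings, so roughly 20 years.
print(round(years_to_reach(1.0, 1000.0), 1))  # → 19.9
```

That is the general character of exponential processes: the early stages look negligible, and then the last few doublings dominate everything.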
And there are many very real existential risks – highest among them right now is using markets and money to measure value in an age of fully automated systems. And there is a long list of others.
We are not short of interesting problems, nor are we ever likely to be.
A strong argument can be made that up until quite recently the power of markets to distribute decision making and risk, and to reward innovation, and to efficiently allocate scarce resources, was very real, very powerful, and had developed multiple levels of complexity.
But none of that changes the fact that markets deliver a scarcity based value measure, and cannot deliver a positive value for universal abundance.
In the distribution sense of markets and money that isn’t a serious issue, in the planning and money generation sense it is as serious as it gets.
It leads inevitably to the elimination of freedom for the majority, and the production of a tiny elite who control everything.
That isn’t stable or safe for anyone.
[followed by in another subthread]
Rejection of the classical notion of *Truth* in any sort of absolute sense, is sensible.
Simultaneously rejecting any sort of probability of utility or correspondence is not.
Understanding the many different sorts of complexity present is required.
Some things do approximate simplicity.
Some things are more complicated.
Some things are truly complex, and one must engage with them in an iterative dance.
Some things are truly chaotic and unpredictable, and must be avoided if survival is important to you.
Survival is important to me.
Nihilism is to be avoided – it is deeply dangerous.
The post modern tendencies to nihilism show profound ignorance of complexity, computation and systems more generally.
Such willful ignorance is a severe existential risk – on that Jordan and I agree.
The certainty that comes from over-simplification is an existential risk to all. Many in the postmodern set seem to display that.
One must be willing to challenge any *truth*, and one must be able to use evidence over dogma in making such assessments as to likely utility.
You are confusing two things.
Yes – there is reality, whatever that actually is.
It will obviously have whatever attributes it has when it has them.
That we do not disagree about.
That is not what is at issue.
What is at issue is the human perception of reality and the understanding of relationships derived therefrom.
If one looks purely at the physical, at particles, and follows the train of scientific evidence, one is taken to Heisenberg uncertainty, which seems to express a limit on how precisely one can know both position and momentum. This is a level of uncertainty that seems to be fundamental.
It is only one of many such sources of uncertainty.
If one enters into the study of human biology, of the structure and function of our sense organs, our neural systems, and the relationships of the many levels of very complex systems therein, then one becomes aware of many more profound levels of uncertainty and bias in the relationship between reality and our perception of it.
It now seems clear, beyond any shadow of reasonable doubt, that we have no direct perception of reality, but that our perception as conscious entities is of a subconsciously constructed model of reality that is slightly predictive in nature (between 15 and 200 ms depending on various factors).
What gets created in that model is partly a function of our genetics, partly a function of our culture and language, partly a function of our experiences of reality, and partly a function of our conscious and subconscious actions, choices, and creativity (and creativity often involves what some would consider error at some level).
Our understanding of reality is an abstraction at some level of this subconscious model.
Thus all of our understandings are at best a model of a model.
The idea of “TRUTH” is an expression of correspondence between the model and the thing it models.
The idea that we can achieve perfect correspondence is a simplistic one.
The more one starts to gain an appreciation of the levels of complexity actually present, and the sheer number of complex systems interacting, the more one must accept that all of our models are some low resolution approximation to something.
Thus, I am clear, beyond any shadow of reasonable doubt, that the very notion of “TRUTH” has been falsified, and all that is left is heuristics – useful approximations that are contextually relevant and sufficiently reliable.
That seems to be what reality allows us to have.
Any attempt to go beyond that seems to imply some combination of childish simplicity or ignorance or hubris.
All exist, in all of us.
Starting to notice where and when they express is part of the path to growth.
Responsible adults need to go past them, and accept uncertainty and the responsibility to use the predictive intelligence of their brains rather than follow any set of simplistic rules without thought.
And I can understand the reluctance to take on such a burden.
The security of our childish certainty doesn’t exist there.
We must learn to live with profound and perpetual uncertainty, profound responsibility for our individual choices and actions.
And when that is accepted, one can create degrees of confidence on the other side of it.
The greatest degrees of confidence possible seem to come from the integrity of the trust relationships one builds with other sapient entities, if one truly is committed to individual life and individual liberty, applied universally to all sapient life, to human and non-human, biological and non-biological.
Hi Philip Clemence,
I too am a skeptic.
In my personal world, I don’t do the classical notion of “TRUTH” – as being a hard, eternal, absolutely certain, 1:1 correspondence with reality.
And like all words in the English language, the word "truth" can have multiple interpretations, which can and does lead to a great many people talking straight past each other, particularly when two people are talking who each believe the word has only one meaning, and each has a different meaning in mind.
Yet in philosophy, many philosophers adopt the hard classical definition of truth, which is of something eternal and changeless (meanings 4-9 of true in the Oxford).
In terms of the use of the word “Truth” in respect of argument, it doesn’t apply to perception, but to understanding; to a state of mind that refers to the state of some aspect of reality or some abstract concept or set.
Leaving aside the abstract references to concepts that have no direct referent in reality, and considering only those aspects of human knowledge that have direct or relatively short indirect chains of referents to reality; then the classical notion of truth in terms of knowledge implies a one to one correspondence between the understanding in the mind of the person and the state of reality.
That is where my argument from the previous post started.
What is generally referred to as Heisenberg uncertainty seems very clearly to state that one cannot pin reality down with absolute certainty. Reality seems to contain fundamental uncertainty, and all knowledge of reality must therefore contain aspects of such uncertainty.
Now for very large collections of things, such uncertainty may be very small, small enough that it is unlikely that any living human would have encountered it directly, but never actually zero. A close enough approximation – a useful heuristic, but not an absolute “TRUTH” in the classical sense.
I have been working with computers for over 40 years, have operated a software company for over 30 years, and have a degree in zoology, with biochemistry and ecology as majors. So I have a reasonable familiarity with many of the aspects of reality about which we can have very high confidence, and also many aspects about which confidence is very much lower.
I find, when arguing in fora where I am likely to encounter philosophers, that it is best not to use the term "TRUTH", as it is likely to be interpreted in the hard classical form, and that form I reject as having been falsified beyond any shadow of reasonable doubt.
Rather than use the term truth in the softer probabilistic form that is perhaps more common in normal speech, I prefer to use the term heuristic, which rather than relying on any aspect which is eternal and unchanging, is more about invention or discovery of something that is useful in a particular context.
So for me the notion of heuristic embodies the notions of situational utility and confidence rather than any sort of absolute.
That aspect, of being sufficiently reliable to be useful in some particular context (or set of contexts) seems to be how evolution has actually constructed our brains, and how knowledge actually works in practice for us in our existence in reality (whatever reality actually is).
Does that create clarity or murk?