Foundations of Logic – faith, belief and skepticism

Foundations of logic –

Scott Brizel’s very good distinction between faith and belief, and skepticism

That captures a lot Scott, but misses some stuff also.

It seems that there is an infinite set of possible logics, from the binary to the probabilistic.

It seems that all of the many levels of our perceptual and sense making systems are the result of the differential survival of variants across a spectrum of contexts.

Have you considered the possibility that all “Truths” are but heuristics – things useful but not necessarily universal?

There is no requirement for “reality” to obey any particular set of logics. And logic and mathematics are the best abstract modeling tools that we have. That doesn’t mean that any model we create is necessarily accurate in any particular context.

And one precondition of understanding this perspective is getting that our experiential reality is conditioned by the heuristics of our subconscious systems. Evolution doesn’t need to be perfect, it only requires that what survives is better at surviving in the contexts experienced; and that is a vastly different thing – highly dimensional.

[followed by]

Hi Scott,

I like Eric Weinstein’s 2 rules for intelligent discussion:
1. If a very smart person is saying something obvious, then he should be assumed to be saying something subtle until proven otherwise.
2. An intelligent person who is saying something wrong, should be assumed to be saying something counter intuitive, until proven otherwise.

Rachel Garden did an interesting paper published in the International Journal of Theoretical Physics Vol 35 No 5 (1996) on Logic, States and Quantum Probabilities.

That paper seems to me to point to a logical system different to classical logic. It seems possible that there may be an infinite class of such logics. And it also seems much deeper than that – something of an eternal tension between order and chaos at all levels.

I was attempting to point out that our “tools of reasoning” seem to be deeply complex, and not necessarily logical in a classical sense. And that is not something which is easy to explain when no vocabulary exists to explain it, as it is by definition outside of the “box”.

[followed by]

Hi Scott,

I think you are very intelligent.
I thought there was a lot of merit in what you wrote.

I was trying to point to something both subtle and difficult about the influence of the heuristics embodied in the many levels of our being on the functioning of the systems that are us, and the very subtle ways that such things influence what we think are “reasonable” or “useful” or “probable” or “evidence” or “proof” or even “logical”.

It isn’t a matter of “short sighted”.
It seems very probable to me that the simplest level of human abstract thought in language requires 16 levels of cooperative complex adaptive systems, and each level has many instances of complex adaptive systems, each tuned by evolution through a combination of randomness and their particular life histories.

It seems very likely that such a process leads to some contexts where those systems deliver very reliable outputs, and some where the reliability is much lower.

By definition, those systems are much more complex than we can possibly consciously deal with in detail, so we require simplifying models to get any sort of a “sketch” of an understanding of what they (we) are, and what sorts of errors and biases they might deliver to our perceptual reality (and thereby to any set of abstractions we may derive from that reality, to any level one wishes to abstract).

And it is a really context sensitive set of “understandings” and “abstractions”. The analogy I often use is that of the idea from our history that the earth is flat and we are at the center of the universe. If the farthest one travels is 200 miles, and one is mostly building things out of lumber using a ruler (which is mostly what our ancestors did for thousands of years), then the idea that the earth is flat works. It is a useful heuristic. Carpenters today still use it in practice, as do map makers of cities.
The more accurately one measures things, or the further one travels, the greater the difference.
By the time one is sailing around the world, one needs the idea that the earth is roundish to have a reasonable probability of getting somewhere near where you intend to go.
By the time we get to GPS satellites for navigation, then we need to be thinking in terms of curved space-time and quantum mechanical interactions between particles to make the GPS technology work.
That is an example of three successive levels of approximation, each workable and accurate within the contexts and limits of measurement of their respective frames.
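
That three-level progression can be put in rough numbers. Here is a toy sketch (mine, not part of the original exchange) assuming a simple spherical earth of radius 6371 km: over carpenter and city scales the flat-map approximation and the great-circle distance agree almost exactly, and the error grows as the journey lengthens.

```python
import math

R = 6371.0  # assumed mean earth radius, km

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine formula: distance along the surface of a sphere."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def flat_km(lat1, lon1, lat2, lon2):
    """Flat-earth approximation: plain Euclidean distance on a
    locally scaled lat/lon grid."""
    mean_lat = math.radians((lat1 + lat2) / 2)
    dx = math.radians(lon2 - lon1) * math.cos(mean_lat) * R
    dy = math.radians(lat2 - lat1) * R
    return math.hypot(dx, dy)

# Error is negligible at carpenter scale, significant at ocean scale.
for d in (0.5, 5, 50):  # degrees of lat and lon; ~80 km to ~7000 km journeys
    gc = great_circle_km(0, 0, d, d)
    fl = flat_km(0, 0, d, d)
    print(f"{gc:9.1f} km great-circle, flat-map error {abs(fl - gc) / gc:.4%}")
```

The same heuristic, useful in one frame, accumulates error as the frame widens – which is the whole point of the flat-earth analogy.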
I strongly suspect that process is capable of infinite extension and recursion.

The understanding I have of the evolutionary process is a systemic one, that “sees” that all new levels of complexity result from the instantiation of new levels of cooperation, and that all new levels of cooperation require attendant sets of strategies to detect and mitigate invasion by cheating strategies (and that in itself can become a strategic ecosystem at every level).

So there is a very strong sense in which we seem to be very strongly aligned on the need for skepticism. And, in the context of having explored several different types of logics, I am also taking that notion, and recursively applying it to the notion that one should not put undue reliance upon any particular type of logic in any particular context if it seems that context might have gone beyond the limits of its tested utility. And certainly, it is a useful approach in some contexts, just as the notion of the earth being flat is a useful approach in some contexts. And one needs to also be alert to the possibility that we may have “traveled” a sufficient “distance” that the old “map making tools” that served us very well, may no longer work as reliably as they did, due to “curvature” of the “systemic space” we inhabit (a bit like the mathematical notion of torsion applied in a curved space-time inherently breaks symmetry).

Posted in understanding | Tagged , | Leave a comment

Quora – arguments not evidence

Is “arguments are not evidence” actually a thing in science?

Hardy Jonck gives a good answer, and like most things, the answer to this question is highly context sensitive.

Everyone has to start out with simple understandings.

Everyone has to start out believing ideas like True/False.

The more time one spends investigating all the different branches of science and understanding, the more likely it is that one will shift away from such simple hard binary distinctions to a more “relaxed” kind of knowledge that is based in contextually relevant sets of probability distributions.

In many practical contexts, the probabilities are very close approximations to “True” or “False”, such that one doesn’t normally consider the difference in practice, even as one is always intellectually conscious of the difference.

So in that sort of context, where one is conscious of many levels of eternal uncertainty in all things, from domains like non-binary logics, Goedel incompleteness, irrational numbers, maximal computational complexity of systems, halting problems, etc (it seems probable that the class of classes of such things may be infinite); then one moves into a modern scientific understanding – which is not about Truth, but about getting models of whatever this reality we find ourselves in actually is, that are sufficiently useful in the context that they work in all experimental situations (right up to the point that they don’t, thus requiring further exploration of the set of all possible explanatory frameworks to find one that works with the newest set of data as well as all that came before).
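
That shift from hard True/False to probability can be sketched with ordinary Bayesian updating (a minimal illustration of my own, with arbitrary numbers): repeated evidence drives a probability ever closer to 0 or 1 without ever arriving at either.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) from the prior P(H) and the two likelihoods."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# Start agnostic; each observation is 4x more likely if H is true.
p = 0.5
for _ in range(10):
    p = bayes_update(p, 0.8, 0.2)
    # p approaches 1 but never equals it - "True" as a limit, not a state
print(p)
```

In many practical contexts that posterior is indistinguishable from “True”, which is exactly the situation described above.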

In this context, Dave Snowden’s Cynefin framework for the management of complexity is the best simplification of what seems to be an infinitely complex domain space that I have come across. If one applies it to all levels of understanding, then one can start to build some useful models.

“True” science always involves both evidence and arguments, and sometimes the arguments about the nature of evidence, and the nature of understanding are every bit as important as the evidence itself.

And when one does the hard work of deeply investigating evolution, the geological evidence, the games theoretic contexts, the biochemistry, the animal and plant behaviours, AI and neural networks, etc; then one can begin to get a useful model of the deeply nested sets of complex adaptive (and contextually sensitive) systems that go into producing this experience we have of being human – of having the understandings (models) that we do. And it seems that every conscious individual lives in the experiential reality created by their subconscious brains, and every brain is in a body that is in a “reality” that is complex beyond the ability of any brain to deal with in detail. So reality demands that we make simplifying assumptions at many different levels. So nobody actually “Knows” what reality is, and some models are much more useful and likely to survive in some contexts than other models (more complex models take a lot of time and energy to develop, and sometimes that cost is too high in some contexts – thus simpler models survive better – it is extremely complex at many different levels).

Post modern philosophy gets some aspects of this complexity, but many tend to take that and “spin” it into a “nihilism” that displays no appreciation of the evolutionary context in which they exist. Every one of us is from a lineage of survivors. All of our ancestors survived at least long enough to leave offspring that also left offspring in the particular contexts of their specific existence. That is a non-trivial filter on the sort of stories we tend to tell ourselves. People who go down the Nihilist path tend to ignore that very important aspect of reality.

In some very fundamental and very important ways, every aspect of our intuitions, our emotions, our religions, our culture, our ethics, our stories, have been deeply tuned by the survival of things. Ignoring that is simple stupidity, often with large overlays of egoistic hubris.

We are a socially cooperative species.

We are the most socially cooperative species this planet has produced.

The evolution of complexity is predicated on cooperative behaviour that has associated behaviours that detect and effectively remove non-cooperative behaviour.

And we live in very complex times, when many of the old models, and many of their over simplifications of complex systems, are now causing existential level risk to us all. Perhaps chief among them is the idea that greed is good. Not true!!! It is deeply more complex than that, and the evidence for that is beyond any shadow of reasonable doubt.

Posted in understanding | Tagged , , , | Leave a comment

Quora – Ethics and technological advancement

If ethics were forgotten, what scientific advances could we see within the next 10 years?

Depends what you mean by “if ethics were forgotten”!

If ethics are completely forgotten, then we will most likely destroy ourselves.
Contrarily – if various levels of “ethics committees” are reined in, and competent individuals and teams are given the freedom and the resources they need, then we could solve most of the problems we have in the next 10 years (global warming, indefinite life extension, all the essentials of a reasonably high standard of living available to all people on the planet, ecological sustainability, etc) – just simple stuff like that.

And we are a social species, and ethics and individual responsibilities are a fundamental part of what allows societies to function; but not the sort of ethics we tend to get from ethics committees, which often amount to the dominance of the competent by the incompetent; rather, ethics that acknowledge that we are all part of a cooperative society, and that we all have a responsibility to support social and ecological cohesion (which is a very different sort of thing).

Ethics are essential, and the best sort of ethics is that of free individuals who acknowledge that their freedom is optimised when every member of society is reasonably cared for; so a sort of “enlightened long term self interest”.

Posted in Our Future, Philosophy, Technology | Tagged , , | Leave a comment

Trap.nz – homebuilt traps

Trap.NZ homebuilt traps question

Lots of other good replies here.

I just managed to eradicate mice from a 2.1 ha predator-fenced site using classic wooden mouse traps (48c each from Bunnings warehouse – 140 of them), housed under recycled 3 litre plastic orange-drink containers from our local recycling centre, put through my bandsaw to leave a flat base with a mouse-sized opening, and secured in place with a staple made from #8 wire bent to shape by hand.

Took me about 300 hours over 4 months but cash outlay was under $300 (including replacing a bandsaw blade). Had 117 traps at peak (quite a few failures, but relatively low rate in the grand scheme of things), 250 mice caught in total, 23 on the peak night. 3 jars of Pics peanut butter used.
Using clear plastic bottles means checking traps is quick, just a glance tells you if they have been set off.
Bycatch was minimal (3 skinks and about a dozen snails).
Ants were an issue in some places, but borax sugar mixture in old beer-bottle tops solved that – only about 10% of trap sites had the issue.

Near the end, one blackbird figured out how to get a free feed of peanut butter, but it wasn’t an issue overall. It only happened because I hadn’t flattened that trap site well enough, and there was a small depression allowing the blackbird to get under the plastic and worry away at the trap until it worked it out from under the cover (got trailcam footage of it performing this trick).

As others say, depends what you want to get, and how much you value your time ;), and how much you want rid of whatever you are trapping.

Posted in Nature, Technology | Tagged , , | Leave a comment

Laurie’s Blog – Pucker Up

Pucker Up!

What’s your favorite fruit or essential oil?

Hi Laurie,

I’m not a grapefruit fan, though I do like most other fruits. Passionfruit are perhaps my favourite.

I do recall one interesting incident in respect of grapefruit from when I was about 15. We lived in an old house on a farm at the time, and there was a big grapefruit tree outside my bedroom window. Mum loved grapefruit, but so did the possums.

One particular night I woke to the sound of possums fighting, and looked out the window to see dozens of them in the tree in the moonlight.

Next morning I went out, and there were hundreds of naked grapefruit hanging in the tree.

The possums had eaten the skin, but left the fruit intact.

After that, mum encouraged me to lower the possum density, so I took on hunting them, and killed several hundred from the farm over a period of a couple of months, but those two images of a tree full of possums in the moonlight, and of naked grapefruit hanging from the tree in the morning sunshine, have remained clear in my memory for the last 50 years.

Posted in Laurie's blog | Tagged | Leave a comment

Deep Code – Jordan Hall

Technological Unemployment – is this time different? | Deep Code Experiments: Episode 13

Jordan Hall – Neurohacker collective

Hi Jordan,
A latecomer here – I have enjoyed the series largely up to this point, but this video seems to miss the major issue present.

Up until this point, the tools were replacing skill sets; so there was always the ability to innovate.

This time it is fundamentally different, because now the tools are themselves exploring the space of computation and innovation. The tools are now tools of computation, and they are being recursively applied, and thus their growth is on a double exponential.
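
The “double exponential” claim can be caricatured in a few lines (an illustration of mine, not from the video): if the rate of improvement is itself improving – tools being applied to the making of better tools – growth looks like 2^(2^t) rather than 2^t.

```python
# Exponential: a fixed process improving at a constant rate.
exponential = [2 ** t for t in range(6)]

# Double exponential: the rate of improvement is itself improving,
# as when computation is recursively applied to computation.
double_exponential = [2 ** (2 ** t) for t in range(6)]

print(exponential)          # grows quickly
print(double_exponential)   # grows incomprehensibly quickly
```

Even over six steps the gap between the two regimes is already beyond intuition, which is part of why the situation is so hard to think about.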

Like all tools, this in itself is neutral, it is what we do with it that matters.

Adam Smith identified a long time ago that cooperative labour was far more productive than individual labour – using the example of the production of pins.
Our ability to automate any process has taken that to an entirely new level – as you noted in episode 2, the class of the “anti rivalrous” is growing exponentially – with profound implications.

We need this new level of automation to allow us to fix all the relationships we have broken with the many non-scaling dependencies we have developed in our existing systems, due to many different sorts of founder effects, some of which you covered well in the Tainter-plus model (though our situation is far more dimensional than that model implies, it is definitely part of the picture).

Our biggest issue now is conceptual.
We are so used to thinking in terms of money as a useful measure of value, it is very hard for many to think beyond it.

We are also doing something that appears never before to have been done (at least not on this planet), which is to instantiate a new level of “coherence” (to use your term, but to me the term doesn’t map well to what is happening) from agents that are more or less self aware. We are doing so in a context where we are also instantiating agents that are profoundly more competent (though not yet as energy efficient) than we are.

We need the technology, to solve the many profound levels of problems we have created with the competitive market systems that have dominated our thinking for the last few hundred years.
But unleashing that technology to mindsets still conditioned and bounded by the implicit assumptions of market value is a systemic guarantee of existential level crisis.

This is seriously complex territory.
It is not like the simple strategic situation of a four layer model, but is much more like sets of strategic ecosystems.

There can be no guarantees in the face of such complexity and fundamental uncertainty; and there can be reasonable levels of confidence if we look deeply at the strategic systems in biology.

We now have the real ability to instantiate new levels of degrees of freedom.
The need for labour to maintain our systems is disappearing.
Individuals will be able to have real freedom, many for the first time in their lives, and that will come with responsibilities, and there will be limits (but they will be reasonably generous by most standards). And that will be profoundly unsettling for many individuals – it will be entirely novel, and novelty can be dangerous, and is always unsettling in many aspects.

So many of the ideas you promote here, about distributed governance and distributed agency in particular, are essential aspects of stable solutions; and it is very much deeper.

The idea of satisfiers is one aspect of the far more highly dimensional structure of valences generally, particularly in the context of the evolution of sets of context sensitive valences vs the space of the set of all possible valences that allow a reasonable probability of survival.

Part of that is the distinction that many of the sets of valences evolution has instantiated within us may not be particularly well suited to survival in our current reality, while others may be deeply relevant in ways that very few can yet begin to “see”.

Another part is seeing that the very idea of “Truth” is a simple approximation of something profoundly more complex that contains fundamental uncertainty at every level.

The idea of “Truth” must be relaxed, to something more closely approximating “contextually useful approximation” if we are to get any sort of agreement across systems that vary in complexity by orders of magnitude.

I align with many aspects of your approach, and have talked with Daniel a bit about some of these issues.

I agree this subject matter is important – and it is difficult to approach some of these ideas in ways that have any reasonable probability of being interpreted as anything like what is intended by anyone who has not had several years of interest in evolution, complex systems, coding, and AI.

Posted in Our Future, Philosophy, understanding | Tagged , , , , | Leave a comment

Foundations of Logic – Belief

Foundations of Logic – Facebook Group

Mark Posted:
People believed x, x was wrong.
People believe y, so y is wrong.

Michael responded:
It’s more fundamental-
People believed, belief is wrong!

Pawel asked:
is it wrong?

Dirk responded:
The processing human mind works in a Bayesian way…

Hi Dirk and Pawel,

It now seems clear that we as self aware humans occupy at least two different sorts of realities.

One is the physical reality, whatever that actually is, and we use models of things like matter and energy and quantum mechanics and relativity etc to build the intellectual approximations we have to it; and at a lower level, evolution seems to have supplied us with sets of heuristics at multiple levels that allow us to survive in it.

The other level is our experiential reality.
This seems to be a subconsciously created model of the physical reality (whatever physical reality actually is).
As conscious entities, we only ever get to directly experience our individual personal models of reality, never reality itself.
We are starting to understand some of the many mechanisms that influence the structure of that model.

At a more abstract level, all models tend to start out with simple distinctions, and develop greater degrees of distinction.

The simplest distinction one can make is a binary, thus splitting an infinity at some (arbitrary but agreed) region into two categories (like True/False, Good/Bad, Right/Wrong, etc).

As we build more complex models, we begin to grasp distinctions like Bayesian probability, and non-equilibrium complex systems.

And all systems had to start somewhere, and “belief” is as good a term as any for the starting point of any modeling system.

Some systems insist upon the retention of “belief” at some level and thus become self limiting in the “territories” available for exploration.

Some systems encourage challenge of assumptions, and thus open infinite realms of possible territories to explore. I am firmly in this camp, and being there, I acknowledge that there are many useful heuristics encoded in aspects of most long lived belief systems (if there weren’t then they wouldn’t have survived).

So there are multiple levels of very complex Darwinian processes present; as well as all the levels of models and abstractions; and nested sets of infinities with all the necessary unknowns and uncertainties. A very different sort of world from the classical one where “knowledge” and “Truth” were thought to be real and known; rather one where both constructs are seen as simplistic but often useful illusions.

[followed by]

Hi Pawel,

Your questions are what drives most of AI research, and are what many consider to be the “hard problem” of consciousness.

I am with Dirk, of course there is relatedness, but what is important is the balance between the degrees of relationship and the degrees of isolation, and that is often deeply embodied in the mechanics of the particular systems that are instantiated.

Visual perception is a very interesting case. In very broad sketch terms, a photon is absorbed by a carbon bond in a short pigment molecule that sits in a valley in a much larger protein molecule. The change in bonding caused by the photon interaction causes a sequence of shape changes, resulting in the protein shifting its alignment in the lipid bilayer membrane within which it is embedded, opening an ion channel, and thus generating an electrical “spike”. But that is only the start of a very complex series of relationships that begin in the retina. A lot of processing happens in the retina itself; edge detection happens there. That initial electrical “spike” does not go straight to the brain, but is highly processed in the retina to produce a series of FM signals. The cells of the optic nerve have a natural rate of firing, and the signals from the retina alter that frequency. Biology has found, as radio engineers did, that FM signals have greater fidelity than AM. One of the “upshots” of that is that very small differences in phase relationship become profoundly important in signal processing deeper in the brain. It is far from simple!!!
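
A toy model of the FM idea (mine, and a caricature, not neurophysiology): treat the cell as having a baseline firing rate that the processed signal shifts up or down, so the information is carried in the timing of spikes rather than in their amplitude. All the parameters here are invented for illustration.

```python
def fm_spike_times(signal, baseline_hz=40.0, gain_hz=30.0, dt=0.001, duration=1.0):
    """Toy FM encoder: instantaneous firing rate = baseline + gain * signal(t).
    Emits a spike whenever accumulated phase crosses 1 (integrate-and-fire style)."""
    spikes, phase, t = [], 0.0, 0.0
    while t < duration:
        rate = max(0.0, baseline_hz + gain_hz * signal(t))  # Hz, never negative
        phase += rate * dt
        if phase >= 1.0:
            spikes.append(t)
            phase -= 1.0
        t += dt
    return spikes

# The unstimulated cell still fires at its natural rate;
# a steady stimulus shifts the frequency up, not the spike "size".
quiet = fm_spike_times(lambda t: 0.0)
driven = fm_spike_times(lambda t: 1.0)
print(len(quiet), len(driven))
```

Every spike in both trains is identical; only the frequency differs – which is why small timing and phase differences carry so much of the information.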

So certainly, there is influence, many layers of very complex sets of influences.
Many levels of deeply encoded heuristics that bias our networks to recognise particular types of signals – like faces or words etc.
Our brains are far from simple.
The chemical modulators of neural activity are profoundly complex.

The idea that “physical reality is made of particles, fields and forces” is a very useful set of heuristic approximations to whatever reality is. That doesn’t mean that reality “is” those things; and it does mean that those ideas give us useful models of what reality is in many contexts.

What we experience is models of reality embodied in the electro-chemical activity of our brains. To a very large degree I align with Dan Dennett on this, but where Dan and I separate is in the degree of “causality” necessary for such systems to exist.
Dan seems to be very much in the “hard causality” camp; whereas I am in the “probability” camp, with the probabilities tending towards a mixed system that involves a fundamental tension between the lawful and the random (order and chaos) at all levels – which at every level resolves to some level of constrained randomness at the level of the individual, and delivers much greater reliability at the level of large populations (either over time or groups or both).
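
The “constrained randomness at the individual, reliability at the population” point has a simple statistical shadow (a minimal sketch of mine; the parameters are arbitrary): any single outcome is unpredictable, while the average over a large population is tightly constrained.

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

def noisy_individual(p=0.7):
    """One 'decision': constrained randomness - biased, but unpredictable."""
    return 1 if random.random() < p else 0

# Any one outcome could go either way; the population mean barely moves.
one = noisy_individual()
population = sum(noisy_individual() for _ in range(100_000)) / 100_000
print(one, round(population, 3))
```

That is the same structure as quantum uncertainty resolving into near-lawful behaviour at the level of large ensembles.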

The experienced is the physical in a sense, in the same sense that any computer program running on a computer is the flows of interactions embodied within that system. We are the same in a very real sense, what we experience as “red” or “pain” or “beauty” is the embodied relationships of the electrochemical systems that are our brains. And that is far from simple.
Even the simplest human brain capable of reading this post has at least 15 levels of systems, with uncertainty existing at the boundaries of every level and between systems within every level – relatedness and influence certainly there also – and it is the relationship, the balance, between those two that is critical. Too much order (too strong a relationship) and there is no freedom of action – only “simple” automata. Too much chaos, and there is insufficient structure to maintain the boundary conditions necessary for higher order function to exist. This tension plays out at every boundary, between every level, and between every system within every level. It is amazingly beautifully complex (a mix of the simple, complicated, complex and chaotic to use the terms of Snowden’s Cynefin framework), and it seems to be what we are.

The modulators of uncertainty, from the quantum level on up, are important at every level. Sure, there are many aspects of the systems that work at the population level at which level one can usefully ignore the uncertainties most of the time. But one cannot understand the entire system without understanding the necessary function of those uncertainties in establishing “freedom” of action (at least to the degree that such freedom exists and is deemed desirable). And I spent a large chunk of last year trying to create conditions where I could get Trick to “see” that, and I failed in that endeavour. Trick is a really bright guy, I wouldn’t have spent the time otherwise, but I wasn’t able to create a context that worked in allowing him to “see” something roughly equivalent to what I “see”. I am looking forward to having direct neural interface to silicon systems, so that I can increase the bandwidth of communication and alter those probabilities.

[followed by]

Hi Pawel,

Short answer – AI research is working out what consciousness actually is. And yes – it is a soft process.

For over 40 years I have thought about everything in terms of probabilistic processes. If you think about even the simplest of possible systems, a linear two state array, it isn’t as simple as it seems. You have the complexity present in rule 30 – which is kind of the obvious non-simplicity, but the much deeper issue is the phase transition problem – how does such a system change states?
Whenever one looks deeply at systems, it is usually in the phase transition matrices that the real interest lies, and where small differences in modelling assumptions give the greatest difference in system behaviour.
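
For anyone who hasn’t met it, rule 30 fits in a few lines – an elementary cellular automaton where each new cell depends only on itself and its two neighbours, yet the pattern grown from a single live cell looks chaotic. A minimal sketch:

```python
def rule30_step(cells):
    """One update of elementary cellular automaton rule 30.
    New cell = left XOR (centre OR right); edges wrap around."""
    n = len(cells)
    return [cells[i - 1] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

# Single live cell in the middle; the triangle below it rapidly
# becomes irregular, despite the rule being utterly deterministic.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

The “obvious non-simplicity” is right there: a two-state, nearest-neighbour rule producing output complex enough that Wolfram used it as a random number generator.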

Sure, brains are messy systems, which like all such things is both strength and weakness.
And when you look from a systems perspective, everything is information interacting (even matter and energy – that is one of the weirdest things about QM, but that is a long story).

And one of the great things that Turing did was develop the notion of general computation systems, and Turing completeness.

Biology has a different set of constraints.
It isn’t concerned with Turing completeness, but with delivering outcomes that have better survival potential than any of the alternatives. That always involves some complex function of the degree of approximation to some sort of optimal solution (across the probability spectrum of the contexts involved), the energy costs involved in delivering the solution (taken over both the entire lifecycle of the organism and the computation of that particular context), and the time taken to deliver the particular solution (again with both lifecycle and particular-context attributes). These are very complex, very context sensitive functions; and they can lead to low probability, high impact contexts having a significant impact on the makeup of complex systems (even if they only occur once every 50 generations or so). In our case the solution space does appear to involve Turing completeness, but it is not without biases – and thus involves non-trivial complexity.

So the biases present in our neural networks, that allow us to learn common things from a relatively small sample set (where unbiased networks require sample sets of hundreds of thousands to achieve similar fidelity) can be (are) tuned by things about which we often have no direct (or even indirect intellectual) knowledge.

Thus we are deeply tuned by evolution for survival in ways that no AI can possibly be, until it has lived at least a hundred thousand years.

So while we may be able to intellectually model the sorts of processes present that deliver our experience of consciousness, we are a very long way from unpicking all the details of the fine tuning of the probabilities to action actually encoded in the embodiment of being human, as distinct from being any particular sort of AI.

So I see a very real space for the coexistence of humans and AIs in ways that are mutually beneficial, and mutually respectful. And that doesn’t involve the concept of control, but rather the concepts of respect and long term mutual self interest (something the zero sum game folks have a great deal of difficulty imagining).

So not sure if this is the sort of conceptual context you were looking for as a framing for an answer to your question or not; and it is the best one I can reasonably make available at this time and in this context.

Posted in Ideas, understanding | Tagged , , | Leave a comment