Two very different classes of consideration here.
One class of considerations concerns sets of premises and their consequences. Within such a system of propositions, one can make claims about consequences following from premises, substantiate those claims with argument, and make claims about the truth or falsity of such propositions with considerable confidence.
The sets of possible premises and propositions both seem to be infinite classes.
One can construct infinite classes of possible logics (of which Boolean logic is the simplest) from such a set of propositions.
The other (and, outside the set of logicians, more common) application of the notions of truth and falsity is to “reality” (whatever it actually is – this matrix we find ourselves in, and seem to be part of).
In terms of building models of reality, we can postulate premises, then design experiments intended to falsify those premises, come to some determination of the degree of confidence we have in the many aspects of each experiment (design, operation, measurement, interpretation), and arrive at some statement of probability about the likelihood of the premises having been falsified by the outcomes of some set of experiments in some set of contexts.
This is the scientific process, and using it we build confidence about using sets of models in sets of contexts.
Different sets of premises seem to have different reliability at different scales of reality.
At the scale normally available to human perception (collections of more than 10^15 atoms, existing for longer than 10^-2 seconds), most things seem to follow causal rules most of the time (to very high degrees of accuracy).
At the scale of the very small (smaller than an atom, and times shorter than 10^-40 of a second) a different set of rules seems to apply, one that is far more probabilistic, and involves a sort of fundamental balance between order and chaos in terms of pairs of properties. The logics that apply to this quantum realm appear to be quite different from the logic of ordinary experience, and it takes quite some time to gain any sort of intuitive familiarity with them.
When one takes the further steps following the likes of Wolfram or Rachel Garden into non-bivalent logics and beyond, things can get quite messy, and even the meta notion of falsity can become blurred, leaving only degrees of confidence.
So it seems clear to me that truth and falsity are simple notions that one needs to learn, and use as tools and ladders, and as with most things, it doesn’t pay to get too attached to any particular tool. The old saying – when the only tool you have is a hammer, every problem starts to look like a nail – has some “truth” to it ;).
It is much easier to disprove a claim by finding a single exception than to try to enumerate an infinity to prove something.
That seems to be it in a nutshell.
The more often one fails to find exceptions where one thinks them likely or possible, the greater the degree of confidence one can have in using a particular model in a set of contexts.
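A rough sketch of that idea in code (mine, not anything from the original exchange – all the probabilities here are arbitrary numbers chosen purely for illustration): each experiment that fails to find an exception can be treated as Bayesian evidence in favour of the model, and confidence grows with each failure to falsify, without ever reaching 1.

```python
# Illustrative sketch: each failed falsification attempt updates confidence
# in a model via Bayes' rule. All numeric values are arbitrary assumptions.

def update_confidence(prior, p_pass_if_true=0.95, p_pass_if_false=0.30):
    """Posterior probability the model is reliable, given one more
    experiment that failed to find an exception (the model 'passed')."""
    numerator = p_pass_if_true * prior
    evidence = numerator + p_pass_if_false * (1.0 - prior)
    return numerator / evidence

confidence = 0.5  # start agnostic
for trial in range(1, 6):
    confidence = update_confidence(confidence)
    print(f"after {trial} failed falsification(s): {confidence:.3f}")
```

Note that the confidence only ever approaches 1 asymptotically – which is the point: degrees of confidence, never certainty.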
And there are traps in that.
It is clear that reality is so complex that we all have to use heuristics to make any sort of sense of it. Thus we are recursively subject to heuristic blindness.
Adding to that, in terms of survival and the evolutionary utility of “truth*”, what we need in order to survive are things that have a reasonable probability of being useful to us. With our limited memories, there is no point cluttering them up with stuff that doesn’t work. Focusing on what works is strongly selected for.
At the same time there is even stronger selection for the accurate identification of existential danger. It’s worth forgoing a few benefits if doing so reduces existential-level risk.
Hence – we have many levels of impulse to focus on “truth*”.
Your candidate answer comes close, and it makes a set of assumptions in doing so, one of which is the notion of “TRUTH”.
I understand the classical history of the notion of “Truth”.
I find the notion doesn’t stand up well in reality, when one looks at the evolutionary history of us, our thinking machines (brains and their sensory systems), and our culturally derived operating systems.
In the paradigm I am now using the very idea of “TRUTH” appears to be a simple approximation to something in almost every non-trivial case.
Try thinking of it this way, for a hint of what I am pointing at.
If one considers the set of possible Turing machines, it seems to be infinite.
Each universal one can, in theory, compute any computable function.
However, each one will, when instantiated, be in a particular physical environment, and have impacts on that environment, and be impacted upon by that environment.
So while one can postulate a theory about outcomes being identical, in reality they never are (time to solution and energy consumption to solution are often extremely important in evolutionary contexts as a couple of examples).
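A small sketch of that point (my own illustration, with an arbitrary choice of function): two procedures that provably produce identical outputs, yet whose resource costs differ enormously – the kind of difference that matters in evolutionary contexts even when the abstract outcomes are “the same”.

```python
# Illustrative sketch: identical outcomes, very different costs.
# Step counts stand in for the time/energy costs mentioned in the text.

def fib_naive(n, counter):
    """Exponential-time recursive Fibonacci; counter[0] tallies calls."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_iterative(n, counter):
    """Linear-time iterative Fibonacci; counter[0] tallies loop passes."""
    a, b = 0, 1
    for _ in range(n):
        counter[0] += 1
        a, b = b, a + b
    return a

slow_steps, fast_steps = [0], [0]
assert fib_naive(20, slow_steps) == fib_iterative(20, fast_steps)
print(slow_steps[0], fast_steps[0])  # thousands of calls vs 20 loop passes
```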
What might the term *PURE* logic mean?
Does it mean the simplest of possible logical systems?
Does it mean the totality of all possible logics (including fuzzy and non-determinant logics)?
Sure – there is a sense in which the structure you propose does map well onto the simplest of possible logics, and it doesn’t necessarily map well onto more complex logical forms.
The physicality of brains does influence how we think.
Our genetic and cultural histories are major influences.
One can postulate that they aren’t there, but even making such a postulate is (in a deeper sense) clear evidence that they are there.
I am stating quite categorically that the physicality of the hardware does influence the kinds of abstracts one is likely to *see*.
I am claiming that there is no *PURE* out, that reality has impacts, we appear to be real, and that reality matters.
It is the term *PURE* that I have serious objection to.
*Contextually sensitive useful approximation* I can live with.
And as a special case of n=1 in the infinite set of possible logics, yes I could accept it.
You are proposing truth values that can be only either 0 or 1.
That is a possible logical system – the simplest one possible (the classical one, and in a sense the most trivial).
The next simplest seems to be a ternary one, with truth values 0, 1, or unknown.
Imagine a logic based on truth values that are probability distributions that may asymptotically approach (but never reach) 0 or 1, with any distribution being allowed within that set of constraints. In some interpretations that is what the evidence from QM seems to point to as being our “reality”.
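To make the progression concrete, here is a small sketch (mine, for illustration only) of a conjunction operator in each of the three settings just described. The classical and strong-Kleene tables are standard; the “probabilistic” version assumes independent propositions, which is an extra assumption of this sketch, and a full distribution-valued logic would be richer still.

```python
# Illustrative sketch: conjunction ("and") in three logics of increasing richness.

# Classical (bivalent) conjunction: truth values in {0, 1}.
def and_classical(a, b):
    return a & b

# Strong Kleene three-valued conjunction: values in {0, 0.5, 1},
# with 0.5 read as "unknown". min() reproduces the standard table.
def and_kleene(a, b):
    return min(a, b)

# Probabilistic conjunction: values in the open interval (0, 1),
# assuming the two propositions are independent.
def and_prob(a, b):
    return a * b

print(and_classical(1, 1))  # 1
print(and_kleene(1, 0.5))   # 0.5 -> true AND unknown = unknown
print(and_prob(0.9, 0.9))   # 0.81 -> confidence degrades as claims combine
```

The last line hints at why, in the probabilistic setting, certainty never arrives: conjoining claims whose values never reach 1 yields values that never reach 1.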
Logic is not a singular entity.
The simplest of all possible logics may be a singular entity; it is not a conjecture I have seriously explored. Having written that, I have an intuition that it may be false – but I have no interest in doing the work to show that formally.
Thus the postulate that “an argument is valid if and only if it takes a form that makes it impossible for the premises to be true and the conclusion nevertheless to be false” is sensible in the case of logical system n=1, but not necessarily sensible in any of the higher-order logics. One has to get comfortable with fundamental uncertainty if one is exploring the space of all possible logics.
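In the bivalent n=1 case, that quoted definition of validity is mechanically checkable, which is worth seeing concretely. A small sketch (my illustration, brute-forcing truth assignments – not something from the original discussion):

```python
# Illustrative sketch: classical validity as "no assignment makes every
# premise true and the conclusion false", checked by enumeration.
from itertools import product

def is_valid(premises, conclusion, n_vars):
    """premises and conclusion are functions of a tuple of booleans."""
    for assignment in product([False, True], repeat=n_vars):
        if all(p(assignment) for p in premises) and not conclusion(assignment):
            return False  # counterexample found
    return True

# Modus ponens over variables (p, q): from p and p->q, infer q. Valid.
print(is_valid([lambda v: v[0], lambda v: (not v[0]) or v[1]],
               lambda v: v[1], 2))   # True

# Affirming the consequent: from q and p->q, infer p. Invalid.
print(is_valid([lambda v: v[1], lambda v: (not v[0]) or v[1]],
               lambda v: v[0], 2))   # False
```

In a many-valued or distribution-valued logic there is no such finite enumeration of assignments to fall back on, which is one way of seeing why the classical definition does not carry over in that simple form.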
n=1 logic (the simplest of possible logics) certainly has its uses.
It is a very powerful tool.
It is not the only tool available.
It is a mistake to think that it necessarily applies to anything, except the abstract system itself.
And it can deliver useful approximations to many real world situations.
Just as hydrogen, the simplest possible atom, is the most common, so the simplest of possible logics is also common, but not necessarily universal (the universe does not consist only of hydrogen).
The idea of validity is a bit slippery.
In one sense it is axiomatic to the classical system.
In another sense, it doesn’t necessarily apply to any other logic in that simple form.
In other logics, the closest one can get is a form of consistency.
And as Sigurd says – it’s complicated.