Great statement of the problem Bret.
And for me, like you, it is a profoundly deep issue.
And I am clear, beyond any shadow of reasonable doubt, that the evolution of complexity is based firmly in cooperative strategies, and that all cooperative strategies instantiate a new level of “group selection”.
And the boundaries that define a “group” also strongly shape the relative strength of the selection pressures on any form of “group”.
And it is really complex.
It seems that the emergence of any new level of such cooperation requires a context in which the threat to “individuals” from other “individuals” is overwhelmed by the threats from “external factors”, and in which “cooperative strategies” exist that can mitigate the impacts of those “external risks” to some significant degree.
There immediately exists, as you accurately observe, the “free-rider problem”, which demands the rapid emergence of a set of free-rider detection and countering strategies, and that then becomes an evolutionary ecosystem in and of itself (every level).
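As a toy illustration of why free-rider detection and countering strategies matter, here is a minimal public-goods game (all numbers and function names are my own, purely illustrative, not anyone’s published model): without punishment the free-rider strictly outscores every cooperator; once cooperators pay a small cost to fine free-riders, free-riding no longer pays.

```python
def public_goods_round(contribs, multiplier=1.6):
    """Each player's contribution is pooled, multiplied, and shared equally."""
    pot = sum(contribs) * multiplier
    share = pot / len(contribs)
    # Payoff = what you kept (1 - contribution) + your equal share of the pot.
    return [share + (1.0 - c) for c in contribs]

def run(n_coop, n_free, punish=False, fine=1.5, cost=0.3):
    """Cooperators contribute 1, free-riders 0; optionally every cooperator
    pays `cost` per free-rider to fine each free-rider by `fine`."""
    contribs = [1.0] * n_coop + [0.0] * n_free
    payoffs = public_goods_round(contribs)
    if punish:
        for i in range(n_coop, n_coop + n_free):
            payoffs[i] -= fine * n_coop          # fined by every cooperator
        for i in range(n_coop):
            payoffs[i] -= cost * n_free          # punishing is costly too
    return payoffs

# Without punishment, free-riding strictly dominates:
no_punish = run(4, 1)          # free-rider 2.28 vs cooperator 1.28
# With punishment, free-riding no longer pays:
with_punish = run(4, 1, punish=True)
```

The punishment step is the “free-rider countering strategy” in miniature; note that it is itself costly, which is why such strategies become an evolutionary ecosystem of their own.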
And when one looks closely at biology, we see many groups.
We see groups of genes in chromosomes.
We see groups of chromosomes in cells.
We see groups of cells form bodies.
We see groups of bodies form populations.
And many more, far more subtle.
In the memetic realm it gets far more interesting.
Elinor Ostrom’s work disproves Hardin’s “tragedy of the commons” hypothesis as a general condition, and makes it context specific.
And recognising the contexts is important.
Thus, I am now clear that one of the greatest errors of biology is characterising evolution as “Nature red in tooth and claw”. While such a competitive view of evolution is certainly a very real aspect of many contexts, in terms of complex entities, cooperative strategies are always important. Every new level of complexity demands a new level of cooperation, with all the necessary requirements for sets of stabilising strategies, and all the evolutionary complexity involved in every one of those.
Competition always drives systems to some set of minima on the available complexity landscape; that is unavoidable in a purely competitive environment.
Thus, for extremely complex entities like ourselves, it is true as a first order approximation that our existence is predicated on cooperation, and evolution at our level needs to be fundamentally cooperative for us to survive.
Formulating evolution as competition may be essential as a starting point, but if not balanced by the cooperative aspect it leads to existential-level risk in higher levels of systems.
When you look at human existence in this context, our upper levels of abstract thought are predicated on some 20 levels of complex adaptive cooperative systems, with every level having its suites of stabilising strategies that do, in practice (on average over time), act to detect and deter invasive “cheating” strategies.
That is not a simple view.
Dawkins’s The Selfish Gene remains the only book I have read cover to cover twice in a 24-hour period (in 1978). I found it that powerful, that explanatory, particularly the later chapters on the emergence of complex cooperative systems.
And part of understanding this is understanding “understanding” itself.
It is getting that what we experience as reality isn’t, cannot be.
Modern physics is clear that reality is complex beyond our computational capacities (beyond any computational capacity – and, if one takes QM at face value, containing fundamental uncertainty, a fundamental balance between order and randomness). Thus our subconscious systems must make models based upon heuristics that worked in the past at least well enough to allow our ancestors to survive.
So we experience a model of reality.
Then we make our models of that model.
So we tend to interpret things in terms of simple models – we have no other logical option.
Does that mean that reality is such simple models?
No – doesn’t mean that.
It only requires that reality approximate such models in some set of contexts at least well enough and frequently enough to confer some selective advantage.
So that leaves us with evolutionary epistemology and profound uncertainty at all levels, exposing classical ideas of “Truth” as simplistic illusions, and logic as necessarily only a useful approximation in some contexts.
Mathematics and logic are the best modelling tools we have.
They allow us to build great models.
Some of those models are exceptionally reliable and useful in some contexts.
Does that mean that “Reality” obeys logical rules?
That is not required.
All that is required is that whatever rules or lack thereof apply to reality, that our models of them can be sufficiently useful to give us better than random survival probabilities.
I noted that in this short video you used the idea of evolution having purpose (at 3:58, “her advancing her genetic interest”).
I get that is a useful mental shortcut, but it need not be what is present – and I don’t think either you or I actually think that, but others hearing you might.
Evolution just selects systems from among the variants instantiated.
The classes of mechanisms that instantiate variation in any particular system can be fascinating and profoundly subtle and complex.
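Dawkins’s own “weasel” program is a useful toy here: the variation mechanism is blind random mutation, and “selection” is nothing more than keeping the best-scoring variant, yet the outcome looks purposeful. A minimal sketch (the brood size and mutation rate are arbitrary choices of mine):

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Count of characters matching the target - selection sees only this."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate, rng):
    """Blind variation: each character independently randomised at `rate`."""
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c for c in s)

rng = random.Random(1)
parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
for gen in range(10000):
    # Offspring differ randomly from the parent (no foresight anywhere)...
    brood = [mutate(parent, 0.05, rng) for _ in range(100)]
    # ...and "selection" simply keeps whichever variant scores best.
    parent = max(brood, key=fitness)
    if parent == TARGET:
        break
```

Nothing in the mechanism “wants” the target; the apparent purposefulness is entirely an artefact of repeated selection among instantiated variants.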
And in this context some ideas from database theory, and computational theory are important:
From database theory: for a fully loaded processor, the most efficient possible search of any dataset is a fully random search. That then raises the problem of how one approximates randomness, given all the biases (necessarily) instantiated in the human brain.
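One concrete way to see the value of randomness in search (a toy sketch of mine, not a proof of the general database-theory claim): any fixed probe order has a worst case of n probes, because an adversary who knows the order can place the target last, whereas a randomised probe order costs about (n + 1) / 2 probes on average no matter where the target sits.

```python
import random

def probes_fixed(order, target):
    """Number of probes a fixed scan order needs to hit `target`."""
    return order.index(target) + 1

def probes_random(n, target, rng):
    """Probe positions in a fresh random order; count probes to the target."""
    order = list(range(n))
    rng.shuffle(order)
    return order.index(target) + 1

n = 1000
rng = random.Random(42)
fixed_order = list(range(n))

# An adversary who knows the fixed order puts the target last: always n probes.
worst_fixed = probes_fixed(fixed_order, fixed_order[-1])

# Against random probing, expected cost is (n + 1) / 2 wherever the target is:
trials = [probes_random(n, n - 1, rng) for _ in range(2000)]
avg_random = sum(trials) / len(trials)
```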
From computational theory, several different classes of ideas lead to the same meta-problem. Maximal computational complexity, non-terminating computations, and others all run into the halting problem, which leads to the need for “Oracles” – systems that deliver a random output, within a survivable class, in a short time.
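A minimal sketch of an “Oracle” in this sense (the function names and the step-budget mechanism are my own illustration): run a computation for a bounded number of steps, and if it has not terminated, fall back to a random member of a known-survivable answer set rather than wait forever.

```python
import random

def bounded_oracle(step_fn, state, budget, safe_answers, rng):
    """Run an iterated computation for at most `budget` steps.
    step_fn returns (done, answer, next_state). If the budget runs out,
    return a random member of a known-survivable answer set instead."""
    for _ in range(budget):
        done, answer, state = step_fn(state)
        if done:
            return answer
    return rng.choice(safe_answers)

# A toy computation that never terminates (stands in for an undecidable case):
def loop_forever(state):
    return (False, None, state)

# A toy computation that finishes quickly:
def count_to_three(state):
    state += 1
    return (state >= 3, "done", state)

rng = random.Random(0)
fast = bounded_oracle(count_to_three, 0, 100, ["fallback"], rng)
stuck = bounded_oracle(loop_forever, 0, 100, ["left", "right"], rng)
```

The timeout does not solve the halting problem; it sidesteps it by guaranteeing a survivable answer in bounded time, which is all evolution requires.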
We all contain many levels of such things.
To the degree that we become aware of them, we gain some degree of influence.
I love your work.
I am profoundly indebted to Richard Dawkins.
I acknowledge the existence and presence and importance of kin selection, and variants.
This is not an either/or thing, it is a both/and thing.
And even David Sloan Wilson doesn’t actually go nearly far enough into the complexity that is, to me, clearly present.
And the outcome of all that is simple in a sense.
To a good useful first order approximation, human beings are cooperative entities (all levels), and evolution at our level of complexity demands cooperation, and purely competitive modalities always instantiate existential level risk to the higher orders of complexity present (in the way they drive systems to sets of minima on the complexity “landscape”, and those may be too low to sustain higher order function).
Bret’s discussion of the debate on Rebel Wisdom – I strongly recommend watching it, and I strongly align with it. I disagree with the notion of purpose, even though it is a useful mental shortcut.
@rs5352 That seems to me to be one of what appears to be an infinite class of possible strategies that work in some contexts.
Axelrod showed, in his tournaments around 1980, that the simplest successful class of strategies is the retaliator class – some variation of trust until trust is broken, then retaliate in a way that removes all benefit, and a bit more, from the trust breaker. Elinor Ostrom and her team catalogued a larger class of strategies for the management of commons resources.
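Axelrod’s result is easy to reproduce in miniature with an iterated prisoner’s dilemma using the standard payoffs (T=5, R=3, P=1, S=0); the strategy names below are the conventional ones, the rest is my own toy harness. Tit-for-tat – trust first, then mirror – earns full mutual cooperation against itself, and against an unconditional defector it concedes the sucker’s payoff exactly once:

```python
def play(strat_a, strat_b, rounds=200):
    """Iterated prisoner's dilemma with payoffs T=5, R=3, P=1, S=0."""
    payoff = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_b)          # each strategy sees the opponent's history
        b = strat_b(hist_a)
        pa, pb = payoff[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

def tit_for_tat(opp):                # cooperate first, then mirror the opponent
    return "C" if not opp else opp[-1]

def always_defect(opp):
    return "D"

tft_pair = play(tit_for_tat, tit_for_tat)     # mutual cooperation: (600, 600)
tft_vs_d = play(tit_for_tat, always_defect)   # defector exploits round 1 only
```

Against the defector, tit-for-tat loses 5 points in round one and then denies all further benefit – the “retaliate so the trust breaker gains nothing” pattern in its simplest form.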
For me, there is no challenge to the notion of group selection.
Genes exist in groups in chromosomes.
Chromosomes exist in groups in cells.
Cells exist in groups in bodies.
Group selection exists.
It is real.
And it is far deeper and more complex and more subtle than those 3 simple examples.
Sure, there is plenty of room to argue the degrees of influence in particular sets of contexts, but no room to deny the issue of influence at some level.
@rs5352 I found that lecture by Haidt almost terrifying.
He has some useful approximations to some things, but almost everything is vastly more complex than his simple approximations, and his projections are dangerously ill-founded.
At 54:30 he states “I don’t know how to read that logarithmically”, which is equivalent to saying he has no useful ability to project past the near term (a 5-year horizon), as all the important trends are in fact exponential – straight lines only on a logarithmic scale.
While Steve Pinker does capture some real aspects of the situation, Nassim Taleb has some valid criticisms.
Yes, how we think, and how we interact, are very important aspects of our being, and our future, and it is far more complex than “capitalism”.
Capitalism is important in several aspects:
It tended to aid in weakening traditional structures sufficiently for novelty to emerge (both a strength and a danger);
It tended to diversify networks (via multiple levels of market interaction and cooperation);
It improved productivity via supporting specialisation;
It sped up exploration and novelty (via variations on the same themes as above);
It distributed cognition, decision making, risk assessment and risk mitigation.
So those are important things, and they all involve computation when you analyse them, and mechanical computational ability has been on a steady double exponential for over 100 years.
That trend will completely override all of capitalism’s benefits (and leave only the dangers, which are many) by about 2032.
One cannot make sense of what is happening to us, or realistic assessments of threats or risk mitigation strategies, by using linear thinking. One must go exponential, so that one can not only read log graphs, but can create them in the imagination from looking at equations.
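A small numeric sketch of the difference between linear and exponential reading (the growth rate here is an arbitrary illustration of mine, not a claim about any particular trend): a quantity growing 41% per year roughly doubles every two years, so ten years compounds to about 31x, where a linear extrapolation would predict only 5.1x.

```python
import math

def doublings(initial, final):
    """How many doublings take you from `initial` to `final`."""
    return math.log2(final / initial)

def project(value, annual_growth, years):
    """Compound exponential growth; on a log plot this is a straight line."""
    return value * (1 + annual_growth) ** years

# 41% per year doubles roughly every two years:
two_year = project(1.0, 0.41, 2)    # ~1.99
# Ten years of such growth compounds to ~31x, not the 5.1x a linear reader expects:
ten_year = project(1.0, 0.41, 10)
```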
I am cautiously optimistic for our future, but only if we can get enough people thinking exponentially.