[ 20/April/21 ]
The survival of complexity over deep time and across a wide variety of dynamic contexts.
If you do a deep mathematical analysis of survival strategies over such a “landscape”, it becomes clear that every new level of complexity in evolved systems is built on a new level of cooperation, and that competition tends to remove complexity and drive systems toward some set of minima on the available complexity landscape.
And that rapidly gets very complex, because at every level cooperation requires an evolving ecosystem of cheat detection and cheat removal systems.
At higher levels of complexity, our moral, financial and legal systems need to be a reasonable approximation to such an ecosystem of cheat detection and removal systems if we are to survive. At present all seem to have been invaded by multiple levels of essentially “cheating” strategies. This is not a good thing in terms of long term survival.
And the systems that we are, and within which we exist, are sufficiently complex that there will eternally be multiple levels of uncertainties present in every level of such determinations; and it is the responsibility of each and every one of us to make our best efforts to make such determinations as we reasonably can, to the best of our limited and fallible abilities.
Every level is potentially infinitely complex, and there appears to be no limit to the number of levels potentially present. We seem to be well into double digits at present, and growing.
Any moral system that does not pass this test in the long term will go extinct.
It seems very probable that many have already.
The concern at present is that some of the “cheating” variants present today have the potential to threaten all other variants (all levels).
If one does a sufficiently deep analysis of the ideas of both freedom and security, then it is clear that both are optimized within cooperative contexts. And cooperation is a very different thing from control (though they may look superficially similar from some levels of analysis).
One of the eternal issues with open systems is that there are always multiple levels of boundaries to knowledge and exploration that some will wish to explore beyond. By definition, what lies beyond is unknown, and could be beneficial, threatening, or both. Ignorance may be bliss, but it is not security. Security demands exploration, and exploration is not without risk.
It seems likely to be eternally true that “the price of liberty is eternal vigilance” – in any and all dimensions one is able to explore.
The mathematics and logic show, beyond any shadow of reasonable doubt, that in such an environment fundamental cooperation that is alert for cheating is the only strategic option with any significant long-term survival probability.
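The game-theoretic core of this claim can be illustrated with a toy iterated prisoner's dilemma, the standard model for cooperation and cheating. Everything here (the payoff values, the strategy names, the round count) is my illustrative assumption, not part of the original text: a naïve unconditional cooperator is exploited by a pure cheat, while a strategy that is "alert for cheating" (tit-for-tat) cooperates with cooperators yet denies a cheat any long-run advantage.

```python
# Minimal iterated prisoner's dilemma sketch (assumed standard payoffs):
# both cooperate 3/3, both defect 1/1, lone defector 5, exploited cooperator 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(my_hist, their_hist):
    return "C"  # naive cooperation: never checks for cheating

def always_defect(my_hist, their_hist):
    return "D"  # a pure "cheating" strategy

def tit_for_tat(my_hist, their_hist):
    # cooperation that is alert for cheating: start nice, then mirror
    return "C" if not their_hist else their_hist[-1]

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(always_cooperate, always_defect))  # the cheat exploits naivety
print(play(tit_for_tat, always_defect))       # alertness caps the loss
print(play(tit_for_tat, tit_for_tat))         # mutual cooperation pays best
```

Over 100 rounds the naïve cooperator is stripped bare by the defector, the alert cooperator loses only the first round before shutting the exploitation down, and two alert cooperators sustain the highest joint payoff — a minimal instance of the pattern described above.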
The dinosaurs seemed to settle into a competitive space that did not allow sufficient exploration of the very real (but low frequency) threats to existence. If we are interested in long term survival, we must have cooperation to maintain exploration eternally. It cannot be without risk, and the logic is clear that provided it is in a cooperative context, the risk profile can be minimized.
I am definitely human, and I have many uncommon attributes, like an ability to do complex math very quickly (from a very early age). I trained as a biochemist and evolutionary biologist about 50 years ago, and have operated a software business that I founded for the last 35 years. I tend to look at everything as systems with sets of incentives and constraints.
In this view, a human being capable of speech is a stack of complex adaptive systems at least 15 levels deep (some genetic/biological, some cultural/ethical/abstract).
In evolutionary terms, every level of complexity is built upon a level of cooperation, and naïve cooperation is always vulnerable to exploitation by strategies that “cheat” on that cooperative; it can therefore survive only if accompanied by an evolving ecosystem of cheat identification and mitigation systems.
In this view, each level of morality is such an ecosystem of cheat detection and mitigation systems that is some approximation to optimal for that particular context of its development.
It is in this sense that all morality seems to ultimately be about the survival of some level of complexity over time. As yet, I know of nothing more complex than a human being, and some developments in AI systems are starting to rapidly close that gap.
I am an autistic spectrum geek who has pushed sets of abstractions past 12 levels on a few occasions. It is very difficult to explain a second level abstraction to another person, let alone anything more abstract. It would take me decades to go through some of the details even with specialist mathematicians, because I have “flown over” many of the conceptual systems that I have tested in other domains, without doing the step by step work to revalidate them.
So I fully acknowledge that there are gaps in what I wrote if one is looking for a step by step development. And what I tried to do is to point to the major relevant themes, and the major supporting conceptual systems, and the details would take me several lifetimes to write out (I can think far faster than I can write).
The test for any moral system is being able to identify agents that are cheating on the cooperative that makes that level of complexity possible and then to mitigate the effects of that cheating (which at higher levels usually involves creating contexts that return that agent to cooperative behaviour).
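The "returning an agent to cooperative behaviour" part can also be sketched in toy form. The forgiveness rule and the injected one-off "mistake" below are my illustrative assumptions: strict retaliation (tit-for-tat) locks two would-be cooperators into an endless echo of punishment after a single accidental defection, while a slightly forgiving variant restores mutual cooperation — a minimal example of a context that returns an agent to cooperation rather than punishing forever.

```python
def tit_for_tat(their_hist):
    # strict retaliation: mirror the other agent's last move
    return "C" if not their_hist else their_hist[-1]

def generous_tit_for_tat(their_hist):
    # forgive an isolated defection: retaliate only if the other side
    # defected twice in a row (one illustrative mitigation rule)
    if len(their_hist) >= 2 and their_hist[-1] == their_hist[-2] == "D":
        return "D"
    return "C"

def run(strat_a, strat_b, rounds=10, noise_round=2):
    """Play two strategies, forcing agent A to 'mistakenly' defect once."""
    hist_a, hist_b = [], []
    for r in range(rounds):
        a = strat_a(hist_b)
        b = strat_b(hist_a)
        if r == noise_round:
            a = "D"  # a one-off accidental defection by agent A
        hist_a.append(a)
        hist_b.append(b)
    return "".join(hist_a), "".join(hist_b)

print(run(tit_for_tat, tit_for_tat))                    # retaliation echoes forever
print(run(generous_tit_for_tat, generous_tit_for_tat))  # cooperation is restored
```

After the single mistake, the strict pair alternate defections indefinitely, while the generous pair absorb the error and return to full cooperation within one round.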
I do not see any beauty in death and disability. I see far more beauty and stability in having a reasonable probability of living with the long term consequences of one’s actions – that seems to be required for stability at multiple levels.
The sorts of behaviour that are actually survivable long term depend very much on the context. And that can become a very deep strategic conversation.
I agree that fear and hate are major issues for many people. Understanding the evolutionary strategic contexts that support the emergence of such behaviours allows us to actively design contexts to suppress the expression of such things and to promote the emergence of cooperative and creative behaviours more generally. If one starts to deeply explore the systemic nature of both freedom and responsibility then one can start to appreciate that if one wants reasonable degrees of security with reasonable degrees of freedom, then it demands of us that we are both cooperative and responsible (to the best of our limited and fallible abilities, whatever level they may be at in any particular context).
And certainly there is necessarily more unknown than known in any infinity for any finite entity. So we are all necessarily ignorant and mistaken, and the role of science is to become less wrong over time.
Errors and mistakes are necessarily part of the process.
A degree of acceptance and respect seems to be required for all individuals, all levels.
Having been deeply involved in the development of a legislative system through two acts of parliament (one in 1983 and another in 1996), and in the development of many aspects of the monitoring, compliance, and enforcement systems over that time, I have no reasonable doubt remaining that closing loopholes in legal systems is an impossible task – there are too many levels of systems and complexity present.
I have thus decided that while a certain level of law is required as a backstop, the law itself can never provide stability; there must always be a level of responsibility present in individuals that is greater than any law, if we are to actually survive the risks in the complexity present.
My current focus is trying to make as many people as possible aware of the fundamental role of cooperation in both the evolution of complexity and in all real expressions of liberty. And in doing that I am consistently clear that liberty without responsibility necessarily self terminates.
So the current dogma common in economic and libertarian schools of thought, that competitive markets are the friend of liberty, could not actually be further from the truth in contexts where fully automated systems are coming “on stream” (they are here now, en masse).
So I am definitely a fan of longevity, and of liberty, and I freely acknowledge that liberty without responsibility is self destructive.
Hence the focus on bringing people to an awareness of the need for responsibility – all levels.
The “paradox” that will appear to many who have not looked deeply enough at the systems is that in really complex systems, hard constraints (like laws) tend to become brittle and fracture, thus breaking the system. The focus must shift from following the letter of the law to following the intent of the law, to the best of our limited abilities (and if one is not clear about the intent, then following the letter of the law is the best option). It is in this sense that I focus on responsibility. Too many laws prevent people being responsible, because the probability of punishment becomes essentially random, as the perverse incentives in complex contexts multiply exponentially.
It is in this sense that I assert that it is not logically possible to close all loopholes in any legal system. Any of that set of classes of approaches necessarily fails – Dave Snowden gives some great insights into that line of thinking, even as Dave and I differ on some of the details.