Oligarchies win except when society enacts effective reforms
“The collapse of urban cultures is an event much more frequent than most observers realize. Often, collapse is well underway before societal elites become aware of it, leading to scenes of leaders responding retroactively and ineffectively as their society collapses around them.” – Sander van der Leeuw, Archaeologist, 1997
The article offers useful insights in some contexts, but it also has major shortcomings in the depth of strategic complexity it captures.
The complete picture is much more complex; the picture painted is certainly one important aspect, but one amongst very many – dozens that are equally important, and a few that are much more important.
This site is supposed to be about evolution.
Evolution is about survival of replicators in strategic contexts.
Even if one accepts a fully causal reality (as Wolfram does), Wolfram himself demonstrates that such a reality attains many aspects of maximal computational complexity and becomes unpredictable (though still causal).
I strongly suspect that the fundamental levels of reality are actually stochastic, and simply deliver a close approximation to causal at the levels we are able to observe, which delivers a very different sort of reality, where the possibility of real choice, real freedom exists (rather than Dennett’s hidden lottery form).
Getting back to evolution and replicators more directly, we have two major domains of replicators that most evolutionists are now aware of: genetic and memetic. It seems that there may in fact be an infinitely recursive set of such replicator spaces available at higher levels of abstraction – not memes as such, but existing in a different dimensional structure, which in our reality requires genes to deliver an environment where memes can flourish, and memes to deliver an environment for the new replicator. Leaving that thought hanging for the present, let’s go back to what historically drives human evolution.
In a sense, evolution is about differential survival, and in another sense it is about efficiency of energy utilisation.
Hunter-gatherers required about 1 million square meters of land area per person. The technology was rather inefficient at converting sunlight into human beings, but it did work after its fashion.
We have gone through many strategic and technological forms, with aspects of our technology on an exponential increase. Some very few people have had some awareness of this extremely complex set of environments and nested contexts of evolution.
We can currently develop systems that allow a reasonably high standard of living from under 1,000 square meters of sunlight (using efficient solar collectors and robotics).
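A back-of-envelope check of that figure is easy to run. The sketch below is minimal, and the insolation and efficiency numbers in it are illustrative assumptions, not measurements:

```python
# Rough check of the "under 1,000 square meters" claim.
# All figures below are illustrative assumptions.
AVG_INSOLATION_W_PER_M2 = 200   # assumed year-round average at mid latitudes
PANEL_EFFICIENCY = 0.20         # assumed commodity photovoltaic efficiency
AREA_M2 = 1_000

electrical_power_w = AREA_M2 * AVG_INSOLATION_W_PER_M2 * PANEL_EFFICIENCY

# Metabolic food energy is only ~100 W per person; a generous allowance
# for a high standard of living might be ~10 kW of continuous power.
persons_supported = electrical_power_w / 10_000

print(electrical_power_w)   # 40000.0 W of average electrical power
print(persons_supported)    # 4.0 people even at the generous allowance
```

Even with these deliberately pessimistic numbers, 1,000 square meters of collectors yields tens of kilowatts of average power – orders of magnitude above metabolic need – so the claim is at least energetically plausible.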
Currently we have a technological form that is dominated by market exchange, rather than any sort of overall picture of efficient conversion of energy to human security and freedom.
Markets were an effective tool for coordination in an age of genuine scarcity of most resources (as noted by Smith and Hayek and others), but as technology has developed to the point of being able to deliver a rapidly exponentially expanding set of abundant goods and services, the scarcity based values of markets and exchange actually become the single greatest threat to the security of every one of us – even those oligarchs at the top.
Absolute security is a myth, and we can do a lot better than we are.
Absolute freedom is a myth, and again, we can deliver far greater practical sets of choices to everyone than are currently available. (One always has the freedom to end one’s existence in a sense, and that seems to be the most limited form of freedom. Freedom in any meaningful sense seems to require a reasonable probability of continued existence – and that technology now seems to be available.)
We either leave our scarcity based paradigm of money and markets behind, and adopt a paradigm that is based in universal abundance, or we have a very low probability of survival (as individuals or as a species) – that much is abundantly clear.
We have the technology to make distributed manufacturing and distributed high fidelity trust networks a reality. These things do not require hierarchy or central control. Individuals use context sensitive heuristics to grant authority to those in their trust networks depending upon context, and these cascades of trust and information flow can deliver very effective and efficient decision making.
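As a toy illustration of how such trust cascades might compute authority: the agents, weights, and the multiplicative combination rule below are all hypothetical assumptions, not a specification of any real system.

```python
# Toy sketch of context-sensitive transitive trust in a small network.
# trust[a][b] = how much agent a trusts agent b in one given context (0..1).
trust = {
    "ana":  {"ben": 0.9, "carl": 0.4},
    "ben":  {"dana": 0.8},
    "carl": {"dana": 0.9},
}

def derived_trust(source, target, graph, depth=3):
    """Best trust along any path of length <= depth, multiplying edge weights."""
    best = graph.get(source, {}).get(target, 0.0)
    if depth > 1:
        for mid, weight in graph.get(source, {}).items():
            best = max(best, weight * derived_trust(mid, target, graph, depth - 1))
    return best

# ana trusts dana via ben (0.9 * 0.8 = 0.72), better than via carl (0.36),
# so a claim from dana arriving through ben carries more authority.
print(derived_trust("ana", "dana", trust))
```

In a different context the edge weights would differ, which is the context-sensitive part: the same network can route authority quite differently depending on what is being decided.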
Full automation of manufacturing and service delivery is the key. People can do any aspect of the process they want to, and if they don’t want to, then the automatics can take over and function at a useful level of efficiency (even if not quite so efficient as the best of humanity).
Information and technology universally available, through trust networks, in near real time (millisecond delays).
Technically, such systems are not difficult.
Socially, in a context of market based values, trying to create profit, they are impossible.
The issue of our age is not reforming markets or money.
The issue of our age is using distributed automation and communication to make markets and money a redundant paradigm, of historical interest only.
Elites tend to be conservative, in the sense that they became elite by being successful in the existing context. They tend to rely on things that have worked in the past.
As failures start to mount in complex systems it is always possible to make a reasonable case that it is some part of the complex system that is at fault, rather than the paradigmatic base of the system as a whole.
Yet the logic is clear.
The answers are the same. It works whether one assumes causality (as per Aristotle, Wolfram et al) or stochasticity (as per Rumi, Heisenberg, myself et al). [Rachel Garden’s universal logic is an intermediary paradigm that also appears to deliver the same outcome.]
I see no stable or safe way to continue using markets as a dominant paradigm of value, with the necessary consequence of continued poverty for the mass of humanity.
We really do seem to be approaching something of a binary – where it really is one of those very rare all or nothing situations at a major paradigm level – and not simply at a quantum or neuronal level.
If any of us want a reasonable probability of living a very long time with reasonable levels of security and freedom, then we must be able to deliver that to everyone – no exceptions. And there is a test of reasonableness in here.
Chang’s approach does not address the base issue I am pointing to.
He just develops a different instance of it in a very real sense.
In an age when most things were genuinely scarce, then it made very good sense to organise around exchange, and to incentivise the production of novel methods of production. As Hayek and others (arguably Marx) pointed out, markets thus served an information processing and coordination function.
The big issue is that exchange values are predicated on scarcity.
When most things (goods or services) required human input, then there was a certain stability in the system.
We are now in an age of exponential expansion of computational ability (double exponential actually, currently on about a 10 month doubling time).
What that means is that full automation of classes of goods and services is expanding faster than human beings can create new skills of value.
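To make the rate concrete, here is a quick calculation under the ~10 month doubling time assumed above (the figure itself is an estimate, not a measured constant):

```python
# Growth implied by an assumed 10-month doubling time in computational ability.
def growth_factor(months, doubling_months=10):
    """How many times capability multiplies over the given number of months."""
    return 2 ** (months / doubling_months)

decade = growth_factor(120)   # ten years: 2**12 = 4096x
career = growth_factor(480)   # a 40-year working life: 2**48, ~2.8e14x
print(decade, career)
```

Human skill acquisition is at best linear on those timescales, which is exactly the mismatch the paragraph above points to.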
The “labour” side of the labour/capital mix is becoming more and more marginal.
So we have this really perverse outcome, in that the more we automate production of goods and services (which should make them easily available to all) the less ability there is for those at the bottom end of the economic distribution curve to access them.
We have the potential for universal abundance of a large and exponentially expanding set of goods and services, but the current exchange based (scarcity based) paradigm of markets and money cannot give a positive value to any universal abundance (by definition it has no exchange value – it is universally abundant).
Currently people are keeping the system operating by creating artificial legal barriers to such abundance, in terms of concepts like intellectual property, and “health and safety legislation”, which are both actually practical mechanisms for creating scarcity (preventing universal abundance) and thereby maintaining market value for a set of goods and services.
This has worked for a while, and the spin doctors have been able to convince themselves and others of the merits of what they are doing, but it actually isn’t in anyone’s long term interests.
There is a large set of very real low probability high impact events that are a very real threat to everyone.
Mitigation measures for those threats require massive redundancy, and widely distributed and highly interconnected networks (at all levels).
Markets cannot deliver that long term.
We need to go post scarcity.
Fully automated and fully distributed production and information.
Distributed trust networks.
Human nature is very complex, very context dependent.
We can be very cooperative and very competitive.
Context is king.
Our current market systems force most people into competitive modes, which is not stable for anyone long term.
Most people don’t actually want or need a lot of resources (some do, certainly, but not many).
It is easily possible to deliver to every person a set of goods and services (fully automated) that they would consider fair and reasonable, and they would not have an issue that some others had much more than that. The freedom and security they experienced from the set of resources available to them would be sufficient context to maintain a cooperative nature in the vast majority of humanity.
In an age where indefinite life extension is becoming a very real possibility, all individuals interested in such things need to be thinking about all the many other risk factors to survival – most of which come from strategic interactions within human populations (or the extended set of memetic and higher order entities within those populations). Money and markets become the major strategic threat in such an environment.
In the name of personal security, we need to change to an abundance based paradigm. There is an infinite set to choose from, any of which will work. We just cannot any longer afford the risks associated with a scarcity based exchange paradigm (nor do we need to).
In a very real sense, it is only a sort of “cultural drag” (social inertia) on the conceptual sets of understandings available that hold us in our current pattern.
Lubricating paradigms are available.
This is really complex, “tricky” to use a New Zealand colloquial term that may not quite carry the same connotations in other cultures.
Firstly – I agree completely that much of the “code” of our genetics, our culture, and our higher order (post-memetic) evolution is made redundant by the exponential rate of change we find ourselves in (recurse to infinity).
The really tricky bit is finding effective ways of judging just which bits of the coding milieu are no longer as utilitarian as they once were, and in what contexts.
I’ll try to illustrate with a practical analogy from my life.
I grew up on farms. By four years old I had already worked out how to make herds of animals go where I wanted them to. I could already see that moving a herd meant maintaining a balance between inquisitiveness and caution, and not going over the border into terror (which would cause the herd to fracture). I could already see that the balance of transition between states was different for different individuals within the herd, so the balancing act of how close to the herd I was, and how threatening I was, varied with which individuals were closest to me, and with the general threat level of the context (the closer to buildings the more nervous the individuals, the greater spacing required to maintain balance, etc).
By four years old I knew how to do that.
I didn’t understand it in the terms I just outlined, but I could do it in practice.
The more I did it, and the more I studied psychology, cybernetics, behaviour, zoology, game theory, etc., the greater the depth of my understanding became, and the more utilitarian and generalised across domain spaces my theoretical understanding grew – my neural network was configured for that sort of thing at a very early age.
A little over 42 years ago, as an undergrad studying biochemistry, it became clear to me that all cellular life alive today is part of a continuum of cellular life that is some 4 billion years old. In this sense, indefinite cellular life is the default condition for cells, and age related senescence is something that certain lines of life have evolved for certain lines of cells – i.e. aging is a genetically controlled trick that allows for rapid evolution of multicellular complexity. That being the case, at some point we would figure out the exact genetic mechanism and alter it. Indefinite life extension was a real possibility.
So, given that as a starting position, and being clear that plenty of people were interested in that, and it would most likely happen in my lifetime, I settled into the really difficult question that results from such a realisation.
Given that indefinite life extension is possible, what sort of social, political and technical institutions (strategies in the deepest sense of strategies, code in the sense you seem to be using in the article you refer to) are required to give potentially long lived individuals a reasonable chance of living a very long time?
For 17 years I worked as a commercial fisherman, which is a job that occupies the body and leaves the mind very free. It is also a job that is in an ever changing environment (the ocean with its waves, tides, currents, life forms, etc) – much more chaotic than any human context. I read, I contemplated.
I built my first computer from components.
Then I bought an assembled one, started programming in earnest. Formed a computer club, met lots of interesting people.
Eventually I sold my fishing interests and started a software company.
For the last 30 years I have run that software company, and every day that 42 year old question has been with me.
About half my time I spend exploring new concepts, new ideas, developing new competencies, refining old ones.
In 1978 I discovered Richard Dawkins’ The Selfish Gene, and it transformed the way I thought about culture and individuals. I got how evolution can recursively explore new domain spaces. I observed post-memetic evolution within my individual consciousness, within the replicator space of my individual neural network.
I explored domain spaces, looking for commonality, looking for fundamental principles, in as much as such things exist. I read a lot. I met a lot of people, many different groups. By the mid 80s I was active in over 30 formal groups and about a dozen informal ones.
I finally understood the fundamental limitations of any and all exchange based paradigms. Money and markets are one specific instance of an exchange based paradigm, and none of them can consistently assign a positive value to universal abundance. They fundamentally impose scarcity as a limiting paradigm – kind of like the implicit box that stops most people solving the nine dots problem.
So in that context, in as much as the PI system is an exchange based paradigm, it does not transcend those limits.
And in another sense, I can see a certain utility in the paradigm.
There are many aspects, biological and cultural, to being human that no longer serve our best interests.
While acknowledging the realities of both biology and culture, I am essentially post both in a very real sense. I survived a terminal melanoma diagnosis by radical change of diet and by reprogramming my neural networks to like new things (years of intentionality without exceptions) – not easy, but doable.
And there are many aspects of both biology and culture that are very applicable to our current exponential context, and can be powerfully used.
Human neural nets are very powerful distinction systems if they are given useful training, and they are prone to a vast array of errors (many explicitly developed in Yudkowsky’s Rationality: From AI to Zombies).
So coming back to moving herds.
I see the use of tools, like David Snowden’s Sensemaker, and group decision tools like unu.ai as having great utility in the near term, particularly if they are generalised and universally available to all levels of human social interaction.
So in that sense, I like the dimensionality of Chang’s context, if it is used in the sort of context that Snowden would consider appropriate.
And in the wider (deeper) context of computation more generally (as in the sorts of complexity that Wolfram explores in NKS, even if he assumes causality, and I prefer to generalise even further to fundamentally stochastic systems which have the strictly causal as one limiting case of an infinite spectrum) it seems clear to me that we must accept the infinite diversity implicit in freedom, even as we accept that freedom cannot be absolute, and that life and liberty are necessarily the highest values of any sapient entity.
[And I’m not sure if that will clarify anything, and it is the best I can do at this time.]
This is a seriously yes and no sort of reply.
Constructor theory is kind of cute, in the same way that Kurt Goedel’s incompleteness theorems are cute.
Goedel holds a place of eminence in my experience, as being the only individual whose work I have not been able to falsify in some significant aspect, and I am also clear why he holds that spot. He did it by staying strictly in the realm of logic, and making no assertions at all about reality.
Constructor theory fails that test immediately, and it is cute – has a certain intellectual symmetry and elegance to it, but not enough to significantly alter my understanding of reality.
To understand how that might be so, I need to explore your second idea, that Wilson somehow exploded Dawkins. To me that division was more of a burning of a straw effigy, rather than anything of substance.
The key thing I got from reading Dawkins was the recursive nature of evolution.
In the domain of genetic replicators, it was there in RNA working with ribosomes (RNA constructs) to give proteins, thence opening a whole new realm of catalytic constructors, amplifiers, gates and logical systems (the first major recursion). Then on to the level of prokaryotic cells, then eukaryotes, then multicellular forms, multiorganed organisms, wider groupings and social organisation, etc. As a biochemist, the precise mechanisms for how that was done fascinated me. The physical mechanisms, and the state spaces they delivered for evolution to work in – deeply, profoundly beautiful. The simultaneous operation of all the different levels.
Once brains evolved, the second major domain step could occur (this was explicitly explored in chapter 11, “Memes: the new replicators”). In one sense Dawkins can be read as saying that it was a new domain, and in another sense that it was a step on a potentially infinite recursive path. It was that latter idea that immediately captured my intuitive attention. Having been using the tools of mathematical induction in all spheres of enquiry for over a decade at that point, I applied them here. I had good evidence for the cases n=1 (Darwin, Watson-Crick, Dawkins et al) and n=2 (memes, Axelrod, Maynard Smith, Dawkins et al – and later von Neumann, Babbage, Lovelace, Turing, Wolfram et al); could I find an n=3? Yes I could: me (and I suspect many others, but I had evidence for my own specific case). I looked very deeply within, applied as many tests as I could find – yes, n=3 seemed probable. If I assumed n=k, could I demonstrate n=k+1? After a few weeks of exploration, that did in fact seem probable, but I didn’t have the time or interest to develop it formally (my chosen interest lay elsewhere, as already stated).
So I have never seen any fundamental difference between EO Wilson and Dawkins. They are both just talking about instances of levels of complexity within early level domains of evolution in a sense. They seem mostly just to be talking straight past each other, using different sets of assumptions and not taking sufficient time to generalise the assumption sets to the point that the difference becomes obvious. Having met Richard, and having read enough of Wilson, I can see how that might be so. I’ve spent enough time in Mensa groups to see that in operation at many different levels – one of those aspects of being human that recurses all too easily.
Which gets us to the really interesting bit, what is reality?
And that is where I seem to diverge from most.
It seems very clear to me that reality is a something, and that it very closely approximates a causal domain space at some levels, yet at other levels it seems to display stochastic properties (Heisenberg et al) [kind of the flip side of Descartes’s “cogito ergo sum”, in a very real sense].
How could that be?
Well actually that is quite easy, once you start to get a grip on probability and collections of stochastic systems. Individually stochastic systems are by definition unpredictable, but grouped together they become more and more predictable.
When you consider that the smallest thing a human eye can resolve is a collection of some 10^18 of these stochastic systems existing for some 10^40 of their time units (state spaces), then it is not at all surprising that our world seems to follow hard causal rules, even quite broad probability distributions can deliver very reliable outcomes over that sort of state space.
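That aggregation effect is easy to demonstrate. The sketch below is a toy simulation with uniform random draws – not a claim about the actual substrate of physics – showing that the spread of an average shrinks roughly as 1/√n:

```python
import random
import statistics

random.seed(42)  # fixed seed so the demonstration is repeatable

def mean_of_sample(n):
    """Average of n independent uniform(0,1) draws - one 'macroscopic' view."""
    return sum(random.random() for _ in range(n)) / n

# A single draw is anyone's guess; an average over a million draws is not.
single = random.random()
aggregate = mean_of_sample(1_000_000)

# Spread of repeated averages shrinks roughly as 1/sqrt(n).
spreads = {n: statistics.stdev(mean_of_sample(n) for _ in range(200))
           for n in (10, 100, 10_000)}

print(single, aggregate)
print(spreads)
```

At 10^18 component systems over 10^40 state-space steps, the residual fluctuations sit unimaginably far below anything the eye could resolve, so hard causal behaviour is what we observe either way.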
So while I very much enjoyed Wolfram’s NKS, in a sense, I do not at all agree with his stated assumption that reality is founded on hard causality. It is not required. At our current level of technology such an assertion is neither provable nor disprovable. (And I do get how Rule 30 can deliver probabilistic like outcomes, and it can also work back the other way – so it doesn’t actually prove anything in that sense. I reject the notion implicit in hard causality that my existence is an illusion, and claim my choice in a fundamentally Bayesian existence.)
And don’t get me started on the many forms of QM experimental inference, none of them do a reasonable job of exploring the state space of assumption sets possible – so not worth worrying about at this juncture.
It seems to me far more elegant that reality has the entire domain of stochastic systems available – from the simple binary of true-false, to the far more esoteric systems of probability that probability theorists from Bayes onward have loved to explore.
So in this sense, the idea that constructor theory can have anything certain to say about the nature of reality seems to me as ignorant and hubristic as Plato (while acknowledging all the elegance that both have brought to the realm of human thought). Certainly there will be influences at different levels – it does seem to be a useful tool in some domains – but not at any sort of fundamental level. Reality seems to be somewhat more creative than that.
One of the key things to get about being human, is that as conscious entities, we have no direct access to reality. All of our experience is of a subconsciously generated model of reality, that is predictive in nature, by some contextually variable number of milliseconds, which leads to all sorts of misunderstandings about the outcomes of certain psychological experiments on the nature of choice and determinism in the human mind as one notable example.
So here we are.
Experiential entities experiencing our own personal versions of reality, loosely connected, it seems, to some actual reality, whatever that actually is.
And we are so constrained by the things we accept (mostly unconsciously) at every level of awareness (which seem potentially infinite).
Infinite stacks of infinities – any and all of which hold interesting paths for one to explore – in our eternal ignorance.
What an amazing existence!!!
What a serious and debilitating drag, limitation and existential risk, is the currently dominant notion of exchange values!