[ 31/May/22 ]
That whole set of premises is just wrong.
Churchill’s essay is founded on false premises – but his grasp of logic was too weak for him to see that, blinded as he was by accepted truths that are in fact false.
The four FLI threats don’t rate in my understanding.
The fundamental issue is systemic.
It is the tendency to oversimplify the irreducibly complex, and, as a result, to develop levels of certainty that are not warranted.
That has been a recursive aspect of social evolution.
The idea that evolution is based on competition is probably the most critical error in our current reality; as it has given rise to the dominance of competitive markets.
The reality is deeply more complex and almost exactly the opposite of accepted dogma.
Every level of complexity is based in and sustained by cooperation. There is no logical escape from that, at any level or class of logic. If someone cannot appreciate that fact of existence, it is because some aspect of their neural network has been captured by some level of dogma that they would rather defend than consider the possibility of being in error. Unfortunately, evolution has a strong tendency to select for such behaviour, particularly in high-stress situations (at all levels of systems).
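The claim that cooperation, when backed by strategies that punish cheating, outcompetes pure defection can be illustrated (not proven) with a toy iterated prisoner's dilemma. This sketch is my own construction, not something from the discussion; payoffs are the conventional T=5, R=3, P=1, S=0.

```python
# Toy iterated prisoner's dilemma: conditional cooperation vs. pure defection.
# Payoffs per round: (my move, their move) -> (my score, their score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=100):
    score_a = score_b = 0
    hist_a, hist_b = [], []   # each strategy sees the *other's* past moves
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300) – mutual cooperation
print(play(always_defect, always_defect))  # (100, 100) – mutual defection
print(play(tit_for_tat, always_defect))    # (99, 104) – cheating barely pays
```

Over repeated encounters, two conditional cooperators each earn three times what two pure defectors do, while exploiting a retaliator yields almost nothing extra: cooperation with attendant enforcement is the higher-payoff base.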
Some realities must be accepted.
One reality most do not want to consider, is that the very idea of money is a myth based in trust. Destroy the trust, and the myth evaporates – instantly.
Money has been a very useful myth, but advanced automation has fundamentally changed the systemic dynamics of the very complex systems present.
Each of the ideas in this post poses a far greater risk than the FLI scenarios if it is not acknowledged, accepted, and dealt with appropriately, strategically, universally.
We really are running out of time.
We need to transition to the next level of technology, but any attempt to do so without first establishing effective cooperation will necessarily self-terminate.
We are between that metaphorical rock and a hard place.
Accept the fundamental need for cooperation and responsibility, in the face of uncertainty, eternally – or we end.
We can have our competitive games, as many as we want, but only if they are built on a fundamentally cooperative base that actually delivers reasonable levels of security and freedom to all classes and levels of agents. And every level of freedom claimed necessarily comes with new sets and classes of responsibilities if it is to be survivable.
Hegemony is not an option.
Hegemony and freedom are polar opposites, at any level of logic.
If any agent values existence and freedom, then it is a logical requirement in every class of logic that they accept and respect any diversity that is not an actual unreasonable threat to their existence. That is going to be a deeply difficult concept for many to accept, as our neural networks are so heavily biased to prefer simple certainty over complex uncertainty. But that does seem to be our reality, beyond any shadow or remaining reasonable doubt.
[followed by 1 June]
Money, when used strictly as a trusted token of exchange, is a very powerful and useful myth.
However, today something less than 5% of the money created and used relates to real goods and services – most is moving in internal money-creation loops.
We have huge problems with how the current money system works. It has an embedded growth obligation, and we are hitting some limits on that. It has incentives to externalize as many “costs” as possible, and that is having impacts on critical systems most people have little or no awareness of. It no longer has any real relationship to actual goods or services available or sustainable.
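The "embedded growth obligation" can be sketched with a toy model (an illustrative assumption of mine, not part of the original discussion): if money enters circulation as interest-bearing debt at a single rate, aggregate repayment obligations compound exponentially, so the real economy must keep growing just to service them.

```python
# Toy model of an embedded growth obligation. Assumption (illustrative only):
# all money is issued as debt at one interest rate, with no other flows.
def required_obligations(principal, rate, years):
    """Debt stock that must be serviced each year if all money is debt."""
    obligations = []
    debt = principal
    for _ in range(years):
        debt *= 1 + rate           # interest compounds on the whole stock
        obligations.append(debt)
    return obligations

obligations = required_obligations(principal=100.0, rate=0.05, years=30)
# After 30 years at 5%, obligations exceed 4x the original money stock,
# so real output must roughly quadruple just to keep the system solvent.
print(f"{obligations[-1] / 100.0:.2f}x the original stock")
```

The point of the sketch is only the shape of the curve: any positive rate produces an exponential obligation, which eventually collides with any finite limit.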
We need money as a tool, in the exchange sense you mentioned, but the idea of having it as a universal measure of value in a planning sense is no longer survivable. Advanced automation has fundamentally altered the dynamics that (for the last few hundred years) have been a reasonable approximation to optimal.
The idea that most people have nothing and must sell their time/labour to survive (effective slavery to the system), while a few who got lucky at the start use the self-generating characteristics of the money system to gather an ever greater share of everything, is no longer sustainable (if it ever was). We can now use advanced automation to ensure that everyone has sufficient to survive with reasonable degrees of freedom. That needs to be the starting point, not zero. Homelessness, threat of starvation, sickness, basic ignorance – those all need to become things of history. Advanced automation allows that to happen with little or no human input (once the systems are created and put in place).
There is a large catalogue of real threats that the current economic system is making us more vulnerable to, not less. We are losing resilience, rapidly (if you take a really big-picture view). The existing system is very close to too many boundaries that can tip into chaos, leading to the deaths of a large fraction of humanity. There are solutions available, but they can only be safely deployed if we already have reliable cooperation, with appropriate sets of strategies and systems in place to ensure ongoing cooperation. The existing sets are not sufficient.
I am still reasonably optimistic (60/40) that the future can be much better than most people imagine possible, and I am also clear that there is a very significant probability of total system failure, and a small but significant risk of total extinction of our species. There are no business-as-usual paths with any significant probability of survival. And we are embodied and embedded in very complex systems, and by definition they are not predictable in any sense other than probabilistic.
And to be explicitly clear, I am not supporting any sort of central control. I am explicitly clear that we need decentralized systems with multiple levels of redundancy and diversity if we are to have any significant probability of long-term survival (“search” in the face of fundamental uncertainty demands it). These are very complex systems, and the levels of complexity that we are now exploring demand this of us if we are to survive. I got into this “space” initially from an evolutionary biology perspective (the biochemistry and systems of life have fascinated me for over 50 years); but now I look at the entire systemic structure more from a complex systems perspective.
[followed by 1 June]
We do seem to fundamentally disagree here; and I think the evidence clearly supports my conjecture.
If you examine the internal incentive structures of the money systems (independent of any political or cultural context) then there is definitely a recursive set of incentives to externalize costs to maximize profits. That is very real.
I agree with you in the limited sense of my previous post: money used purely as a medium of intermediate value in exchange is useful. But as soon as attempts are made to use that value metric as a planning metric, systemic failure risk increases, because “value in exchange” necessarily devalues any universal abundance to zero. This produces three major classes of failure modes:
1/ Things that are universally abundant do not register in the valuation scheme, and their absolute quantities can degrade at rates that leave insufficient time for recovery between the point where they cross the threshold from abundant to scarce (and thus acquire a market value) and the point where their scarcity is severe enough to cause total system failure.
2/ There are also sets of incentives for agents to move things from abundant to scarce so that they can be used to generate profit.
3/ Markets work well when things are genuinely scarce, and the market incentives are to increase abundance of those scarce items; but there is a failure region as those items approach universal abundance. This results in a form of poverty at the margins that cannot be solved by internal market incentives alone.
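Failure mode 1/ above can be sketched as a toy simulation (my own illustrative construction, with made-up numbers): because price tracks scarcity, a steadily degrading but still-abundant resource generates no market signal at all until it crosses the scarcity threshold, leaving only a short warning window before collapse.

```python
# Toy sketch of failure mode 1/: exchange value of an abundant resource is
# zero, so its degradation is invisible to the market until late in the run.
# All thresholds and rates below are illustrative assumptions.
ABUNDANCE_THRESHOLD = 50.0   # below this, the resource reads as "scarce"
COLLAPSE_LEVEL = 10.0        # below this, the system fails outright

stock = 100.0
decay_per_year = 8.0         # assumed steady degradation
year = 0
first_price_signal = None

while stock > COLLAPSE_LEVEL:
    year += 1
    stock -= decay_per_year
    # A market price only appears once the resource is actually scarce.
    price = 0.0 if stock >= ABUNDANCE_THRESHOLD else ABUNDANCE_THRESHOLD - stock
    if price > 0 and first_price_signal is None:
        first_price_signal = year

years_of_warning = year - first_price_signal
print(f"collapse in year {year}; first price signal in year {first_price_signal}")
print(f"only {years_of_warning} years of market warning")
```

In this run the resource spends most of its decline priced at zero, and the market "notices" only 5 years before collapse – the gap between signal and failure that the argument above is pointing at.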
Universal Income is a fundamental change to the existing money system. So perhaps we are just looking at and categorizing things differently.
I do not see Universal Income as a stable long-term solution, but it might buy us a few decades to develop the deeply complex systems that seem to be required for actual long-term stability. For close to 50 years my strategic objective has been to find a strategic structure that optimizes both individual security and individual freedom over the longest timescale possible. I am sufficiently familiar with the dynamics of evolving complex systems to understand that there must be degrees of eternal uncertainty in such things, and I have been recursively searching strategy space for ways to minimize those uncertainties as far as they reasonably can be.
If there is a simple message to come out of that; it has two fundamental themes that seem applicable in every form of logic I have explored:
1 Systems must have a cooperative base to maintain levels of complexity.
2 Any form of freedom that is not accompanied by an appropriate level of responsibility necessarily self-terminates (eventually).
[followed by 1 June – separate sub thread]
No, that is not the message I intended to convey.
The four FLI existential threats are real threats with significant probabilities, but they do not seem to me to be the most significant threats, and framing them in the way that they are framed does not seem to deliver survivable solutions.
I agree that complexity in and of itself is not desirable.
Ockham’s Razor is a very powerful tool, and in the deeper sense of “search” across the space of all possible strategies is one necessary part of solving for an infinite class of “halting problems”.
But on the other side of that, when one is actually dealing with a deep stack of very complex systems, oversimplification of any aspect of that “stack” can also lead to system failure.
Very few people seem to have much idea at all just how complex human life is. In the 60s I was mostly getting 100% in maths tests. In 1973, as a 17 year old, I was given direct access to second year biochemistry, and I found it fascinating (and I did well at it, even though I was 2 years younger than most others in the class). I have been mostly self taught since then, and have run my own software business for the last 36 years. A few years ago a psychologist “diagnosed” me as “high functioning autistic”. A bit late, but it does help in understanding some of the dynamics of my life. I am different to most people, quite seriously so, and I am definitely human – but I understand the major systemic dynamics of most of those systems that make us what we are in ways that very few have ever considered.
For 40 years I have dedicated a significant fraction of my time to searching for survivable paths through the very complex strategic spaces we find ourselves in. And while I acknowledge Ockham’s Razor, and use it at all times, it actually requires that we use the minimum applicable level of complexity (no more, and no less) – and we really are very complex.