[ 14/March/22 – Richard shared a TikTok post blaming Jacinda for everything]
If you really are interested in some of the sorts of strategies being played out across the planet – in which we, and almost everyone else (Jacinda included), are pawns – then listen to this interview:
And (of course) I have a contrary view about the issue being irresolvable (at 17:30) – though resolution is deeply complex, and cannot be done within the existing strategic dogma (I agree with him that far).
And I agree with a lot of what Samo says – and as I see it, it is far more complex than even he has alluded to.
And it is still a fact that New Zealand has one of the lowest debt-to-GDP ratios in the developed world.
Sorry about that. Perhaps this will be a little less opaque.
I was trying to point to a set of dangers in oversimplifying the understanding of the games being played.
Where Samo talks about the statecraft needed around 45:20, and at 46:58 about the values of the west without the empire of the west – we completely align. His warnings about what the west has actually been doing in practice, at around 42:35, are entirely appropriate.
Now consider the strategic context that Samo suggested around 27:40 – of Putin and the power dynamics of the immediate local context he exists in.
Now consider all of that in the context of the history and emergence of strategy, and in the context of the evolution of neural networks (our brains, and those of AIs – and the differences between them). The literature on strategy is vast, and I have read only a tiny subset of it, but I am an evolutionary biologist by training and interest – so the strategies of living systems have been an interest for over 50 years.
We are all born with multiple levels of bias in our neural networks. To a degree those biases are necessary for us to be able to make any sense, in any usefully short time, of the complexity within which we find ourselves embodied and embedded; and all such things come with constraints and costs.
I am on the autistic spectrum. What that seems to mean is that my neural networks lack some of the constraints normally present in humans. On the downside, it means I find the behaviour of most human beings unintelligible much of the time – I have no real idea why they do what they do (that applies as much to my dearly beloved wife as to most other people). On the upside, I can see enough of the details that seem to be subconsciously simplified away in most people that I can develop abstractions, and views of patterns in reality, that most cannot see. The problem with autism seems to be that we tend to get overloaded with data, and need to develop levels of mechanisms that allow us to cope with that. I have mine – I am weird.
One of the things I have noticed is that evolution seems (understandably) to have biased our neural networks to preferentially notice threat over opportunity. In a strategic sense that is understandable: it is much more important to avoid all lethal threats than it is to take advantage of all possible opportunities. The deep strategic issue is that this tends to spiral into self-sustaining systems of paranoia, and delivers the sorts of behaviour we see at multiple levels today.
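That asymmetry can be made concrete with a toy expected-value calculation (all numbers here are illustrative assumptions of mine, not measured properties of any nervous system): if ignoring a real lethal threat is catastrophic while seizing an opportunity pays only a modest amount, even rare threats dominate the calculation, so a threat-noticing bias gets selected for.

```python
# Toy expected-value model of the threat/opportunity asymmetry.
# All payoff numbers are illustrative assumptions, not measured values.

p_threat = 0.01            # probability an ambiguous signal is a lethal threat
cost_missed_threat = 1000  # catastrophic cost of ignoring a real threat
gain_opportunity = 1       # modest payoff of seizing an opportunity

# Expected cost of treating every ambiguous signal as harmless:
expected_cost_ignore = p_threat * cost_missed_threat      # 10.0

# Expected gain of treating every ambiguous signal as an opportunity:
expected_gain_chase = (1 - p_threat) * gain_opportunity   # 0.99

# Even when only 1% of signals are real threats, caution dominates
# by an order of magnitude - hence the evolved bias toward threat.
print(expected_cost_ignore, expected_gain_chase)
```

The particular numbers do not matter; the point is the structural asymmetry between a bounded upside and an unbounded (game-ending) downside.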
Mutually Assured Destruction is not a viable long-term strategy, because it relies on both sides actually being mad enough to do it – and that level of insanity is deeply destructive at multiple levels. It is a result of oversimplifying the irreducibly complex.
In simplifying strategy down to matrices of zero-sum games, von Neumann and game theory generally have left us a legacy of death spirals. They abound in all domains – military strategy, financial strategy, economic strategy, political strategy – all essentially resulting from a strong tendency to oversimplify, and from the multiple levels of confirmation bias that follow. Without a very broad and deep understanding of evolutionary biology, any strategic game tends toward some abstracted form of death by paranoid spiral.
As far as my inquiries into this realm of indefinite abstraction of strategy across all possible strategy spaces have taken me over the last 50+ years, it seems that the only sets of strategies with any significant long-term survival prospects are those which acknowledge the evident biological reality of the fundamental role of cooperation in the emergence and ongoing survival of any and all levels of complexity. And at every level, that means evolving ecosystems of cheat-detection and mitigation strategies. At higher levels that shows up as various forms of morality (Nietzsche’s error in calling this “slave morality” was profound, and understandable in his time).
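The cooperation-plus-cheat-detection point has a standard formalisation in the iterated prisoner’s dilemma. The sketch below uses the conventional textbook payoffs (not anything from the interview): unconditional cooperation is exploited by an all-out competitor, but a cooperator equipped with cheat detection and mitigation (tit-for-tat) sustains the high mutual payoff against its own kind and loses almost nothing to defectors.

```python
# Iterated prisoner's dilemma: cooperation with cheat detection (tit-for-tat)
# versus unconditional strategies. Payoffs are the conventional textbook values.

PAYOFF = {  # (my_move, their_move) -> my score; 'C' cooperate, 'D' defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def always_cooperate(history):
    return 'C'

def always_defect(history):
    return 'D'

def tit_for_tat(history):
    # Cooperate first; thereafter mirror the opponent's last move.
    # Mirroring a defection is the "cheat detection and mitigation" step.
    return history[-1] if history else 'C'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []   # each player's record of the OTHER's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(always_defect, always_cooperate))  # naive cooperator is exploited
print(play(tit_for_tat, tit_for_tat))         # cooperation sustained at 3/round
print(play(always_defect, always_defect))     # mutual-defection death spiral, 1/round
print(play(tit_for_tat, always_defect))       # tit-for-tat loses only round one
```

This is the dynamic Axelrod’s tournaments made famous: over repeated play, strategies that cooperate by default but detect and punish cheating outperform both unconditional cooperators and unconditional defectors.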
It is always and necessarily far easier to destroy than to create – that is a simple fact of thermodynamics. As the lyric from Hamilton goes – dying is easy, young man, living is harder.
From my explorations a simple(ish) fact emerges: when considering intelligent technological agents, the only significant probability of long-term survival for any class or instance of agent comes when there is cooperation between all classes and levels of agents that respects the lives and freedoms of all. And freedom in this sense always and necessarily comes with limits and responsibilities, if it is not to self-destruct.

Put even more simply: no form of all-out competitive game is survivable in the presence of intelligence and tool use. Competition is only survivable if it is built on a cooperative base that respects the life and liberty of all instances and classes of agents that are not an actual and unreasonable threat. The very idea of hegemony, at any level, is antithetical to freedom – by definition. And freedom must necessarily be constrained to respect the lives of others, and to cooperate where needed for survival. If that is the base, then survivable competitive games can be built upon it.
The current notion of nation states playing MAD power games is not survivable.
The current competitive economic system is not survivable.
The current simplistic political systems are not survivable.
The evidence is beyond any shadow of reasonable doubt, but the evidence sets are vast – and few have the time, interest, or ability to look.
We need advanced technology to counter a large collection of existential-level risks, if we are to avoid the fate of the dinosaurs. So it is competition that must be constrained within survivable limits. This strategic context seems to me to be the most probable candidate for the “Great Filter”.
It seems very probable that we do not have many months left to achieve widespread recognition of this if we are to avoid it. How “widespread” is a deeply interesting question.