Jordan Hall – Neurohacker Collective
Latecomer here; I have largely enjoyed the series up to this point, but this video seems to miss the major issue present.
Up until this point, the tools were replacing skill sets, so there was always the ability to innovate.
This time it is fundamentally different, because the tools are now themselves exploring the space of computation and innovation. They are tools of computation being recursively applied to computation itself, and thus their growth is on a double exponential.
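To make the distinction concrete, here is a toy numerical sketch of my own (nothing from the video): plain exponential growth multiplies capability by a fixed factor each step, while "double exponential" growth arises when each generation of tools is used to build the next, so capability compounds on itself. The specific numbers are arbitrary illustrations.

```python
def exponential(steps, rate=2.0):
    """Capability grows by a fixed factor each step: x = rate ** steps."""
    x = 1.0
    for _ in range(steps):
        x *= rate
    return x

def double_exponential(steps):
    """Each generation of tools builds the next, so capability is
    applied to itself: x -> x * x each step, giving 2 ** (2 ** steps)."""
    x = 2.0
    for _ in range(steps):
        x = x * x  # the tool's output becomes the next tool's input
    return x

# After only a handful of steps the two regimes diverge enormously.
print(exponential(5))         # 32.0
print(double_exponential(5))  # 4294967296.0  (2 ** 32)
```

The point of the sketch is only the shape of the curves: under recursive self-application, the gap between the two regimes widens faster than any fixed-rate process can close.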
Like all tools, this in itself is neutral, it is what we do with it that matters.
Adam Smith identified long ago, using the example of pin production, that cooperative labour was far more productive than individual labour.
Our ability to automate any process has taken that to an entirely new level – as you noted in episode 2, the class of the "anti-rivalrous" is growing exponentially, with profound implications.
We need this new level of automation to allow us to fix the many relationships we have broken through the non-scaling dependencies built into our existing systems – dependencies arising from many different sorts of founder effects, some of which you covered well in the Tainter-plus model (though our situation is far more dimensional than that model implies, it is definitely part of the picture).
Our biggest issue now is conceptual.
We are so used to thinking of money as a useful measure of value that it is very hard for many to think beyond it.
We are also doing something that appears never to have been done before (at least not on this planet): instantiating a new level of "coherence" (to use your term, though to me it doesn't map well to what is happening) from agents that are more or less self-aware. And we are doing so in a context where we are also instantiating agents that are profoundly more competent (though not yet as energy efficient) than we are.
We need the technology to solve the many profound problems we have created with the competitive market systems that have dominated our thinking for the last few hundred years.
But unleashing that technology into mindsets still conditioned and bounded by the implicit assumptions of market value is a systemic guarantee of existential-level crisis.
This is seriously complex territory.
It is not like the simple strategic situation of a four-layer model; it is much more like sets of strategic ecosystems.
There can be no guarantees in the face of such complexity and fundamental uncertainty; but there can be reasonable levels of confidence if we look deeply at the strategic systems in biology.
We now have the real ability to instantiate new levels of degrees of freedom.
The need for labour to maintain our systems is disappearing.
Individuals will be able to have real freedom, many for the first time in their lives, and that will come with responsibilities, and there will be limits (but they will be reasonably generous by most standards). And that will be profoundly unsettling for many individuals – it will be entirely novel, and novelty can be dangerous, and is always unsettling in many aspects.
So many of the ideas you promote here, about distributed governance and distributed agency in particular, are essential aspects of stable solutions; and it goes very much deeper.
The idea of satisfiers is one aspect of a far more highly dimensional structure of valences generally, particularly in the context of the evolution of sets of context-sensitive valences vs the space of all possible valences that allow a reasonable probability of survival.
Part of that is seeing that many of the sets of valences evolution has instantiated within us may not be particularly well suited to survival in our current reality, while others may be deeply relevant in ways that very few can yet begin to "see".
Another part is seeing that the very idea of “Truth” is a simple approximation of something profoundly more complex that contains fundamental uncertainty at every level.
The idea of “Truth” must be relaxed, to something more closely approximating “contextually useful approximation” if we are to get any sort of agreement across systems that vary in complexity by orders of magnitude.
I align with many aspects of your approach, and have talked with Daniel a bit about some of these issues.
I agree this subject matter is important – and it is difficult to present some of these ideas in ways that have any reasonable probability of being interpreted as intended by anyone who has not spent several years interested in evolution, complex systems, coding, and AI.