[ 14/June/22 ]
I’ve had my share of arguments with Robin over the years, and I give him full respect for his knowledge of economics; but his knowledge of biology, and particularly of the strategic underpinnings of evolution, is woefully inadequate and simply wrong on multiple levels.
Life is complex – really complex.
Human life is the most complex and the most cooperative life on this planet (which also happens to mean, “that we know of at this time”).
While there are certainly many aspects of competition that are eternally part of any evolutionary system, in terms of the emergence and survival of new levels of complexity it is true to say that their emergence is empowered by cooperation, and their long-term survival is predicated on the long-term maintenance of that cooperation; and that demands an evolving ecosystem of cheat-detection and mitigation strategies.
And in my understanding, one of the most powerful characterisations of life is “search” for survivability across the space of possible systems and possible contexts. That leads to the most general case of life possible: systems capable of real-time adaptation to changes in context, and of recursive search through systemic and logical spaces for novel solutions to identified problems, and for novel opportunities. And when one delves into the theory of search, the most efficient search possible for a fully loaded processor is fully random search (which leads to an interesting set of conjectures about how neural networks as necessarily biased as human ones are may approximate random search in different classes of contexts, and how evolution may have embodied such things in our neurochemistry).
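A minimal sketch of that search-theory trade-off, assuming an unstructured search space (all names here are illustrative): with no structure to exploit, a memoryless random probe needs no bookkeeping of visited states, while a systematic scan buys roughly a factor-of-two advantage only by carrying state and assuming a static target.

```python
import random

def random_probes(space_size: int, target: int, rng: random.Random) -> int:
    """Probes until a uniformly random search (with replacement, no memory) hits target."""
    probes = 1
    while rng.randrange(space_size) != target:
        probes += 1
    return probes

def sequential_probes(target: int) -> int:
    """Probes for a fixed left-to-right scan (requires remembering where you are)."""
    return target + 1  # 0-indexed target is found on probe target+1

rng = random.Random(42)
N, trials = 1000, 2000
avg_random = sum(random_probes(N, rng.randrange(N), rng) for _ in range(trials)) / trials
avg_sequential = sum(sequential_probes(rng.randrange(N)) for _ in range(trials)) / trials

# Memoryless random search averages ~N probes; the stateful scan averages ~(N+1)/2.
# Random search pays roughly a factor of two for carrying zero state -
# the trade that the "fully loaded processor" argument turns on.
print(f"random ~{avg_random:.0f}, sequential ~{avg_sequential:.0f}")
```

The point of the sketch is only the shape of the trade-off, not the constants; real search spaces with exploitable structure change the picture entirely.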
Unusually, I find myself getting really annoyed and frustrated by the multiple instances of oversimplification of the truly complex leading to entirely inappropriate conclusions – at least in the first 80 minutes of the interview.
In the latter part of the interview I find Robin is often at his superb best – but he still oversimplifies the constraints required to get long-term survivable outcomes from markets (particularly betting markets).
Agree with Robin that most ideas become more obvious over time, as information accumulates. So agree that Einstein deserves some celebration for doing it first, and others would have done it later if he had not. That is clearly evident when one views life as “search” (eternally).
Agree completely that the lesson of AI is the view-quake that perception is hard!!!
But the thing from biochemistry is that life is complex – deeply complex, and subtle.
The chunkiness of AI is defined by the biochemistry of the computational systems of brains. I developed one solution in 1974, but some things are too dangerous to release.
Around 3:17:20 Robin speaks about emulating the power of the cells of the brain – that is an inadequate model. What one needs to do is emulate the computational systems of the brain. Some of those are at the cellular level, some at the synaptic level, and some at the level of the protein structures within the synapse. Computation occurs at all of these levels (and at others, within the body and its various “organs”). We are the embodied whole of that. Getting some feel for the computation possible in the quantum aspects of protein structures is fundamental to getting a reasonable handle on just how complex we are. I started from biochemistry in 1973, and the conceptual sets available from biochemistry have increased substantially in the intervening years. Search across the space of patterns through time (at scales from a millisecond to 500 ms) is where much of the action happens in human brains, and it occurs at the molecular level – it is both subtle and powerful, and the search space coverable is vast – of the order of 10^50 patterns per second. And what we get to notice is the differences between expectation and delivery (at least at some scales, in some contexts – and the vast bulk of it is subconscious, necessarily).
3:24:20 The power of markets in complex spaces; yes, provided certain conditions are met. If agents do not have reasonably equivalent tokens of value with which to engage in markets, then what markets solve for gets skewed towards whoever holds the most tokens; that tends towards a leverage spiral, and can lead to systemic failure.
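A toy sketch of that skew, assuming (purely for illustration) that a market price forms as a token-weighted mean of agents’ beliefs – real price-formation mechanisms are far more complex, and every number below is hypothetical:

```python
def weighted_price(beliefs, tokens):
    """Toy model: 'market price' as the token-weighted mean of agents' beliefs."""
    total = sum(tokens)
    return sum(b * t for b, t in zip(beliefs, tokens)) / total

beliefs = [0.2, 0.2, 0.2, 0.2, 0.9]          # four modest holders vs one whale
equal_tokens  = [100, 100, 100, 100, 100]    # equivalent tokens of value
skewed_tokens = [10, 10, 10, 10, 960]        # same total, concentrated holdings

print(weighted_price(beliefs, equal_tokens))   # ~0.34 - near the majority view
print(weighted_price(beliefs, skewed_tokens))  # ~0.87 - tracks the whale's belief
```

Same beliefs, same total tokens; only the distribution changes, and what the “market” solves for moves with the concentration.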
At the larger scale, the scale of survival on the very long term, there are existential level risks created by the short term heuristics embodied in our neural networks, that are no longer appropriate to the scale of complexity embodied in our systems.
That is not getting sufficient attention.
No market can overcome that inherent bias, in and of its own internal incentive structures.
And yes – Markets are very powerful in some contexts, and deliver existential risks in others, and we are in one of those transition zones. This is deeply NOT simple!!!
The definition of rational is deeply complex.
Aumann’s agreement theorem is predicated on shared priors.
When individuals have been using random search across vast search spaces, there can be very little in the class “shared priors” – so in a sense the very concept of “rationality” fails, as there is no stepwise “cause and effect” linkage. What there is are jumps to concept sets that manage to pass enough tests to be worth keeping in the toolkit, and there is ongoing search. One can imagine that there must be a stepwise “rational path”, but one does not have the time to search for it – too much else needs doing.
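A minimal illustration of how much work the shared-priors precondition does (a toy Beta-Binomial coin model; the numbers are hypothetical, and this is only the precondition, not the theorem’s common-knowledge machinery):

```python
from fractions import Fraction

def posterior_mean(prior_a: int, prior_b: int, heads: int, tails: int) -> Fraction:
    """Posterior mean of a coin's heads-probability under a Beta(a, b) prior."""
    return Fraction(prior_a + heads, prior_a + prior_b + heads + tails)

# Shared Beta(1, 1) prior; agents pool their private evidence (5H/1T and 1H/5T):
shared = posterior_mean(1, 1, 5 + 1, 1 + 5)

# Different priors - agent X starts from Beta(9, 1), agent Y from Beta(1, 9) -
# yet both condition on exactly the same pooled evidence:
x = posterior_mean(9, 1, 6, 6)
y = posterior_mean(1, 9, 6, 6)

print(shared)  # 1/2  - agreement once all evidence is common
print(x, y)    # 15/22 vs 7/22 - same data, persistent disagreement
```

With a shared prior, pooled evidence forces agreement; without one, identical evidence leaves the agents apart – which is exactly why “very little in the class shared priors” undercuts the theorem’s applicability.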
Totally agree with Robin that we do need to actually try stuff. We need multiple instances of some version of “safe to fail” experimentation – all institutions, all levels, all systems.
And this is the antithesis of any level of hegemony.
Completely align with Robin’s advice on life course.
Totally agree with Robin about at every moment having the option of keeping going, and would add that I would like to tend toward increasing function and increasing resilience with time, rather than the decline I currently experience.
The final argument of Robin’s about competition is straight out of the heart of economics, and makes sense in that context, but it fails in the wider and much deeper context of biology and the systemic strategic constructs of the evolution of complexity.
In that context, yes, there must eternally be competitive aspects, and any level of competition that is not firmly based in cooperation, necessarily self terminates.
This is my prime candidate for the “Great Filter”.
The evolutionary pressure to select and bias for simplicity is entirely predictable, and if not seen for what it is, it prevents us from even perceiving the levels of complexity present in life. It is like a recursive form of confirmation bias deeply embedded in our neural networks.
Competition, without a cooperative base, self terminates – necessarily, in every class of logic I have explored.
Tunicates give us ample evidence that brains are there primarily for navigation.
Embodiment is an essential aspect of human cognition, and intelligence.
When one views life as recursive levels of search across strategic spaces for survivable systems, it should not be a surprise that most systems fail. The number of systems that are not survivable is vastly greater than the subset that is. It is somewhat analogous to Wolfram’s ruliad, yet different.