Email to Daniel S after watching YouTube Making Sense of Sensemaking: Daniel Schmachtenberger, Jamie Wheal, Jordan Hall – Top Chat Replay
Enjoyed the conversation between you three.
I think Jordan has it seriously wrong in a really deep sense.
Sure, all the tools and ways of thinking he mentions are useful and worth investigating and building competencies with, but they are not the issue.
Coherence is not the issue, though it is lovely when achieved.
Dunbar’s number seems to be about the sort of stability that can maintain cooperation. It reflects limits on how accurately we can assess and recall the behaviour of others – whether the strategies they are using are cheating or cooperative – at every level, and there are multiple levels (both conscious and subconscious).
Getting new levels of cooperation to scale is about similarly enabling accurate identification of individuals and strategies at multiple levels.
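The combinatorics behind that limit are simple but brutal: the number of pairwise relationships to track grows quadratically with group size. A toy sketch (group sizes are illustrative):

```python
# Distinct pairwise relationships implied by a group of n members: n*(n-1)/2.
# Around Dunbar's number (~150) that is already ~11,000 links to keep track of.

def pairwise_links(n: int) -> int:
    """Number of distinct pairs in a group of n members."""
    return n * (n - 1) // 2

for n in (15, 50, 150, 1500):
    print(f"group of {n:>5}: {pairwise_links(n):>9,} pairwise relationships")
```

Scaling cooperation past that point means finding mechanisms that do the identification work our individual memories cannot.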
Putting anything like that into the common “cloud” is not safe, it is too prone to single point of capture or failure.
We can have and use the “cloud” and we must also retain independent data and computation that is isolated from cloud memory and identification – if we are to achieve stability and security.
Independence is every bit as important as cooperation when it comes to scaling cheat identification and removal.
We have no shortage of external threats with which to stabilise cooperation, that is not the issue.
We have very real issues in the fact that most people cannot deal with the real threats that exist. The fact that my wife and two children all suffer from severe anxiety is probably a function of my communicating the threats I see, coupled with my inability to create in them a confidence that those threats have mitigation strategies. Their minds don’t seem to be able to scale projections of social and technological trends exponentially the way mine can and does.
So the current global approach of using climate change is a useful one in this sense: it is real enough to invoke a need for cooperation, but not so immediately terrifying as to put people into disabling anxiety.
Most people freak out completely when you start getting explicit about large scale but infrequent geological and cosmological risk factors, let alone the technological, biological and sociological suites of existential risks.
What needs to scale is global cooperation, across multiple independent levels of networks.
That demands evolving ecosystems of cheat detection and removal systems, and that gets really complex as levels of cheating strategies infiltrate levels of networks.
I have on a few occasions managed to push that to about 12 levels of abstraction – but confidence degrades as one necessarily degrades resolution to get there, and the ability to communicate much at all past 3rd level abstraction drops to close enough to zero.
So some things we just need to keep personal.
I am clear about a few things that are required for scalability:
Individual sapient life must be value #1 – that includes all entities capable of conceiving and naming themselves as agents in a game space – human and non-human, biological and non-biological.
Freedom of such individuals to do whatever they responsibly choose must be value #2 – where responsibility requires that all agents take reasonable steps to assess and mitigate significant risk to the life or liberty of any other as a result of choices made, and that if such risk is inadvertently created, then all reasonable steps are taken to remove it. Perfect prediction of anything is impossible, so we must all accept that some significant fraction of our time will be involved in cleaning up messes we were part of making. That will eternally be the case.
So that also resolves into a demand for ecological and social responsibility – which becomes an eternal exploration of ever expanding sets of boundaries required for the existence of complex systems.
For those of us that are capable of dealing with things like exponential trends, and multiple domains of existential risk, we need to do so in ways that we can create workable solutions across multiple other existing domains (many of which are incapable of dealing with what we are dealing with, but can simultaneously do things we cannot – eg my wife is a concert level pianist, something I will never be).
One idea that we must find workable solutions for, is the fact that we need exponential technology to solve quite a few of the existential level risks, but that producing such technology breaks the scarcity based system of valuation currently embodied in markets and money; and that the market system currently performs many levels of essential distributed functions (even as it is simultaneously a haven for cheating strategies). Some sort of universal basic income seems to be a useful transition strategy, but I don’t see it as any sort of long term solution. The next 100 years or so it could be really useful.
We need indefinite life extension – to allow us to develop reasonable levels of appreciation for the levels of risk and levels of mitigation strategies and technologies required.
We similarly need exponential tech.
Either of those in a competitive system leads to rapid collapse.
Thus we need global cooperation as a precursor – multiple independent levels of networks.
So to me, Jordan just seems to miss the bus.
“The Dao that can be named is not the Dao” is one of the best practical descriptions of complexity in infinite domains that exists – it doesn’t need to be any more complex than that.
Getting comfortable with eternal uncertainty is a prerequisite of getting to grips with complexity.
Seeing that there are infinite domains of possible logics, of which binary logic is but the simplest, is a step on a path, and one many logicians and mathematicians cannot yet take.
Seeing that evolution does not require certainty at any level, just useful degrees of fidelity, is something classical logicians can have a great deal of difficulty with.
Appreciating that we all live in our own personal virtual realities, even as it seems very probable that we also live in an objective reality (it’s just impossible for us to have any objective access to that reality, as by definition our only link to it is via our personal virtual reality), is too much for many to get.
So we have issues.
We need to have some individuals using all the tools that Jordan speaks of, but in and of themselves, they are not the solution.
The solution seems to me to be a kind of humility and a kind of duty deeply woven into our understanding of freedom; and an acceptance of the absolute need for universal cooperation – if any of us are to have any significant probability of survival.
How to do that, without sending the vast bulk of the population into anxiety and breakdown???
I haven’t done so well in my own family, and that fact weighs very heavily upon me, even as I keep on going.
I have to maintain a low discount rate on far future benefits (which with my understanding of exponential technologies is not that difficult), and be willing to accept whatever shows up between now and then; and to attempt to be responsible (to the best of my limited abilities and knowledge and understandings).
Game B groovy coordination is not the game. Game B is finding ways to cooperate at scale where agents employing cheating within group can be reliably detected and repatriated to cooperation (that cannot work in a context where market values dominate – so we must go beyond scarcity, beyond markets, beyond money, into technologically empowered abundance).
Human beings are examples of this at scale, from the cellular perspective. We each have thousands of times as many cells as there are people on the planet, and for the most part we keep cooperative systems running, and the whole relies on many levels of diversity (as you note so well later in the conversation).
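The detect-and-repatriate idea has a well-known game-theoretic analogue: tit-for-tat with forgiveness in an iterated prisoner’s dilemma, where a detected defection is sanctioned once and cooperation is then offered again. A minimal sketch (payoffs and the 10% cheating rate are illustrative assumptions, not anything from the conversation):

```python
import random

# Standard prisoner's-dilemma payoffs, keyed by (my move, their move).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def forgiving_detector(opponent_history):
    """Cooperate unless the opponent defected last round: sanction a
    detected cheat once, then offer cooperation again (repatriation)."""
    if opponent_history and opponent_history[-1] == "D":
        return "D"
    return "C"

def occasional_cheat(rng, p=0.1):
    """Defects with probability p, otherwise cooperates."""
    return "D" if rng.random() < p else "C"

rng = random.Random(42)
a_hist, b_hist, score_a, score_b = [], [], 0, 0
for _ in range(1000):
    a = forgiving_detector(b_hist)
    b = occasional_cheat(rng)
    pa, pb = PAYOFF[(a, b)]
    score_a += pa
    score_b += pb
    a_hist.append(a)
    b_hist.append(b)

print(f"detector score: {score_a}, occasional cheat score: {score_b}")
```

Despite the cheating, the large majority of rounds end up mutually cooperative, which is the point: detection plus forgiveness stabilises cooperation without permanent exclusion.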
Around 51:50 Jordan says dismiss everything post teen – [to which I say DANGER !!! BULLSHIT!!!!! ] then he goes on to say “all the code you have been running that partitions reality into prefabricated ‘True’ and ‘Good’”, with which I can agree in a real sense; but I started doing that seriously at about age 9, and have been seriously exploring levels of that for over 50 years, and seem to be a very long way from where Jordan’s conception of “normal” is. I heard Eric Weinstein recently use the phrase “normal is the entire distribution, not the mean”, which is something I have been consciously working with for over 4 decades.
When one starts embodying that across structures with thousands of dimensions, it demands acceptance and respect for diversity that few seem to appreciate, and I am fairly certain Jordan is missing a deep and essential set of aspects to that.
Around 54:00 Jamie talks of the evolution of our sense of self, and seems to get some essential parts of a very complex suite of parallel evolving systems quite well poetically expressed.
At 56:00 Jordan says “Obviously we don’t know how, otherwise we already would have”, which to me is wrong. The how is quite simple, and in a sense has been known to many traditions for a very long time, embodied within them as Jordan Peterson would say. The how isn’t the issue.
The issue is providing a suite of “stories” for all the many different levels of storytelling apes that exist (all the many levels of homo narrans), that produce a workable coherence.
If there is one key word to that, it is “cooperation”.
And that isn’t enough, it needs supporting structures.
And those supporting structures come in “responsibility, respect, diversity, creativity” applied across every level of paradigm, tradition, poetry and logic.
The value of the individual must be paramount, and all individuals must acknowledge their responsibilities to the collective. It is not an either or thing. We are all both, and so much more.
56:40 – Jordan – things more right than wrong – you – “generator function more than output”.
Like most of what you say over the following 10 mins.
Jordan – speaking of “the third”. The third can be a useful analogy in some senses, but doesn’t seem to me to be what is actually going on. Against an unstructured search space, random search is about as fast as any blind strategy can be. But how do we know we have found what we want? What are the patterns in use? What are the levels of valence?
We are each far more complex than we can possibly consciously appreciate. We each have our subconscious systems randomly (at least to some degree) searching our own knowledge spaces and providing the ecosystem from which speech emerges. Speech is vastly more subconscious than conscious, however conscious we are.
So in this sense, given the vast amount of processing done by others, giving our best approximations to truth opens possibilities for others.
We bring to the network and the entire network gets to benefit.
And that can be broken and subverted at many different levels – and we each need to cultivate our own awareness of that, at multiple levels.
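On the random-search point: the defensible version of the claim is that against a space with no exploitable structure, uniform random probing needs on average N/k samples to hit one of k relevant items, and no blind strategy does better. A toy sketch (the sizes are illustrative assumptions):

```python
import random

# Blind search model: sample uniformly from a space of N items until one
# of k "relevant" items is found. The probe count follows a geometric
# distribution with mean N/k, independent of any ordering of the space.

def probes_until_hit(rng, n_items, targets):
    """Count uniform random probes until a target item is drawn."""
    probes = 0
    while True:
        probes += 1
        if rng.randrange(n_items) in targets:
            return probes

rng = random.Random(0)
N, K = 10_000, 10
targets = set(range(K))  # which items count as "relevant" (arbitrary here)
trials = [probes_until_hit(rng, N, targets) for _ in range(2000)]
print(f"mean probes: {sum(trials) / len(trials):.0f} (theory: {N // K})")
```

The useful point for the subconscious-search analogy is that random probing needs no index at all, which makes it a plausible fallback strategy for spaces whose structure is unknown.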
1:09:35 Jordan –
1:19:06 Jamie – accurately describes how we gained abilities – the survivors made it, the rest didn’t. We have a lot of such information encoded now. But right now our IP laws are making much of that unavailable to many, and pose risk to many others.
1:23:10 – Hi Daniel – thank you for acknowledging me/us.
1:24:30 – Spot on.
1:26:50 – we can actually rationally get there. [Kind of, sort of, not really.]
1:29:00 – Jordan asks about the “feeling” of it. What he seems to fail to grasp (or perhaps just acknowledge) is that “feelings” seem to be, by definition, some set of proxies for things that worked in the contexts of our past, and as such are not necessarily adapted to our exponentially changing present and future. And that is a really tricky and dangerous idea. Because some feelings will be relevant, and some less so. So our future seems to rest in us each developing a conscious awareness of our own feelings, and a rational awareness of their likely utility in any specific context, and being able to dance in that space with respect, humility and grace.
1:30:00 Jamie – the patience and tolerance to let everything unravel in order for the new form to emerge. Which is great, so long as nothing existential unravels. Our problem is that we seem to be moving into spaces of existential risk that cannot be spoken of publicly without becoming self-fulfilling prophecies in a very real sense.
1:36:36 Rule Omega – If you say something that sounds ridiculous, I should give the benefit of the doubt. [Agree – with you and Eric Weinstein].
Successive approximations removing noise, looking for signal.
1:40:40 – Agree completely. Perspective is necessarily incomplete.
1:44:00 – Jordan on knowing – I’m a kinda – sorta – almost. To me, clearly, we all rely on our personal subconscious systems, which are primarily hacks at some level, some simplification of the complexity present that works in the context. Questions: Are we still in that context? Is that hack still useful? Is there something better for this context? Are there contexts present that are not even visible to the hack sets currently dominating my neural networks?
We need to be responsible, recursively, all levels, for that – to the best of our limited abilities, in the full knowledge that we will fail from time to time; but we can’t let that stop us from acting where it seems necessary to act, even if we are in a minority of 1 in doing so (which can be really hard for social apes to do).
1:44:30 – you – new structures required – agreed – 100%
1:54:30 – Jamie – great questions.
1:57:30 – open source.
1:58:20 – Safe to fail probes – agree – including safety to individuals and groups.
2:00:40 – No such thing as protocols for group coherence – not sure that is correct. Many levels of such things seem to be real. And in a sense, coherence is not required. Transition, from fundamental competition with cooperative substructures to fundamental cooperation with competitive games in some restricted game spaces – seems to be what is actually required.
Sure, encourage people to explore states and stages; and from my experience, there is an infinite domain of such things – there is no linear path, no dogma, no right way. Making out that it is fundamental to our survival is false. It isn’t. Cooperation, all levels, yes – certainly that. Can we instantiate it? Yes – I believe all the game theoretic tools are present on the game board. Do I favour Snowden’s safe to fail approach? Most certainly. And we do not have a lot of time. Patterns play out on their own timeline, irrespective of what you or I might like.
2:21:00 – nature of desire [valence]. Becoming vs being – [seems unreal to me – more like “being becoming”]
2:31:20 – mytho poetic catalytic mimetic – I kind of like – points to something.
2:33:30 – exponential tech – strongly align on much of that. We need it to solve problems, and if we cross that barrier in a competitive context, survival probabilities are very low – Game A (and even most current instantiations of Game B) fail in existential level ways.
2:34:20 – enlightenment or bust (cute)
2:38:00 – best thing Jordan has said all interview – closest to reality.
As a final followup, as someone who has put a lot of time into Landmark (nothing in the last decade, but a great deal 25 years ago) – I have no shortage of arguments with what they do, and there is a certain degree of truth in it. And there are certainly dangers; I have seen quite a few “train wrecks” come out of it.
And I did learn very valuable things doing their courses – not as dogma, but as “practice”, heuristic and observation of the human condition that I was unlikely to experience any other way. I don’t agree with all of their explanatory framework, but there is no denying the reality of what happens.
So – yes – great conversation – helped me clarify my thinking.
Welcome any further insights you might have.