Quite a few themes in here that have fascinated me for over 40 years.
Q1 – To what extent is human thought controlled by the values of a market-based system of exchange? Markets are a great mechanism for allocating scarce resources, but by definition they assign a non-zero value only to scarcity: anything universally abundant has a market value of zero. That characteristic places market values in direct conflict with individual human interest in a world where automation and robotic technology deliver an exponentially increasing set of goods and services that could be provided in universal abundance. And before anyone says "but everything has a cost", consider oxygen in the air – arguably the single most important thing to any human being, yet of zero market value precisely because of its universal abundance. We have the technical ability to develop systems that would deliver housing, food, water, energy, education, information, communication, sanitation, transportation and medical services in universal abundance, yet doing so would destroy the foundation of the current economic system, and the social systems derived from it, and will therefore generate resistance – even though such abundance is clearly in the long-term self-interest of every individual (even the wealthiest within the current system).
Q2 – To what extent is human behaviour controlled by deliberately maintained ignorance of the increasing role of cooperation in complex evolved systems like ourselves? Looked at from a game-theoretic perspective (as first popularised by Axelrod), it is clear that all major advances in the complexity of living systems are characterised by the emergence of new levels of cooperation; and since raw cooperation is always vulnerable to cheating strategies, cooperative strategies must adopt secondary strategies that work to prevent cheating, or perish. Axelrod demonstrated a simple class of such strategies (the retaliator class, of which tit-for-tat is the best-known member), Elinor Ostrom showed how variations on that class have worked over long periods in human societies, and Wolfram's work suggests there may in fact be an infinite set of classes of such strategies in the deeper realms of strategy space.
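The retaliator dynamic Axelrod described can be seen in a few lines of iterated prisoner's dilemma, a minimal sketch using the standard Axelrod payoff values (mutual cooperation 3, mutual defection 1, successful cheat 5, exploited cooperator 0); the strategy functions and round count are illustrative choices, not anything from the interview:

```python
# Minimal iterated prisoner's dilemma: naive cooperation is exploitable,
# but a retaliator (tit-for-tat) limits what a cheat can gain.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_cooperate(my_hist, their_hist):
    return 'C'

def always_defect(my_hist, their_hist):
    return 'D'

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's previous move.
    return their_hist[-1] if their_hist else 'C'

def play(a, b, rounds=100):
    """Play `rounds` iterations and return the two cumulative scores."""
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        ha.append(ma); hb.append(mb)
        sa += pa; sb += pb
    return sa, sb

print(play(always_cooperate, always_defect))  # → (0, 500): raw cooperation is farmed
print(play(tit_for_tat, always_defect))       # → (99, 104): the cheat gains almost nothing
print(play(tit_for_tat, tit_for_tat))         # → (300, 300): protected cooperation wins overall
```

The point of the sketch is the middle line: a cooperative strategy with a built-in anti-cheating response gives up only the first round, after which defection stops paying.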
Q3 – If we value individual life and individual freedom, doesn't that compel us to go beyond market-based economic systems into a systemic space that empowers everyone to do whatever they responsibly choose? (Responsibility here is defined as taking reasonable steps to mitigate the adverse effects of one's actions on the life and liberty of others, which entails derivative responsibilities to care for the environments that support us all, and to accept and cater for the exponentially expanding diversity that must result from such freedom.)
Q4 – To what extent do we allow the information content of our past, as expressed in our genetic and cultural dispositions to feel pleasure or displeasure in specific conditions or actions, to determine our future? Or put more simply, why should happiness be important? In a time of exponential change, how likely is it that our past, particularly our deep genetic or cultural past, will be a good predictor of our future? I strongly suspect that the reliability and utility of such systems is decreasing exponentially with time.
Q5 – David Snowden has developed the SenseMaker app, an approach with many similarities to this one; but rather than using the measurements taken to localise to a single outcome, he uses them to display the probabilistic landscape of system response to any set of parameters. This gives an in-depth view of the sorts of system drivers that are localising in any population, and allows attention to focus on outliers rather than on the well-explored middle ground. Does this system do such analysis, and simply not expose it to users?
If not, could an interface be developed that allows participants to view whatever dimensions interest them?
Q6 – Is there any intention to explicitly include anything similar to Snowden's Cynefin framework for the management of complexity? It is a highly simplified set of heuristics, but it gives any participant working in a complex environment a very useful set of tools.
Q7 – Has there been any explicit consideration given to the nature of the illusion of knowledge? The simple idea that things are true or false, that there is hard causality in reality, is one of the simplest possible distinction sets, but it does not actually agree with modern observations of the physical nature of reality. Quantum mechanics (QM), Heisenberg uncertainty and Gödel incompleteness each impose a different level of uncertainty that appears to be fundamental. The fundamental stuff of which we are made seems to be stochastic (random), but within probability distributions. Because human perceptions are so slow – at best of the order of a hundredth of a second – the smallest unit of perceptual time spans on the order of 10^41 Planck units of time, so the probability distributions become fairly solidly populated and give a good working approximation to hard causality in practice, in most situations. And that fundamental randomness seems to recur at every new level.
Is there any intention to explicitly build this into the decision-making fabric?
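The perception-versus-Planck-time ratio above is a one-line calculation; this check assumes the approximate CODATA value of the Planck time (~5.39 × 10⁻⁴⁴ s) and a hundredth of a second as the perceptual floor:

```python
# Rough check of how many Planck times fit inside one human percept.
planck_time = 5.39e-44   # seconds (approximate CODATA value)
perception = 1e-2        # ~a hundredth of a second, the claimed perceptual floor

ratio = perception / planck_time
print(f"{ratio:.2e}")    # on the order of 10^41 Planck times per percept
```

So even the briefest human percept integrates over roughly 10^41 fundamental time steps, which is why stochastic micro-events average out into apparently hard causality at our scale.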
Summary – The initial characterisation of polls and polarisation is exactly counter to what complexity theory tells us. Averaging independent expert estimates is a very powerful tool, but it is powerful only to the extent that the judgements are independent. If the judgements are not independent the effect is lost, as there is a strong tendency to cluster around the first estimate given: social interaction overrides independent judgement (the Social Influence Bias Rosenberg later acknowledges).
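The independence requirement can be shown with a toy simulation. Everything here is invented for illustration – the true value (100), the noise level, the 70% anchoring weight, and the loud-but-wrong opening estimate of 140 – but the mechanism is the Social Influence Bias described above:

```python
# Toy simulation: averaging works for independent estimates,
# and fails when estimates anchor on the first public guess.
import random
random.seed(1)

TRUE_VALUE = 100.0
N = 1000

# Condition 1: each "expert" estimates independently, with sizeable noise.
independent = [random.gauss(TRUE_VALUE, 20) for _ in range(N)]

# Condition 2: a confident, public, wrong first estimate; everyone
# after it drifts 70% of the way toward that anchor (social influence).
first_guess = 140.0
anchored = [first_guess] + [
    0.7 * first_guess + 0.3 * random.gauss(TRUE_VALUE, 20)
    for _ in range(N - 1)
]

err_independent = abs(sum(independent) / N - TRUE_VALUE)
err_anchored = abs(sum(anchored) / N - TRUE_VALUE)
print(f"independent crowd error: {err_independent:.2f}")
print(f"anchored crowd error:    {err_anchored:.2f}")
# The independent average lands very close to 100; the anchored
# average stays stuck near the first guess, regardless of crowd size.
```

Adding more participants cannot rescue the anchored condition, because the anchor is a shared (correlated) error term that does not cancel under averaging.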
In the later part of the interview Rosenberg seems to keep these aspects clearly separate, but in the early part he conflates three distinctly different effects into one, and loses clarity as a result:
1. the ability of the average of independently polled experts to deliver judgements that are very accurate;
2. the ability of groups who take the time to uncover common values, build mutual understanding and respect, and share their expert knowledge, to reach decisions that deliver high utility to all; and
3. the tendency of crowds in emotionally charged situations to fall to the lowest common denominator of the oldest evolved social systems, invoking low-level reactions from all participants.
Certainly there is power in groups negotiating outcomes in complex situations, but that is a completely different domain of both judgement and complexity from the "wisdom of crowds" or from crowd behaviour; confusing or conflating the three very different domains of process does not help anyone.
I am all for consensus decision making in groups, and it takes a long time to build both the trust and the understanding that allow such processes to work effectively. It requires a long time for the information and value sets of all participants to emerge and be understood (in so far as such understanding is possible) across all the different paradigms present. I have spent the last 10 years in such a process in coastal fisheries management. It took 5 years of monthly meetings to get to the point where we had a shared set of values and a shared set of working understandings that allowed us to make real progress towards specific strategic outcomes.
Crowds as groups of people in emotionally charged situations tend to be very simple and very dangerous entities – capable of destruction on a massive scale. To be avoided.
Rosenberg acknowledges Social Influence Bias, and in the later parts of the interview acknowledges that participants in a collective decision have to be experienced and knowledgeable about some significant aspect of the subject. This is true, and it is not at all like his earlier claims about the behaviour of swarms and crowds.