
The Death of the Invisible Hand: Why the Narrow Pursuit of Self-Interest Always Fails

Regulation comes naturally for small human groups but must be constructed for large human groups.

Hi David,

Some good points in your argument, and some that are simply wrong. On the whole, it is guilty of oversimplifying what is a very complex situation.

The major categories are:
The context of self interest;
Levels of complexity in human beings, and the trap of oversimplifying (Goodhart’s Law);
Strategic interactions and context sensitivity;
Stabilising strategies do not have to involve regulations as such;
Discount rates on future benefits vs exponential technologies;
Rational choice vs heuristic hacks, and how the two mix and evolve;
Where markets and automation come into fundamental conflict;
The myth of rules in complex systems.

There is nothing wrong with self-interest, provided it operates in a context that takes account of the exponential increases in future benefits coming from exponential technology, applies a relatively low discount rate to those future benefits, and includes a reasonable expectation of living a very long time. In such a context, it is always in one’s long-term self-interest to cooperate with others in responsible ways, in both social and ecological contexts, while retaining one’s liberty and independence (in as much as either of those things actually exists).

Human beings are really complex. It seems that we have about 20 levels of complex systems present in us. About half of those levels involve mostly physical systems and are influenced mainly by genetics and the physical environment; the other half are mostly software systems, influenced mainly by the context of other software systems; and all of them interact, both within and between levels.

We each contain many instances of Turing-complete computational systems, but evolution doesn’t care about Turing completeness; it deals only with differential survival in the contexts of the costs and benefits actually present. Any heuristic that works in practice often enough to be more effective in a population than a more flexible but more computationally (energetically) costly alternative will be selected for (hence, in a sense, the desire for simplicity we see at every level, including in this forum).

Which leads us to Goodhart’s Law: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.” That is another way of saying that when dealing with real complexity (like human beings), any simplifying heuristic, taken out of the context in which it evolved, will fail. And any change in the system, like using the heuristic for control purposes, changes the context sufficiently to invalidate the heuristic. Simple heuristics never work for long in complex systems where context is changing.
We live in times where many levels of context are changing exponentially.
No simple heuristics are going to be useful consistently.
We need to become very comfortable dealing with real complexity and profound real uncertainty.
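As a toy numerical illustration of Goodhart’s Law as quoted above (entirely my construction, with arbitrary illustrative parameters): a metric tracks true value while it is merely observed, and the correlation weakens sharply once agents are rewarded on the metric itself and can game it.

```python
import random

# Goodhart illustration: a statistical regularity holds while a metric
# is merely observed, and degrades once it is used for control.
rng = random.Random(0)

def correlation(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Before control: the metric is an honest (noisy) readout of true value.
true_value = [rng.gauss(0, 1) for _ in range(2000)]
metric = [v + rng.gauss(0, 0.5) for v in true_value]
print("observed only:", round(correlation(true_value, metric), 2))   # ~0.9

# Under control: agents inflate the metric directly, diverting effort
# away from true value (gaming the measure).
gaming = [rng.uniform(0, 3) for _ in range(2000)]
gamed_metric = [m + g for m, g in zip(metric, gaming)]
gamed_value = [v - 0.5 * g for v, g in zip(true_value, gaming)]
print("under control:", round(correlation(gamed_value, gamed_metric), 2))  # ~0.4
```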

You correctly identify that cooperative systems require attendant strategies to prevent invasion by cheats – game theory 101 that most here should be aware of. However, you then make the unwarranted assumption that those strategies must take the form of regulation. That is not a valid assumption; an infinite class of alternative strategic sets is available.
The other necessary condition for cooperation to outperform competition is not mentioned: the presence of sufficient abundance for all. Only if there is genuinely enough for everyone will cooperative strategic systems deliver greater benefits for all than competitive strategic sets.
To be successful in the deepest of strategic senses, cooperative systems require both effective attendant anti-cheating strategies (an ongoing process of exploration of an infinite strategic space) and a context of sufficient abundance for all.
It is not enough simply for such abundance to be present; every individual must actually have both physical sufficiency for their needs and long-term security, and must be using an experiential model of reality that allows them to experience that abundance. Having the abundance present isn’t enough if the belief structures and interpretive schema in use do not allow the individual to experience it as such.
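To make the game-theoretic point concrete, here is a minimal sketch (my illustration, not part of the original argument) of an iterated prisoner’s dilemma with conventional illustrative payoffs. An unconditional cooperator is invaded by a cheat, while a simple attendant strategy (tit-for-tat) removes nearly all of the cheat’s advantage without any central regulation.

```python
# Minimal iterated prisoner's dilemma; payoff values are illustrative.
PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Total payoffs for two strategies over repeated rounds."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def always_cooperate(opponent_history):
    return "C"

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move: a simple
    # decentralised "attendant strategy" that punishes cheating.
    return "C" if not opponent_history else opponent_history[-1]

# A cheat thrives against unconditional cooperators...
print(play(always_defect, always_cooperate))   # (500, 0)
# ...gains almost nothing against tit-for-tat...
print(play(always_defect, tit_for_tat))        # (104, 99)
# ...while cooperators still do well with each other.
print(play(tit_for_tat, always_cooperate))     # (300, 300)
```

The point is not that tit-for-tat is the answer; it is one point in the infinite space of attendant strategies, and, notably, it is fully decentralised rather than regulatory.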

One of the biggest issues we have is the rather short-term modelling that most people do, and the rather high discount rates on future benefits that most cultures and people apply.
Evolution has equipped our brains with linear predictors, which worked well over most of our evolutionary past; they were great for predicting regular food sources or extrapolating the future position of a stalking predator. They do not work well when dealing with exponential change. In the short term there is little difference between a linear and an exponential sequence: the first two terms are identical, and the third differs by only one (linear: 1, 2, 3; exponential: 1, 2, 4). By the 30th term, however, the linear sequence has reached 30, while the exponential has passed 500 million.

Many exponential technologies are currently doubling in under a year on many key indicators. Once these systems achieve broad-spectrum, energetically efficient, molecular-level precision in manufacturing, the entire game changes. If it takes the first such machine two weeks to manufacture a second one, then within two years there can be one for every person: personalised manufacturing. We can already do molecular-level manufacturing, but only within very narrow contexts as yet, and with poor energy efficiency. That is going to change, exponentially.
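A few lines of Python make the gap concrete (my illustration; the two-week doubling time is the same illustrative assumption used above):

```python
# Linear vs exponential growth, and the self-replicating machine example.

def linear(n):
    return n  # nth term: 1, 2, 3, ...

def exponential(n):
    return 2 ** (n - 1)  # nth term: 1, 2, 4, ...

print(linear(3), exponential(3))     # 3 4         -> barely distinguishable
print(linear(30), exponential(30))   # 30 536870912 -> wildly different

# Machines that build a copy of themselves every two weeks (illustrative
# doubling time): how long until there is one for every person on Earth?
population = 8_000_000_000
machines, weeks = 1, 0
while machines < population:
    machines *= 2
    weeks += 2
print(weeks, "weeks ->", weeks / 52, "years")  # ~66 weeks, ~1.3 years
```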

Rational choice versus heuristic hacks, and what human beings do in practice.
We all have thousands of heuristic hacks present in us, organised in about 20 levels of systems and sensitive to different contexts in different ways. Many of those we get from our genetic history, in the structure and function of body and brain; many more we get from the unexamined assumption sets implicit in the cultural paradigms we happen to be born into.
All human behaviour is a complex mix of these heuristic hacks and rational choice.
To the degree that we take the time to learn about the heuristics present in both the genetic and cultural realities within us, we gain some level of rational choice over them.
To the extent that we lack such awareness of the influence of context upon us, we are subject to influence by others with greater awareness of such influence, and with the ability to alter contexts to influence outcomes in ways that benefit them (cheating strategies, in a sense).
And not all of reality is some sort of conspiracy; much of the reality of complex systems is beyond prediction, being random, chaotic, or unpredictable in many fundamentally different ways.
And even acknowledging all of those classes and levels of unpredictability, it is still possible for a rational agent, one with a sufficiently strong expectation of living a very long time and sufficient awareness of the exponentially increasing benefits possible from technology, to rationally choose to forgo short-term benefits in favour of the hugely greater long-term security, freedom, and physical benefits available from adopting sets of cooperative strategies (with evolving sets of attendant anti-cheating strategies) in our present.
It is actually rational to be cooperative, and to be as aware as possible of the long-term social, ecological and physical consequences of the actions one takes now, provided one can see that exponential technology is capable of delivering greater benefits to all in our future.
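As a hedged back-of-the-envelope sketch of that trade-off (all numbers are mine and purely illustrative), compare a one-off short-term gain against a stream of cooperative benefits that grows exponentially but is discounted:

```python
# Present value of an exponentially growing benefit stream under two
# discount rates. All numbers are illustrative assumptions.

def present_value(growth, discount, years, base=1.0):
    """Sum of base * growth**t / (1 + discount)**t for t = 1..years."""
    return sum(base * growth**t / (1 + discount)**t
               for t in range(1, years + 1))

short_term_gain = 100.0   # one-off payoff for defecting now
growth = 1.3              # cooperative benefits growing 30% per year

for discount in (0.50, 0.10):
    pv = present_value(growth, discount, years=30)
    winner = "defect" if short_term_gain > pv else "cooperate"
    print(f"discount {discount:.0%}: stream worth {pv:,.0f} -> {winner}")

# With a 50% discount rate the future is nearly worthless (stream ~6)
# and defection wins; at 10% (a long expected life, more patience) the
# exponentially growing cooperative stream (~968) dwarfs the quick gain.
```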

Which brings us to the greatest conflict of our age: the direct conflict between market values and human values.
It is undoubtedly true that in contexts of genuine scarcity (which have existed over most of human history), markets have tended to deliver many benefits across many domains: the variety and abundance of goods and services, the freedom of individuals, individual security, and distributed social coordination, to mention just a few.
However, there is a catch.
Markets are fundamentally based in scarcity, in exchange values. There is no exchange value in universal abundance, so no market mechanism will ever (of its own internal incentive structure) deliver universal abundance. Markets require some to be in poverty for others to experience abundance. If you doubt that, consider the market value of air – arguably the single most valuable thing for every human being, yet of no market value in most situations due solely to the fact that it is universally abundant.
In most of history, that was not a problem.
But now we have exponentially expanding automation capacity.
Any fully automated system can deliver universal abundance, provided the system has sufficient mass and energy as inputs, and we are short of neither: the sun emits enough energy for every individual currently alive to have more than humanity as a whole currently uses (rough arithmetic in the sketch below), and there is plenty of mass in this rock we live on.
But any such universal abundance has no market value.
In attempting to slow this devastating impact of automation on monetary values, we have seen an explosion of “intellectual property” laws: a legal mechanism designed to prevent universal abundance and to maintain the levels of marketable scarcity and profit that our exchange-based economic system requires. And vast amounts of human misery and death are directly attributable to these laws.
Automation changes everything.
As full automation of processes expands from the realm of pure information into the physical, markets become the single greatest existential risk to us all, as market values come into direct conflict with the values of the majority of humanity.
We are on the cusp of that reality right now.
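The energy claim above is easy to sanity-check. The figures below are rough published approximations (total solar output about 3.8e26 W, humanity’s primary energy use about 2e13 W), so treat the result as order-of-magnitude only:

```python
# Rough sanity check of the solar-energy claim above.
solar_output_w = 3.8e26   # total power radiated by the sun (approx.)
humanity_use_w = 2e13     # humanity's total primary energy use (approx.)
population = 8e9          # current world population (approx.)

per_person_w = solar_output_w / population
print(f"solar output per person: {per_person_w:.2e} W")
print(f"multiple of humanity's total use: {per_person_w / humanity_use_w:.0f}x")
# Each person's share of raw solar output is ~4.8e16 W, over two
# thousand times what all of humanity currently uses: the sun's raw
# output is not the limiting factor.
```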

The last piece of this puzzle is the myth of rules in complex systems.
Dave Snowden has created a great little simplification of decision making in complexity that he calls the Cynefin framework. He divides decision-making contexts into four categories (simple, complicated, complex, and chaotic) and shows clearly that rule-based systems can deliver useful outcomes only in simple contexts. In all the others, to get useful outcomes, individuals need sufficient freedom to develop understandings for themselves, to make mistakes, and to learn from the constantly evolving reality that complex systems deliver.
He develops the notion of “safe-to-fail experiments” and the need to include a significant level of randomness in the selection of which experiments to try, due to the many levels of “expert bias” present in high-level decision makers (a sketch of such selection follows at the end of this passage).
So it seems that we must all be socially and environmentally responsible, and that does not necessarily mean following any set of rules.
It seems we must all be given respect and freedom, and that freedom is not any sort of unrestricted licence to follow whim or fancy, but comes with a set of responsibilities.
The nature of those boundary conditions, between freedoms and responsibilities, is likely to vary substantially in different contexts.
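Here is a sketch of what randomness in experiment selection might look like (my construction, not Snowden’s published method; the 30% random fraction is an illustrative assumption): most of the portfolio follows expert ranking, while a deliberate remainder is drawn at random to hedge against expert bias.

```python
import random

def choose_experiments(candidates, budget, random_fraction=0.3, seed=None):
    """candidates: list of (name, expert_score). Returns chosen names:
    mostly top-ranked by experts, plus a random wildcard fraction."""
    rng = random.Random(seed)
    n_random = max(1, int(budget * random_fraction))
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    picks = [name for name, _ in ranked[:budget - n_random]]
    leftovers = [name for name, _ in ranked[budget - n_random:]]
    picks += rng.sample(leftovers, min(n_random, len(leftovers)))
    return picks

candidates = [("A", 0.9), ("B", 0.8), ("C", 0.7),
              ("D", 0.4), ("E", 0.3), ("F", 0.2)]
print(choose_experiments(candidates, budget=4, seed=1))
# e.g. ['A', 'B', 'C', 'E'] -- three expert picks plus one wildcard
```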

When one starts to seriously explore what freedom might mean, in a set of contexts containing a potentially infinite set of levels of awareness, with each level potentially infinite within itself, one must develop a profound tolerance for diversity, which is not at all the same thing as accepting cheating strategies.
No person can explore any infinity, let alone an infinite stack of them.
There is something fundamentally humbling in such an awareness, and also profoundly empowering.
Maintaining the benefits of cooperation imposes on each of us a requirement to expose and appropriately punish cheating strategies (something our existing legal systems do very poorly).
Elinor Ostrom showed clearly that there is only a very narrow band of punishment that is stable. Punish too little, and there is an incentive to cheat again; punish too much, and there is no incentive for the transgressor to return to cooperative action. Most legal systems have both maximum and minimum penalties, which is the exact opposite of what this logic demands. Just one more example of rule-bound systems producing profoundly perverse outcomes (or, alternatively, of cheating strategies dominating the cooperative).
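A toy model (entirely illustrative numbers, not Ostrom’s data) shows how such a band arises: the punishment must exceed the gain from cheating, yet stay below the value the transgressor places on returning to cooperation.

```python
# Toy model of Ostrom's "narrow band" of stable punishment.
cheat_gain = 10.0   # one-off benefit an individual gets from cheating
coop_value = 25.0   # value to the transgressor of future cooperation

# A punishment p is stable only if it deters cheating (p > cheat_gain)
# AND leaves a reason to come back (p < coop_value).
for p in (5.0, 15.0, 40.0):
    if p <= cheat_gain:
        verdict = "too little: cheating still pays"
    elif p >= coop_value:
        verdict = "too much: no incentive to return to cooperation"
    else:
        verdict = "stable: deters cheating, leaves a way back"
    print(f"punishment {p:>4}: {verdict}")
```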

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) - see www.tedhowardnz.com/money

Comment and critique welcome
