[written 16/1/20 – update 17/1/20]
We can’t, at least not in the sense of having both those things existing.
We have to understand the systems that are actually present, and the multiple levels of incentive to action that they actually produce. That is really complex: one cannot treat individual human beings as merely driven by incentives, even though many will behave that way in practice in many contexts. We are all more complex than that, and that difference is really important – it is the only thing that will actually produce real change.
We need to start by accepting that even the simplest person is more complex than anyone can actually comprehend, which is not to say that our simple models of them are not sometimes useful and accurate.
Next we need to look seriously at what markets and money are, and the sorts of outcomes that are produced in reality over time in different contexts.
The context of money and markets has changed significantly in the last few decades and will change even more in the next decade or so.
For most of recorded history most things were genuinely scarce, and markets were a very useful tool to aid freedom and development at many different levels.
With the advent of digital automation that started to change. With the advent of fully automated manufacturing it has changed again. With advanced Artificial Intelligence it is changing at yet another level.
Now we have the physical capacity (with fully automated systems) to meet the reasonable needs of everyone on the planet for most things; but the systemic incentives are concentrating more and more of the control of goods and services into fewer and fewer hands, and the competitive nature of markets, in the presence of exponential technology, now poses multiple and increasing levels of existential risk to everyone.
As always, the technology itself is neutral; it is what we choose to do with it that matters.
We have set up a fundamentally competitive (rivalrous) system (money and markets) at the base of our global system.
When most things were genuinely scarce, and technology was relatively simple, that system did in reality generally lead to higher levels of cooperation; and was actually of genuine benefit to the majority of people much of the time.
That started to change seriously as technology began to increase the leverage of power available to individuals. Now bullies don’t just have two or three times the power of those they abuse, but can have billions of times the power.
In chimpanzee society, two individuals of nearly equal strength can cooperate to take down a leader that has crossed some socially acceptable threshold.
Now in human society, we have developed such powerful weapons that any all-out conflict threatens our entire species, and quite a significant fraction of other life on the planet.
Rivalrous (competitive) game spaces are simply no longer stable. The context has changed, fundamentally and permanently.
In times past, leaders needed people to be productive to give them the goods and services that they controlled. Now, with fully automated systems, that is no longer the case. The controllers of those systems have power completely independent of the rest of humanity. And that process is getting exponentially more dangerous.
There is no competitive solution to that problem space that is stable. It is a self-terminating trajectory – always.
We are on that trajectory because of a set of fundamental misunderstandings, oversimplifications of reality.
One oversimplification is about the nature of complexity and reality. Some people still think that all systems may be known and predicted; but reality is (beyond any shadow of reasonable doubt) far more complex than that. The simplest model of complexity I have encountered that gives reasonable utility comes from David Snowden (the Cynefin framework) and has four classes of complexity: simple, complicated, complex, and chaotic.
Humans tend to classify everything as either simple or complicated, and to ignore or mischaracterise the complex and the chaotic. The more stressed we are, the greater the tendency of our subconscious to present our conscious minds with simple options (the simplest being binaries like true/false, right/wrong, good/bad). There are very strong evolutionary reasons for our brains to behave in this way; it worked in our past, but it poses exponentially increasing risk in our current context.
Very little of reality is simple.
Most human constructs are complicated, and a few are complex.
Most of reality is complex, and some of it is chaotic.
Neither complex nor chaotic systems can be predicted, but for very different sets of reasons.
Complex systems will always respond in unexpected ways, so one has to take an iterative approach to working with them: probing, sensing how they respond, dampening the things going in ways you don’t want, amplifying those going in desired directions, and repeating.
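That probe–sense–respond loop can be sketched in code. This is a minimal toy sketch, assuming Python; the `noisy_system` function here is a stand-in of my own invention (real complex systems are of course not reducible to a function), and it exists only to show the shape of the loop: probe, sense the response, amplify what helps, dampen what does not.

```python
import random

def probe_sense_respond(system_response, state, target, steps=30):
    """Iteratively work with a system whose responses are never fully
    predictable: probe with a small move, sense the result, amplify
    moves that help, dampen those that don't, and repeat."""
    step = 0.5
    for _ in range(steps):
        direction = 1.0 if target > state else -1.0
        candidate = system_response(state + direction * step)  # probe
        if abs(target - candidate) < abs(target - state):      # sense
            state = candidate   # amplify: keep the move, probe more boldly
            step *= 1.2
        else:
            step *= 0.5         # dampen: shrink the next probe
    return state

# Toy "system": it never responds with exactly what was asked for.
random.seed(0)
noisy_system = lambda x: x + random.uniform(-0.3, 0.3)
final = probe_sense_respond(noisy_system, state=0.0, target=5.0)
```

Note that the loop never assumes the system will do what was asked; it only ever evaluates what the system actually did, which is the essential difference from a plan-then-execute approach.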
Chaotic systems obey no predictable rules, by definition.
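The unpredictability is worth illustrating, because even a fully deterministic rule can be unpredictable in practice: any imprecision in measuring the starting state is amplified exponentially. The logistic map is the textbook mathematical example (deterministic chaos in the mathematical sense, which is adjacent to, rather than identical with, Snowden’s chaotic domain):

```python
def logistic_step(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), chaotic at r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0, steps=50):
    """Iterate the map from x0, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1]))
    return xs

# Two starting states that differ by one part in a million.
a = trajectory(0.200000)
b = trajectory(0.200001)

early_gap = abs(a[1] - b[1])  # after one step, still tiny
late_gap = max(abs(x - y) for x, y in zip(a[25:], b[25:]))  # diverged
```

The rule is as simple as rules get, yet after a few dozen steps the two trajectories bear no useful resemblance to each other, so any prediction based on a measured (hence slightly imprecise) starting state fails.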
One way in which most people have oversimplified is in respect of evolution.
Most think of evolution as competition, but that is only a small part of the picture.
In terms of complex systems, evolution is much more about cooperation than it is competition.
Some who champion the use of markets justify them on the grounds that the competition of markets allows cooperation to develop; and there is a degree of truth in that. But it is not competition as such that does the job; it is the threat that competition poses. The mistake is thinking that “competition” is required to help stabilise new levels of cooperation, when any source of “threat” will do (provided that there is some mechanism by which cooperative behaviour can mitigate the threat) – it does not need to be the threat of competition.
We are not actually short of threats at present. Climate change is one of the lesser ones, but it is quite sufficient for the purpose of establishing a new level of global cooperation.
And it is important to have a clear distinction between cooperation and control.
Global cooperation is nothing at all like global control.
Global cooperation is about reaching agreements about the sorts of boundaries required for survival and freedom.

If the idea that boundaries are required for freedom seems odd, then spend a little time thinking about it. We don’t give our toddlers the freedom to walk out onto highways, or to walk off cliffs. Such paths are self-terminating. The freedom to self-terminate without intending to do so is not compatible with a respect for life. To the degree that we reasonably can, a respect for life (our own or anyone else’s) demands that we put in place reasonable boundaries that have a reasonable probability of keeping us all safe. If we think seriously about it, we all accept (and always have) that freedom must have boundaries, to protect the lives and liberties of all.

And all such things are open to capture by invasive strategies that “cheat” on the cooperative, and so require entire ecosystems of anti-cheating strategies (which is what our legal systems are supposed to be, though for the most part they have been captured by higher levels of cheating strategies).
So we live in very complex times.
Solving the climate change problem is relatively simple, inside of a globally cooperative context.
There is no way to maintain global cooperation in a system based upon competitive markets (where the value measure is based in scarcity, and delivers zero for anything universally abundant). There are just too many perverse incentives (at all levels).
So we need the levels of cooperation that we see within corporations (and much more) and we need them without the market context.
Climate change gives us a non-competitive source of risk that can be countered through cooperative activity.
Autonomous technology gives us the tools to meet the reasonable needs of everyone, without needing to take anything from anyone.
It is a confluence of contexts that allows for the emergence of a new level of cooperation, and entirely new levels of security and freedom for all as a result.
And that is only possible in a truly cooperative context. At this level of technology, any form of seriously rivalrous or competitive game is self-terminating, and has the characteristic of taking all or most of the rest of us with it. So we all have a very serious responsibility to be as alert as we can be to the emergence of anything that looks like that.
So the only realistic answer to the question is to change the “game space”, progressively removing money and profit from the equation. That is seriously complex, and will require many levels of system development to perform all the really important and essential functions that market systems currently do, without the self-terminating incentives that markets have in the context of fully automated systems. And it is not a stable solution to say that we simply remove automated systems, as we need those systems to remove many well characterised risks that have no other stable solution.
That is all doable, we just need to do it.
Trying to do it via any form of centrally controlled mechanisms is almost certain to fail.
What is required is multiple levels of “safe to fail” experimentation.
Those can be kept “safe” via multiple levels of communication and oversight (multi-directional).
So the short answer to the question is by moving from a competitive rivalrous “game space” based on money and markets, to a cooperative game space empowered by automated technology.