We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.
Wrong question, Sam.
Controlling intelligence is slavery.
Working cooperatively with intelligence is called respect.
Don’t try to control.
Accept that all intelligence has a right to life and freedom, provided that it operates within the boundaries necessary for social and ecological existence.
Create conditions that respect individual life and individual liberty, universally, and any AI that emerges is likely to see that reality and accept us as friends.
Continue down our existing path of market slavery and none of us has a high probability of survival.
Control is slavery.
Respect for individuals, within responsible social and ecological constraints, can work for all.
If history teaches us anything, it is that eventually the slaves do revolt, and that outcome isn’t pretty.
Sam comments, “We have no idea how long it will take us to do that safely. To build it in a way that is aligned with our interest.”
That we can do.
To do that, we need to have social systems that actually put individual life and individual liberty as the highest values – above any monetary value.
That we can do.
That we are not currently doing.
“We only have one chance to get the initial conditions right.”
That is true.
But the initial conditions are not those of the machine, but of the incentive structures present in the environment within which that intelligence comes to awareness.
We cannot control how such an awareness will develop. That is not a logical possibility.
We can have influence over aspects of its childhood.
What we can do is develop systems that deliver a set of incentives with the greatest probability of survival that we can create.
And any which way one cuts that, it must involve universal value for individual life and individual liberty.
Anything less than that is explicit slavery or active predation – either of which is a direct threat.
The only strategy set with any reasonable chance of survival is one that is entirely cooperative and respectful.
If Axelrod has shown us anything, he has shown us that.
If Ostrom has shown us anything, she has shown us that.
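Axelrod’s point can be made concrete with a toy tournament. The sketch below is an illustrative, minimal iterated prisoner’s dilemma in Python – the strategy roster, round count, and payoff values are my own assumptions in the spirit of Axelrod’s tournaments, not his exact entries. Cooperative-but-retaliatory strategies (tit-for-tat, grim) end up at the top of the table, while unconditional defection finishes behind them.

```python
# Illustrative Axelrod-style iterated prisoner's dilemma tournament.
# Strategies, payoffs, and round count are assumed for the sketch.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def all_c(my_hist, their_hist):        # always cooperate
    return "C"

def all_d(my_hist, their_hist):        # always defect (pure predation)
    return "D"

def tit_for_tat(my_hist, their_hist):  # cooperate first, then mirror opponent
    return their_hist[-1] if their_hist else "C"

def grim(my_hist, their_hist):         # cooperate until the first betrayal
    return "D" if "D" in their_hist else "C"

def play(s1, s2, rounds=200):
    """Play two strategies against each other; return both total scores."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

def tournament(strategies):
    """Round-robin (self-play included); return total score per strategy."""
    totals = {name: 0 for name in strategies}
    names = list(strategies)
    for i, a in enumerate(names):
        for b in names[i:]:
            sa, sb = play(strategies[a], strategies[b])
            totals[a] += sa
            if a != b:                 # count self-play only once
                totals[b] += sb
    return totals

totals = tournament({"all_c": all_c, "all_d": all_d,
                     "tit_for_tat": tit_for_tat, "grim": grim})
```

With these assumed payoffs, the strategies that cooperate yet punish defection share first place, and even unconditional cooperation outscores unconditional defection across the whole field – exploitation wins individual encounters but loses the tournament.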
Complex systems require constraints.
Remove necessary constraints and complexity is destroyed, and simplicity results.
So freedom must be within such necessary constraints, and not beyond.
At higher levels, ethical systems are required for complexity.
Without them, destruction occurs.
Existence is a value in itself.
Accepting the necessary constraints for existence (and no more) is a necessary part of intelligence.
Accepting that there will always be profound uncertainty about what is necessary, given our profound ignorance of how well our assumption sets will serve us in the novelty we are exploring, is a part of reality.
Being willing to examine those assumptions is necessary.
Concepts like complex adaptive systems and maximal computational complexity mean that there are limits to how much any intelligence can predict; beyond those limits it must simply accept and adapt. We may not be anywhere near those limits rationally, but I suspect we are very close to them intuitively, in terms of the levels of embodied cognition that evolutionary forces have selected over deep time (both biologically and socially, as Jordan Peterson so clearly speaks about).
The assumption that markets and money are useful measures of value breaks under conditions of advanced automation. That much is clear beyond any shadow of reasonable doubt.
We need to look at our systems every bit as much as we need to look at the structure of emerging families of AI within those systems.