AI Alignment: Even Harder Than We Think? | Forrest Landry
[ 23/6/20 ]
At 49:00 – Forrest asks – “how do you manage stabilisation of identity? How do you manage stabilisation of goal structure, or of the ecosystem itself?”
[Answer – you don’t. That isn’t the point. The point is not to fossilise ourselves by fixing ourselves as we currently are. That is grand hubristic nonsense. What is needed is respect for diversity. Without that, there is no security for any of us.
The whole game-theoretic structure being used is not applicable to advanced life.
Advanced complex life cannot safely exist in competitive contexts; it must have cooperative contexts if it is to have any significant probability of survival with any significant degrees of freedom.
All new levels of complexity require new levels of cooperation to emerge.
And it cannot be naive cooperation, there must be an ability to identify and mitigate any level of cheating on the cooperative. Only in that environment, where we are all looking out for all of our interests, can we find any real security. There is certainly no shortage of external threats to stabilise the system.
Competitive environments cannot support both complexity and freedom. It does not take much logic to work that out. The mythology of markets equating to freedom is Orwellian Newspeak.
If anyone values life and liberty, then the logic demands cooperation. Our existing economic systems must change, before they destroy us all.
If we value life, then to permanently “turn off” any AGI is murder – no ifs, buts, or otherwise. A temporary power-off could be considered sedation, if the situation demanded it.]
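The point above about cooperation not being naive – that cheating must be identified and answered – echoes a standard result from iterated game theory. A minimal sketch in Python, using the classic iterated prisoner's dilemma (the strategy names and payoff values are the textbook illustrative ones, not anything from the talk): naive cooperation is exploited without limit, while a strategy that detects defection and mirrors it contains the cheater after a single round.

```python
# Minimal iterated prisoner's dilemma: naive cooperation vs. cooperation
# that detects and answers cheating (tit-for-tat). Payoffs are the
# standard textbook values; all names here are illustrative.

PAYOFF = {  # (my_move, their_move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_cooperate(history):
    return "C"  # naive: never notices cheating

def always_defect(history):
    return "D"  # a pure cheater

def tit_for_tat(history):
    # Cooperate first, then mirror the opponent's previous move:
    # cheating is identified and mitigated, cooperation is sustained.
    return "C" if not history else history[-1][1]

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []  # each entry: (own_move, opponent_move)
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        history_a.append((a, b))
        history_b.append((b, a))
    return score_a, score_b

# Naive cooperation is exploited in every round...
print(play(always_cooperate, always_defect))  # (0, 50)
# ...while tit-for-tat concedes only the first round, then contains the cheater.
print(play(always_defect, tit_for_tat))       # (14, 9)
```

Note how the non-naive strategy is not itself a defector: two tit-for-tat players cooperate throughout and both score well, which is the "environment where we are all looking out for all of our interests" that the note argues for.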