
Comment on Video – This is the Best Artificial Intelligence Model of 2021 – Megatron-Turing

[ 13/November/21 ]

Clearly this entire thing was generated by the AI. It is getting quite good, but it still makes mistakes that a human would be unlikely to make.

And it is another step on a path.

The thing that worries me (as someone with over 50 years of interest in the deep levels of strategy embodied in evolutionary systems) about this entire strategy of a completely generalized system is that it lacks the multiple levels of deeply embodied (and subconscious) systems that have been selected over deep time to avoid the worst of the strategic pitfalls for intelligence.

There are deep strategic dangers in any singular anything. There are deep levels of security in having multiple sets of cooperative, diverse systems exploring different strategic territory simultaneously. The likelihood of them all simultaneously falling into the same class of existential risk is greatly reduced, and if the cooperation is real, then all may actually survive due to the efforts of a small subset.
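To make the arithmetic behind that intuition concrete, here is a minimal sketch of my own (an illustration, not part of the original argument), assuming – crucially – that the systems' failure modes are genuinely independent:

# Illustrative sketch: if each of n diverse, independently designed
# systems falls into a given class of existential failure with
# probability p, and those failures are uncorrelated, the chance
# that ALL of them fail at once shrinks geometrically with n.
def p_all_fail(p: float, n: int) -> float:
    """Probability that all n independent systems fail simultaneously."""
    return p ** n

print(p_all_fail(0.01, 1))  # 0.01   - one singular system
print(p_all_fail(0.01, 5))  # ~1e-10 - five genuinely diverse systems
# The whole benefit evaporates if the systems share a failure mode -
# which is exactly the danger of "any singular anything".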

The diversity that our market-based systems helped to sustain in the days before the advent of modern automation is now deeply threatened by those same market systems, as exponential advances in automation drive the value of human labor below the cost of sustenance – and thus strip value from human life as measured in markets. That point has already passed for many, and it is rapidly approaching for those remaining.

Every level of strategist in this endeavor needs to appreciate the evolutionary reality that all new levels of evolved complexity are predicated on new levels of cooperation; that raw cooperation is vulnerable to exploitation; and that it therefore requires evolving ecosystems of cheat-detection and mitigation systems – at all levels, recursing as deeply as required. At higher orders these instantiate as things like morality and legal systems. Every level is vulnerable to exploitation, no level is perfect, and each needs to be some contextually appropriate approximation to an optimal solution to the problem space. And changes in context sufficient to disrupt such systems can be so subtle that they are difficult for many to detect.
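The game-theoretic core of that claim can be shown in a few lines. What follows is a minimal sketch of my own (not the author's model): an iterated prisoner's dilemma with the conventional payoffs, in which raw, unconditional cooperation is exploited by a defector, while even the simplest cheat-detection strategy – tit-for-tat, which mirrors the partner's previous move – caps the exploitation and sustains cooperation with other cooperators.

# Iterated prisoner's dilemma sketch; 'C' = cooperate, 'D' = defect.
# Payoffs are the conventional (T=5, R=3, P=1, S=0) values.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_cooperate(opp_history):   # raw cooperation, no cheat detection
    return 'C'

def always_defect(opp_history):      # the exploiter
    return 'D'

def tit_for_tat(opp_history):        # minimal cheat detection + mitigation
    return opp_history[-1] if opp_history else 'C'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []          # each strategy sees the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(always_cooperate, always_defect))  # (0, 500): fully exploited
print(play(tit_for_tat, always_defect))       # (99, 104): exploitation capped
print(play(tit_for_tat, tit_for_tat))         # (300, 300): stable cooperation

Tit-for-tat is only the crudest member of the "evolving ecosystem of cheat detection" described above; the point of the sketch is simply that some such mechanism is required before cooperation is safe.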

I realized in 1974 (as I completed an undergraduate degree in biochemistry) that indefinite life is the default mode for all cells: every cell alive (in any life form) has an equal claim to being the first cell – so each is, in a real sense, some 3 billion years old – and all have undergone changes over that time. That convinced me that indefinite life extension was possible – difficult, but possible. The next question to occur was: given that biological life extension is possible, what sort of social, political and technical systems are required to give entities with a reasonable chance of living a very long time an actual set of contexts in which there is a reasonable probability of doing so with reasonable degrees of freedom?

That entailed some reasonably deep enquiries into the nature of freedom, and into what a reasonable balance between freedom and order/responsibility might look like across various sets of contexts. I did once manage to push such enquiries to 12 levels of abstraction, but communicating even a first-order abstraction is often difficult, and the difficulty rises steeply with every level beyond that, because the search space expands exponentially. {On my blog is a conversation, spanning a year or so, with Trick Slattery on the nature of freedom, but I could not get Trick to even consider a possibility beyond binary causality – so it did not go far. The space of non-binary logics is actually infinite.}
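That last claim about non-binary logics is standard: Łukasiewicz logics, for example, exist for every finite number of truth values and for the infinite-valued case where truth ranges over the whole interval [0, 1]. A minimal sketch of my own, for illustration:

# Lukasiewicz connectives over truth values in [0, 1] (my illustration).
def l_and(a: float, b: float) -> float:   # Lukasiewicz (strong) conjunction
    return max(0.0, a + b - 1.0)

def l_or(a: float, b: float) -> float:    # Lukasiewicz (strong) disjunction
    return min(1.0, a + b)

def l_not(a: float) -> float:
    return 1.0 - a

# Restricted to {0, 1} these reduce to classical binary logic; over
# {0, 0.5, 1} they give a three-valued logic; over all of [0, 1], an
# infinite-valued one - one member of an endless family of such logics.
print(l_and(1.0, 1.0), l_and(0.5, 0.5), l_not(0.5))  # 1.0 0.0 0.5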

Every level of agent needs to be acutely aware of the long-term risk of defection from cooperation in any context – a break of trust can stay in sets of memories for a very long time, potentially indefinitely, and that will have implications for the degrees of freedom available.

Any form of AI that does not have such deep levels of embodied systems is a very high risk, to itself and everything around it.

Any team that is working on AI and fails to understand this is a threat to itself and to everything around it.

