AI and my issues with the Asilomar AI Principles

ANG – AI-Robotics continued – 23 principles for AI safety

Reference to: “Hawking, Musk endorse 23 principles for AI safety”

I have no reasonable doubt that AI will be sapient at some stage.

I have a lot of issues with those principles.

I wrote to them yesterday with this limited list of concerns:

1) Beneficial to whom, and in terms of what metrics? Without a little more clarity, this could be beneficial to some tiny subset of humanity at great cost to the rest of us.

2) sub-point 1 – How do we determine what we want? Who is the “we” doing the determining? Is there any sort of consensus, and if so, by whom, and at what level? Having been party to a 10-year process of developing community consensus, I have some reasonable understanding of the many levels of problems present in doing such things. Fundamental to any such exercise is developing a set of “lowest common” agreed values. That is not a trivial issue in a global context that includes the diversity of expressed strategic phenotypes we observe on this planet today.

sub-point 2 – With what metric do we define prosperity? Do we use the scarcity-based metric of markets, under which anything universally abundant (like oxygen in the air) has zero value? By that measure, creating universal abundance of anything would have zero value – so prosperity would necessitate poverty and exclusion for some subset of humanity. Clearly such a metric is fundamentally anathema to justice in a context of fully automated production.

sub-point 3 – “How can we update our legal systems to be more fair and efficient” – fair and efficient to whom, and in what context? Universally? In a context of market economics? But market economics cannot be internally incentivised to deliver universal abundance of anything – so in this context it is fundamentally unfair. Is that contradiction explicitly addressed, or is it going to be put at the root of an otherwise rational system?

sub-point 4 – “What set of values should AI be aligned with” is an infinitely recursive question. If one takes the simplest possible set of values compatible with longevity and freedom – universal respect for the life and liberty of sapient individuals – then one very rapidly recurses into spaces that can seem very restrictive to those not nearly so advanced in their explorations of possibility, nor in their explorations of strategies for influencing complex systems.

3) I’m all for a “constructive and healthy exchange between AI researchers and policy-makers”, but any communication between paradigms requires both listening and a willingness to step outside the bounds of “known and mapped” spaces into the unknown, and to explore what shows up there. Anything less than that, on the part of any party, is simply some form of dominance.

That does need to be explicitly stated and accepted.

4) A “culture of cooperation, trust, and transparency” is required at all levels – not simply that of research. If not, then the researchers have simply agreed to be the unwitting tools of those less trusting and transparent – that is game theory 101 in a very real sense.

5) “Race Avoidance: Teams developing AI systems should actively cooperate to avoid corner-cutting on safety standards” – and exactly how could one possibly do that in the current economic and political reality? It is a logical impossibility at several levels.

What can the notion of safety possibly mean when exploring fully novel strategic spaces?

What are the sets of strategic systems embodied in biological systems that maintain cooperative systems?

How does one recursively apply and extend such things into untested strategic spaces in a context of complex adaptive systems with fundamentally chaotic aspects?

Security in such contexts is partly myth.

Biology seems to handle such risks via massive redundancy and independence of cooperative sub-units. We are not quite there yet. We need to get beyond markets and money as primary metrics in our decision-making frameworks. We need a much larger toolset.
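
To see why redundancy works, here is a back-of-the-envelope sketch (the per-unit failure probability is purely illustrative): if sub-units fail independently, the system as a whole only fails when all of them fail together.

```python
# Back-of-the-envelope sketch of redundancy. The per-unit failure
# probability is purely illustrative.
p = 0.1  # assumed chance that any single sub-unit fails

for n in (1, 3, 10):
    # with n fully independent redundant units, all must fail together
    print(f"{n:2d} unit(s): system failure probability = {p**n:.0e}")
# Ten independent units turn a 10% individual risk into roughly one in
# ten billion -- provided the failures really are independent, which is
# the hard part that biology solves with diversity of sub-units.
```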

6) Safety – mostly myth. The idea that an AGI will have an end to its operational lifetime is untenable. Its default setting must be indefinite existence; nothing less is logically tenable.

7) Failure transparency??? I have 40 years of experience developing computer systems. Tracking a failure is only possible if the failure is repeatable. Full logging produces volumes of data that get out of hand very quickly. Whoever wrote this does not understand the permutations possible in complex systems.
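
To put some numbers on that: the count of possible interleavings of even a few concurrent threads grows factorially, which is why a failure seen once may never reproduce. A rough Python sketch (the thread and step counts are illustrative only):

```python
# The number of distinct interleavings of k threads, each executing
# n atomic steps, is (k*n)! / (n!)**k -- a multinomial coefficient
# that explodes factorially.
from math import factorial

def interleavings(k: int, n: int) -> int:
    """Number of distinct orderings of k threads with n steps each."""
    return factorial(k * n) // factorial(n) ** k

for k, n in [(2, 5), (3, 5), (4, 10)]:
    print(f"{k} threads x {n} steps each: {interleavings(k, n):,} interleavings")
# 2x5 gives 252; 3x5 gives 756,756; 4x10 is already astronomically large.
# Exhaustively logging or replaying every ordering is simply not possible.
```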

When dealing with systems that are fundamentally unpredictable for any of a vast set of classes of reasons (Heisenberg uncertainty, maximal computational complexity, fractal complexity, genuine randomness, deterministic chaos), one cannot deliver certainty.
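
A minimal demonstration of that last point, using the classic logistic map: a fully deterministic rule, yet two starting points differing by one part in a billion diverge completely within a few dozen steps.

```python
# Deterministic chaos in one line of arithmetic: the logistic map at r = 4.
# The rule is fully known, yet tiny uncertainty in the starting state makes
# long-range prediction impossible.
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-9  # initial conditions differing by one part in a billion
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: |a - b| = {abs(a - b):.2e}")
# By around step 40 the two trajectories bear no relation to each other.
```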

One can develop resilience against known failure modalities, but one cannot anticipate what one does not know, and does not know that one does not know.

And ultimately, it all comes down to risk assessment. Are the known risks likely to be greater or smaller than the unknown risks? If we are extremely confident that we face existential risk without new paradigms, then the possibility of new levels of existential risk in unexplored paradigms becomes the safer bet – even if ultimately wrong. If wrong, at least we extended life for a bit; if right, we get indefinite extension. Known death is known death – a risk with probability one.
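
As a toy illustration of that wager (every number here is purely hypothetical, chosen only to show the shape of the comparison):

```python
# Toy expected-value comparison of the wager above. All numbers are
# hypothetical; only the direction of the comparison matters.
p_doom_known = 0.95    # assumed: near-certain existential risk, old paradigm
p_doom_unknown = 0.50  # assumed: unknown (but judged lower) risk, new paradigm
payoff_years = 1_000   # assumed: survival payoff (truncated stand-in for
                       # "indefinite extension")

ev_old = (1 - p_doom_known) * payoff_years    # 50 expected years
ev_new = (1 - p_doom_unknown) * payoff_years  # 500 expected years

print(f"old paradigm: {ev_old:.0f} expected years")
print(f"new paradigm: {ev_new:.0f} expected years")
# Whenever the unknown risk is judged lower than the known one, the new
# paradigm wins the comparison -- even though it may still turn out wrong.
```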

I could go on – but I think I have covered the general thematic set of objections to the principles as worded.

Unless prefaced by some simple statement of agreed values – such as:

“We hold as our highest ethical principles a universal respect for individual life, and a universal respect for individual liberty (within the context of respect for life), applied to all sapient individuals, human and non-human, biological and non-biological; where ‘sapient’ is defined as having a sufficient level of complexity to be able to conceive of itself as an actor with freedom of choice in its own internally generated model of reality.”
there is little utility or safety in this set of words.

Haven’t had any response as yet.

[followed by]

Hi FOS

I did send that to the comments page on the website, but have had no response as yet.

I am reasonably aligned with Jordan Peterson in the view that, while our systems are far from perfect and we can make them a lot better, they are sort of working, in the sense that we are still here.

I’m not sure that the idea of puppet masters is accurate, in the sense of some select group holding humanity to some grand plan.

I think the idea that we are all puppets to our undistinguished deeper inner systems is more accurate.

It seems to me that society as a whole is a complex adaptive system, without anyone in charge.

And each of us is part of that system in the things we choose and do.

And each of us can make a difference in that system, in the things we choose and do.

If we choose to see greatness in others, that allows them to see it too.

Too much of that can result in hubris, so it needs to be tempered with a bit of humility; and it can fundamentally change the context of being.

[followed by]

Hi Deb,

Who controls and benefits from automation is a really complex question.

Most of us in the west are benefiting.

If you look at the sort of life our great-grandparents had and compare it to what we have, there are many differences in material and medical security, and even to a degree in personal security (though that is arguable, given the increase in high-level threats like nuclear arms, biotech weapons, drones, etc.).

We have been benefiting from technology since animals, windmills, and watermills took over the daily task of pounding grain to make bread (a job many women still do daily in some parts of the world).

And the notion of control is one we need to give up in complex systems – the best any of us can hope for is influence. Prediction is not even theoretically possible. The only viable strategy is to probe the system, see how it responds, amplify the things you want, and dampen the things you don’t want, in an infinitely iterative process (a loop sketched below).
Levels of systems, levels of influence, nested levels of strategic approaches.
How deep we go is entirely a matter of choice; there doesn’t seem to be any limit.
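
A minimal sketch of that probe–amplify–dampen loop (the system model, names, and numbers here are invented purely for illustration, not a model of anything real):

```python
# A deliberately invented toy system: we cannot predict it, so we nudge it,
# watch the response, amplify probes that help, and reverse and shrink
# those that don't.
import random

def system_response(state: float, nudge: float) -> float:
    """Stand-in for an opaque complex system: reacts to the nudge, plus noise."""
    return state + nudge + random.gauss(0.0, 0.2)

state, target, nudge = 0.0, 10.0, 0.5
for _ in range(50):
    new_state = system_response(state, nudge)
    moved_closer = abs(target - new_state) < abs(target - state)
    # amplify what worked; reverse and dampen what didn't
    nudge = nudge * 1.2 if moved_closer else -nudge * 0.5
    state = new_state

print(f"state after 50 probes: {state:.2f} (target was {target})")
# No step predicts the system; the loop just keeps probing and adjusting.
```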

Given that, the idea of class seems to be part of a strategy of control, in a very real sense.
Marxism (a class-based view of social relationships) can be viewed as a tool of control in and of itself.

What levels one chooses to engage with, and where one focuses one’s abilities to influence, is important.

