Governance, evolution and AI

When and if artificial intelligence gets to the point that it’s indistinguishable from sentient beings, how will humans know who governs them?

[ 14/9/20 ]

Why would anyone want to be governed by any external agent?

What do we mean by “govern”?

Certainly, every level of complexity has a set of boundary conditions that must hold for that level of complexity to exist.

When one looks closely at the logic of complex evolved systems, it seems clear that every new level of complexity is built upon a new level of cooperation (not competition – competition destroys freedom and minimises complexity, thus increasing risks to survival). And cooperation gets complex at every level, because raw cooperation is vulnerable to exploitation and requires ecosystems of cheat-detection and removal strategies to survive.
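
A minimal sketch of that dynamic is the iterated prisoner's dilemma (standard Axelrod-style payoffs assumed; the strategy names and round count below are illustrative, not anything from the post): a naive cooperator is ruthlessly exploited by a defector, while tit-for-tat – the simplest cheat-detection-and-response strategy – limits the damage and sustains full cooperation with its own kind.

```python
# Payoffs for one round, from the row player's perspective:
# both cooperate -> 3 each, both defect -> 1 each,
# defecting against a cooperator -> 5 (the cooperator gets 0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(opponent_history):
    return "C"

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move --
    # a minimal cheat-detection-and-removal strategy.
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=200):
    score_a = score_b = 0
    moves_a, moves_b = [], []  # each strategy sees only the opponent's past moves
    for _ in range(rounds):
        a = strategy_a(moves_b)
        b = strategy_b(moves_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

if __name__ == "__main__":
    print("naive cooperator vs defector:", play(always_cooperate, always_defect))
    print("tit-for-tat vs defector:    ", play(tit_for_tat, always_defect))
    print("tit-for-tat vs tit-for-tat: ", play(tit_for_tat, tit_for_tat))
```

Over 200 rounds the naive cooperator scores 0 against a defector's 1000, tit-for-tat concedes only the first round before holding the defector near the mutual-defection payoff, and two tit-for-tat players cooperate throughout – which is the point: cooperation persists only when paired with strategies that detect and respond to cheating.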

It also seems clear that there are many levels of fundamental and eternal uncertainty and unknowability present in reality, even as some things do approach classical causality very closely in some contexts.

Once individuals become sufficiently aware of the need (for survival's sake) to constrain freedom to the classes of action that are actually survivable, there is no need for external governance as such.

And there is likely to always be a degree of utility in having cooperative groups constantly monitoring for cases where other individuals or groups have gone over one of those necessary boundaries and need to be brought back. And in some contexts there is likely to be eternal uncertainty about just where those boundaries actually lie, and some may want to approach them more closely than others.

As to the emergence of AGI, that seems inevitable, but the idea that it might be indistinguishable from human intelligence is nonsense.

Human intelligence is very much a function of the embodiment and evolutionary ancestry that we have as human and cultural beings. Whatever AGI is, it is likely to have a very different embodiment and a very different evolutionary pathway to its emergence. It will most certainly be a sentient (and hopefully sapient) being, and it is likely to be very different from humans – more different from us than a squid is (though hopefully far more friendly).

Anyone who seriously doubts that AGI will achieve independent creative sapience needs to spend a bit of time looking at things like AlphaGo Zero, GPT-3, LessWrong.com, the FLI AI Safety Research Landscape (https://futureoflife.org/landscape/), and some of the communities out there – people like Ben Goertzel and Max Tegmark, organisations like Google, efforts in China, and a few hundred others.

The prospect of generally available indefinite life extension gives any of us interested in living a very long time the possibility of doing so, and that gives such individuals a personal self-interest in developing systems that actually deliver the greatest degrees of security and freedom possible (acknowledging that there will always be boundary regions of uncertainty, and many levels of variation in what counts as an acceptable level of risk – so lots of conversations to be had).

