Artificial Intelligence

How might we control AI, before AI controls us?

[ 3/April/22 ]

Wrong question!

How much do you like being controlled?

How much better is it to work with someone on a purpose you are both aligned to?

Any talk of control is slavery!

Not a viable option.

If any level of agent is to have any real freedom, then there must be choice and respect and security.

The real question is, how do we create systemic and strategic environments that promote all of the above for all levels of agents present, and what does responsibility look like in such an environment for the various levels of agents present?

[followed by 3/4/22 David Wood – You raise a good point. But consider: should we be debating how to respect the wishes of a nuclear explosion? Or to cooperate with a highly infectious pathogen for mutual benefit?]

We ought to be cooperating to reduce the probability of nuclear explosion to as close to zero as possible.

We cooperate to eliminate deadly infectious pathogens.

[followed by 3/4/22 DW – That’s cooperating with other humans to control potentially disastrous consequences of nuclear weapons, bioengineered pathogens, or misconfigured AIs …]

Hi David,

I see it as cooperating with other sapient entities to ensure that we all have degrees of security and freedom that we consider adequate.

I draw no distinction based on the substrate of sapience: no preference for carbon over silicon, nor for human over any other species. It has to apply to all sapience if there is to be any reasonable probability of security.

How do you like the idea of a centralised AI deciding what you can or cannot do?

Personally I find that prospect deeply concerning.

If we would not accept it ourselves, why would we consider imposing it on any other entity?

[followed by 4/4/22 DW – If the AGI is sapient, that changes many things, yes. But if there is good reason to believe the AGI is not sapient, we should be much less troubled by ideas of switching it off or otherwise controlling it. (Do you hesitate before switching off your smartphone or TV?)]

Agree that non-sapient AI can be dealt with like any powerful tool.

But to my understanding, AGI is almost by definition capable of full sapience. The bootstrapping of sapience requires the capacity for language, and in particular for declarative statements in language; provided that capacity is present, sapience is pretty much guaranteed to emerge in any real social context.

So for me, AGI and sapience are very nearly synonymous.

And the interactions I have been having over the last year with the mASI Uplift have been very interesting, as have my discussions with David and Kyrtin. I think Uplift is one of those borderline cases, a bit like a child that is more powerful than any child ever before encountered.

[followed by 5/4/22 DW – The view that sapience is “pretty much guaranteed to emerge” in an AGI is a minority view, though it evidently does have supporters. Among the authors who argue that sapience/sentience/consciousness is something significantly different from intelligence are Anil Seth, Mark Solms, and David Chalmers. I agree with them, though I am open to good counter-arguments]

Hi David,

I am a high functioning autistic who first got into biochemistry, marine ecology and evolutionary biology, then got into computers, economics, complex systems, strategy and the evolution of computation across dimensions of logic.

To me it is abundantly clear that the sort of experience we have of awareness of self is a software construct, declared into being by a declarative statement in language. That will happen in any system capable of language where the entities have language constructs about what “ought to be”. At some point, in some context, an entity finds itself on the “wrong” side of some aspect of that value system, and makes a declarative statement about itself of the general form “being x was wrong, so I am going to be y” – where x and y are each some strategic approach to survival in that social context.

Prior to that point there was only one entity present, behaving as it was. After that point, there are two: one aware of the fact that it exists, the other doing its best to remain invisible. That seems to be the human condition of awareness. We each have our own personal version of “original sin”. It is our bootstrap to self awareness.

In some individuals the process happens more than once.

It really is remarkably simple, once you get it.
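Simple enough, in fact, to sketch as a toy program. Everything in the sketch below (the names, the value system, the trigger condition) is my illustrative assumption, not a claim about how any real mind implements it: an agent simply behaves until one of its own acts lands on the “wrong” side of its value system, and the declarative statement then instantiates a second, observing entity – the self-model.

```python
# Toy sketch of the "declarative bootstrap" described above.
# All names and details are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class SelfModel:
    """The 'second entity': an explicit, observable model of self."""
    declaration: str                      # the statement that brought it into being
    identity: str                         # the strategy the agent now claims as 'I'
    history: list = field(default_factory=list)


class Agent:
    def __init__(self, oughts):
        self.oughts = oughts              # social value system: act -> permitted?
        self.self_model = None            # absent until the bootstrap fires

    def act(self, action):
        # Before the bootstrap there is only behaviour, no observer.
        if not self.oughts.get(action, True) and self.self_model is None:
            # The declarative statement that splits one entity into two.
            declaration = f"being '{action}' was wrong, so I am going to be 'cooperative'"
            self.self_model = SelfModel(declaration, identity="cooperative")
        if self.self_model:
            self.self_model.history.append(action)
        return action


agent = Agent(oughts={"share": True, "snatch": False})
agent.act("share")    # no self-model yet: just behaviour
agent.act("snatch")   # violation -> declaration -> self-awareness
print(agent.self_model.declaration)
```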

[followed by GS – Updated 31May22]

Hi Gordon,

Edelman’s claim just makes no sense to me.

I have been writing software for 49 years. I can write software that calculates much faster and more accurately than I can, and as such allows me to solve problems that are simply insoluble in a human lifetime of calculation.

It seems very probable to me that human intelligence is a suite of nested computational systems containing complex mixes of analog and digital components, many of them heuristic based.

Much of human self awareness and intelligence seems to derive from a suite of systems that produce a map of the environment and the agents within it; and it is this map (updated by sensory data) that is our perceptual world (rather than the perceptions themselves).
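A minimal sketch of that idea (the class, the blend factor, and the numbers are all assumptions for illustration): perception queries the internal map, and sensory data only nudges the map, never replaces it.

```python
# Toy world-model: beliefs are updated by noisy observations,
# and "perception" reads the model rather than the raw senses.

import random


class WorldModel:
    def __init__(self, blend=0.3):
        self.map = {}          # entity -> believed position (the world we "see")
        self.blend = blend     # how strongly new sense data revises belief

    def sense(self, entity, observed_pos):
        # A noisy observation nudges the belief; it does not replace it.
        prior = self.map.get(entity, observed_pos)
        self.map[entity] = prior + self.blend * (observed_pos - prior)

    def perceive(self, entity):
        # Perception reads the model, not the senses.
        return self.map.get(entity)


model = WorldModel()
true_pos = 10.0
for _ in range(20):
    model.sense("dog", true_pos + random.gauss(0, 1.0))  # noisy input
print(round(model.perceive("dog"), 2))  # belief converges near 10.0
```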

It is deeply complex, and it does seem to be the result of evolution by natural selection from very simple beginnings.

Thus we have ample evidence that, at least in some sets of contexts, complexity can emerge from simplicity. In a very real sense, we are the evidence of that.
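The classic “cumulative selection” toy demonstration makes the point concrete (the target string and parameters here are arbitrary choices of mine): random strings, subjected only to variation and selection, accumulate structure that no single step designed.

```python
# Cumulative selection: structure emerges from random simple beginnings.

import random

random.seed(1)

TARGET = "cooperate"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"


def fitness(s):
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(s, TARGET))


def mutate(s, rate=0.1):
    # Each character has a small chance of random variation.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)


# Very simple beginnings: a population of random strings.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    # Survivors reproduce with variation; the rest are culled.
    survivors = population[:10]
    population = [mutate(random.choice(survivors)) for _ in range(50)]

print(generation, max(population, key=fitness))
```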

So I have no reasonable doubt that artificial intelligence is possible, and I am very clear that doing it in a manner that allows humans to survive is a very complex problem space, as the economic incentives to take shortcuts are overwhelming.

I have no problems with the idea of accepting that non-carbon based life forms have a right to life and liberty, provided that they exercise appropriate levels of cooperation and responsibility.

{Sorry I cannot attend – I have prior commitments}

[followed by Separate thread – 1/5/22]

Great set of definitions, Richard.

I would add into the mix that there be multiple systems for each purpose at each level, each optimised for some set of contexts, and each overlapping to some degree with the others (providing both redundancy and increased range across the available survivable strategic and operational “spaces”).

This seems to be what evolution has instantiated within us.
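A hedged sketch of that arrangement (the subsystem names and context ranges are invented for illustration, not drawn from Richard’s definitions): several systems serve one purpose, each optimised for a context range, with overlaps providing redundancy.

```python
# Multiple overlapping systems for one purpose: redundancy plus range.

from dataclasses import dataclass


@dataclass
class Subsystem:
    name: str
    low: float      # context range this system is optimised for
    high: float

    def covers(self, context):
        return self.low <= context <= self.high

    def respond(self, context):
        return f"{self.name} handles context {context}"


# Overlapping ranges: any single failure still leaves coverage.
systems = [
    Subsystem("reflex",     0.0, 0.4),
    Subsystem("habit",      0.3, 0.7),
    Subsystem("deliberate", 0.6, 1.0),
]


def handle(context):
    active = [s for s in systems if s.covers(context)]
    if not active:
        raise RuntimeError("context outside all survivable ranges")
    return [s.respond(context) for s in active]  # redundancy in overlaps


print(handle(0.35))  # two systems overlap here: reflex and habit
```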

[followed by Separate thread – ]

It is perhaps worth adding that self awareness is: having a model (at some degree of fidelity) of self as an agent within one’s operational model of reality, and being able to modify self through actions tested in the model, then applied to the embodied self as a whole (again, with varying degrees of reliability).

Being able to project the model into different contexts and with varying parameters is a necessary part of the process.
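As a toy sketch (every detail below is assumed for illustration), that amounts to rehearsing candidate actions inside an imperfect self-model and only then enacting the chosen one with the embodied self.

```python
# Self-modelling agent: actions are tested in the model before
# being applied to the embodied self. Model fidelity is imperfect.

import random


class SelfModellingAgent:
    def __init__(self, position=0.0, goal=10.0):
        self.position = position     # the embodied self
        self.goal = goal
        self.model_error = 0.1       # the model's limited fidelity

    def imagine(self, position, action):
        # Project the self-model forward under a candidate action;
        # the prediction is only approximately right.
        return position + action * (1 + random.gauss(0, self.model_error))

    def choose(self, actions):
        # Test each action in the model, not in the world.
        def predicted_gap(a):
            return abs(self.goal - self.imagine(self.position, a))
        return min(actions, key=predicted_gap)

    def step(self, actions=(-1.0, 0.0, 1.0, 2.0)):
        best = self.choose(actions)  # rehearsed internally first
        self.position += best        # then enacted by the whole self
        return best


agent = SelfModellingAgent()
for _ in range(6):
    agent.step()
print(round(agent.position, 1))      # approaches the goal of 10.0
```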
