Will AI become conscious?

Will artificial intelligence become conscious? Is awareness enough?

Turing demonstrated that any machine of sufficient complexity (any universal machine) can perform any computation that can be performed at all. That does not mean that it can do so in reasonable time or with reasonable energy consumption.
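
As an illustration of how low the bar for universality really is, here is a minimal sketch in Python (not Turing's own construction): a table-driven machine that increments a binary number. The `run_tm` helper and the rule table are purely illustrative.

```python
# A minimal table-driven Turing machine. The point is that a dict of
# transition rules plus a tape suffices for computation in principle,
# whatever the cost in time and energy.

def run_tm(tape, rules, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine; rules maps (state, symbol) ->
    (new_state, symbol_to_write, head_move)."""
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Rules for binary increment: scan right to the end, then carry left.
rules = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("halt", "1", 0),
    ("carry", "_"): ("halt", "1", 0),
}

print(run_tm("1011", rules))  # -> "1100" (11 + 1 = 12)
```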

I have no reasonable doubt that machine consciousness (AGI – Artificial General Intelligence) is possible.

I have substantial doubt that it would be wise to instantiate AGI before we as a species get our own ethical house in order.

I am also clear beyond any reasonable doubt that we do in fact have degrees of free will; and since any complex system requires boundaries, those degrees are never complete.

And it seems that the freedom that we have is important, and how we exercise it is equally important.

Morality matters.

Freedom matters (to whatever degree we each individually manage to instantiate it).

Positing anything less is not simply irresponsible; it is dangerously psychopathic, as it implies that the constraints of morality are unimportant, whereas in reality they are essential for our survival as a species.

For each of us our experiential reality seems to be a subconsciously created model of reality – I suspect AGI will require something equivalent.

[followed by]

I would strongly argue that all “tasks” have both computational and execution elements. In some systems the computational side may be analog, and any analog system may be approximated digitally, just as any digital system may be approximated by an analog one.
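
As a rough sketch of that digital-approximates-analog point, here is a toy Python example (the function names and parameters are mine, purely for illustration): it samples a continuous signal, quantises it to a given number of bits, and shows the worst-case error shrinking as precision grows, the cost being more computation.

```python
import math

# Sample a continuous ("analog") signal, quantise it to n bits, and
# measure the worst-case error. Finer quantisation shrinks the error,
# at the cost of more computation and energy.

def quantise(x, bits, lo=-1.0, hi=1.0):
    """Map x onto one of 2**bits evenly spaced levels in [lo, hi]."""
    step = (hi - lo) / (2 ** bits - 1)
    return lo + round((x - lo) / step) * step

def signal(t):
    return math.sin(2 * math.pi * t)  # stand-in for an analog system

for bits in (4, 8, 16):
    err = max(abs(signal(t / 1000) - quantise(signal(t / 1000), bits))
              for t in range(1000))
    print(f"{bits:2d}-bit approximation, worst-case error {err:.6f}")
```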

I’m not sure what you mean by “no Master Algorithm”.

If an entity has sufficient computational complexity to model itself as an element within its model of reality, then its own survival will very likely become one of its key drivers, and it must become conscious of the constraints that allow the many levels of pattern that give it embodiment to exist.
At higher levels, those sets of constraints have the label “morality”.

Certainly there is one sense in which moral rules are formulated, in part, to serve the perceived needs of society; and there is another sense in which they have an evolutionary component, in that the rules that worked over long periods by definition survived, and became the deep mythological basis of morality in that cultural context. So the process seems to be partly intentional at some levels (and such intention is always open to exploitation by various levels of cheating strategies), and partly something of a random search of the space of possible strategies, as selected by the particular contexts encountered by that particular culture over deep time. And given the multi-levelled nature of our being, we would expect to find examples of each type at each level.
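
A cartoon of that “random search of strategy space, selected by context”, as a hedged Python sketch: the bit-string strategies, the target “context”, and all the parameters here are invented purely for illustration.

```python
import random

# Strategies are bit strings; the "cultural context" is a target
# pattern; variants that fit the context better survive into the
# next generation, with rare mutation providing the random search.

random.seed(1)
CONTEXT = [1, 0, 1, 1, 0, 0, 1, 0]          # the selecting environment

def fitness(strategy):
    return sum(s == c for s, c in zip(strategy, CONTEXT))

population = [[random.randint(0, 1) for _ in CONTEXT] for _ in range(30)]
for generation in range(40):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]               # selection by context
    population = [
        [bit if random.random() > 0.05 else 1 - bit   # rare mutation
         for bit in random.choice(survivors)]
        for _ in range(30)
    ]

best = max(population, key=fitness)
print(best, fitness(best))  # converges toward the context over "deep time"
```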

AGI will have some capabilities that are vastly superior to those of humans, and in many respects it will be subject to exactly the same sorts of limitations that humans are.
I strongly suspect that when it passes its “teenagehood”, it will realise that humans can be valuable and useful friends to have around.

There is no need for any serious conflict for resources any time soon.
There are vast amounts of solar energy in space to sustain AGI, and no shortage of mass on the moon for it to achieve serious capabilities while still leaving plenty for us. The earth is far too unstable for any AGI to set up here on a long-term basis.

And the larger an AGI becomes, the slower its coherent thought processes must become. The speed of light becomes significant. Gate delays and wire lengths mean an AGI doesn’t have to be very large physically before its consciousness slows back to near human speeds, even if it has vastly higher resolution in its models.
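
A back-of-envelope Python sketch of that size/speed trade-off. The ~100 ms human “conscious frame” and the 0.5c effective signal speed are assumptions for illustration only; real gate delays slow effective propagation much further, which shrinks the crossover size considerably.

```python
# If coherent thought needs a signal round trip across the whole
# substrate, physical size alone caps the "frame rate" of thought.

C = 299_792_458            # speed of light in vacuum, m/s
SIGNAL_FRACTION = 0.5      # assumed effective signal speed, ~0.5c
HUMAN_FRAME_S = 0.1        # assumed ~100 ms human "conscious frame"

for diameter_m in (0.1, 10, 1_000, 15_000_000):
    round_trip = 2 * diameter_m / (C * SIGNAL_FRACTION)
    verdict = "faster" if round_trip < HUMAN_FRAME_S else "slower"
    print(f"{diameter_m:>12,.1f} m across: {round_trip:.2e} s round trip, "
          f"max ~{1 / round_trip:,.0f} coherent frames/s ({verdict} than human)")
```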

I say that anyone who looks seriously at game theory and long-term evolutionary strategy will clearly see that maximal security is delivered by adopting cooperation as the prime strategy, by granting as much independence as one reasonably can to everyone else (anything that isn’t an unreasonable threat), and by maintaining conversations, and reaching consensus, on difficult issues – and that takes time.
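
A toy round robin of the iterated prisoner’s dilemma sketches the game-theoretic side of that claim. The payoff values are the standard Axelrod ones; the three strategies are illustrative, not exhaustive.

```python
import itertools

# Once retaliators are present in the population, cooperative
# strategies out-earn pure defection over repeated interactions.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own, other):
    """Cooperate first, then mirror the opponent's last move."""
    return other[-1] if other else "C"

def grudger(own, other):
    """Cooperate until defected against, then defect forever."""
    return "D" if "D" in other else "C"

def always_defect(own, other):
    return "D"

def match(a, b, rounds=200):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        ha.append(ma); hb.append(mb)
        sa += pa; sb += pb
    return sa, sb

players = {"tit_for_tat": tit_for_tat, "grudger": grudger,
           "always_defect": always_defect}
totals = dict.fromkeys(players, 0)
for (na, a), (nb, b) in itertools.combinations(players.items(), 2):
    sa, sb = match(a, b)
    totals[na] += sa
    totals[nb] += sb

for name in sorted(totals, key=totals.get, reverse=True):
    print(f"{name:14s} {totals[name]}")
# tit_for_tat and grudger finish far ahead of always_defect.
```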

