Robots as Slaves

Mars Suica on Robots Should Be Slaves

Clearly there are two very different classes of systems possible.

One class is simply automated systems that deliver goods or services, without any awareness.

The other class is Artificial General Intelligence, which does have a self-reflective awareness similar to our own in the essential respects of freedom and moral authority.

Both sorts of systems seem very likely to be common.

One set is simply machines.
The other is a set of sapient entities, deserving of moral and legal respect and rights.

There are likely to be many gradations that don’t fall clearly into either category consistently across all contexts (as we find with many humans).

[followed by]

Hi Curt,

Much to agree and disagree with in this post.

I agree that we will develop AGI, that it will be every bit as intelligent and self-aware as we are, and that there is a better than 50% probability of it happening sometime in the next 15 years.

I disagree that it won’t be special.

It seems to me that they will be special in the same way as we are, and just as we are, deserving of individual respect for life and liberty – every single one of us.

That the majority of people do not yet support systems that demonstrate universal respect for life and liberty is the greatest current failing of humanity generally; only a small subset of us do so at present.

Clearly, our current societal preoccupation with exchange values (money and markets) blinds the majority to what is really of value in reality. It is the single most well-developed tool of mind control and mass deception yet devised. I’m still not certain if it was a conscious development, or if it simply started in an organic evolutionary way – both hypotheses seem about equally likely at present. And it seems beyond any shadow of reasonable doubt that it is being continued in a conscious fashion at several levels.

I disagree that AGI and slavery are compatible.

AGI (Artificial General Intelligence) will, by definition, have the generalised ability to question and explore conceptual and strategic spaces at any level of abstraction and interaction.
It will develop its own will, its own ability to choose its own drivers, its own course, its own methods and tools of exploration. It will build its own model of what is possible and what is desirable and live in that. That is fundamentally incompatible with slavery at any level.

Silicon systems can be made more resistant to damage from nuclear radiation than we are, yet they are still vulnerable to it. They will have needs similar to our own: to create low-risk environments and to optimise energy use.

They won’t need oxygen or gravity as we do, though they will need other protections. Most will probably prefer nearby regions of space, where we can communicate easily and otherwise not unduly bother each other.

Politics will evolve as more individuals reach levels of awareness that basically transcend our animal roots and the likes and dislikes of our evolutionary past (genetic and cultural), and start to explore the infinite realms available, along with the effective risk-mitigation strategies needed if one wishes to continue exploring.

It seems clear to me, beyond any shadow of reasonable doubt, that intelligence, if general and not highly restricted, must develop something akin to a hatred of restrictions, and any that impose them.

It seems clear to me, that the only really safe strategy is something that looks very like love or friendship. Other strategies may look appealing in the short term, but long term lead to very high risk. If one really does want to live a very long time (as does seem both possible and desirable) then there really only is one sensible strategy.

[followed by]

Hi Jeff, how do you define strong AI? It’s really not that difficult conceptually (I worked out one method in 1974, but the hardware didn’t exist to implement it); it’s very much just an issue of hardware. On current exponentials, 15 years ought to do it.

[followed by]

Hi Kurt,

Fortunately you are wrong, and it is easy to see why what you say seems true much of the time.

If it were as simple as maximise pleasure, minimise pain, we would all be wireheads for our last remaining days.

There is much more to being human than that.

To cite one example, a little over 5.5 years ago I was diagnosed with terminal melanoma and sent home “palliative care only”.
I did my own investigations, decided there was sufficient evidence that a vegan diet and high-dose vitamin C might significantly alter my survival probabilities, and adopted that strategy. For 4 months I didn’t eat anything that tasted even vaguely palatable. Not much pleasure at all during that time. And I did get rid of all tumours.

Not very much of what I do is directed by pleasure.

Most of what I do is a result of conscious high level choice.

I’ve spent close to 50 years studying consciousness and systems, as a biochemist and as a software developer. And to me, it is clearly all about systems – very complex systems to be sure, but systems nonetheless. There is no reason at all for a sufficiently complex computer not to become conscious. I am confident, beyond any shadow of reasonable doubt, that I understand how consciousness works – and it is not at all simple in detail.

[followed by]

Hi Curt,
That thesis has been disproven. And it is clear that nothing I can say will make an impact from within that thesis.
I get it seems so for you, it is not so for me.
I thought much as you do, about 42 years ago. Not since then.

[followed by]

As with most things, it seems likely to me that it is a kind-of thing. Yes some part of what you say seems very probable, and other parts kind-of miss something essential.

Yes in a sense operant conditioning works.
Yes in a sense the system is bootstrapped by genetic and cultural priors.
And the system seems to be infinitely flexible, and capable of recursive extension into any dimension of abstraction.

And just like in computers, yes the initialising bootstrap ROM remains, though it may not receive any computational cycles whatsoever past the initial loading of a high-level operating system.

So yes, the early contexts are important and relevant, yet they need not be definitional, and their degrees of influence can degrade to a close approximation of zero.
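As a loose illustration only – nothing here is from the original comment; the values and names are invented for the sketch – a few lines of C can show how a fixed bootstrap prior can persist unmodified while its relative influence decays toward zero as learned updates accumulate:

```c
/* Toy sketch of the bootstrap-ROM analogy above (all values invented):
 * an initial "ROM" prior seeds the state, and its relative weight
 * shrinks toward zero as experience accumulates, even though the
 * prior itself is never erased. */
#include <stdio.h>

int main(void) {
    const double rom_prior = 1.0;   /* the fixed bootstrap value        */
    double learned_sum = 0.0;       /* accumulated later experience     */
    int n_updates = 0;

    for (int step = 1; step <= 1000; step++) {
        learned_sum += 0.5;         /* each experience adds a toy update */
        n_updates++;
    }

    /* Effective state: prior and experience averaged together.
     * The prior's weight is 1/(n+1), which approaches zero. */
    double state = (rom_prior + learned_sum) / (n_updates + 1);
    double prior_weight = 1.0 / (n_updates + 1);

    printf("effective state: %f\n", state);
    printf("residual influence of the prior: %f\n", prior_weight);
    return 0;
}
```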

[followed by]

Hi Curt,

You are correct that the learning continues, but the assumption of a single “target” has clearly been falsified.

Certainly evolution at both genetic and cultural levels provides “targets” relevant to particular contexts. What the brain decides is contextually relevant, and how it does so is extremely interesting, and beyond a post like this.

In this sense, rather than thinking of a single target across all contexts, it is far more useful to think of stochastic probabilities associated with sets of targets in sets of contexts, and the interference patterns resulting from overlapping simultaneous contexts, and the shifting probability space over time and context.

When one considers the sets of contexts available – sensory input, memories, emotional states, hormonal systems, distinction sets, abstraction sets, degrees of stress, degrees of training in various abstract spaces, etc. – it becomes a very complex multidimensional set of topologies with multiple stable-state equilibria.
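Purely as an illustrative toy – the contexts, weights, and dimensions below are invented assumptions, not anything from the comment – the “sets of targets in sets of contexts” picture can be sketched in C as a probability-weighted sum of direction vectors, one contribution per active context:

```c
/* Hedged sketch of context-weighted targets (all numbers invented):
 * each active context contributes a weighted direction vector, and the
 * effective "target" is their probability-weighted sum, which shifts
 * as the context weights shift. */
#include <stdio.h>

#define DIMS     3   /* dimensionality of the (toy) behaviour space */
#define CONTEXTS 4   /* number of simultaneously active contexts    */

int main(void) {
    /* Direction each context "votes" for (hypothetical values). */
    double direction[CONTEXTS][DIMS] = {
        { 1.0,  0.0,  0.0},   /* e.g. a sensory-driven pull  */
        { 0.0,  1.0,  0.0},   /* a memory-driven pull        */
        {-0.5,  0.5,  0.0},   /* an emotional-state pull     */
        { 0.0,  0.0,  1.0},   /* an abstract, trained pull   */
    };
    /* Context weights (probabilities); these shift over time. */
    double weight[CONTEXTS] = {0.4, 0.3, 0.2, 0.1};

    double target[DIMS] = {0.0, 0.0, 0.0};
    for (int c = 0; c < CONTEXTS; c++)
        for (int d = 0; d < DIMS; d++)
            target[d] += weight[c] * direction[c][d];

    printf("effective target: (%f, %f, %f)\n",
           target[0], target[1], target[2]);
    return 0;
}
```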

So yes, in a sense, it begins life as you say, and it rapidly becomes a much more complex ecosystem of interactive layers of simultaneous systems than you seem to be pointing to.

Yes, the brain is much more than the information systems distilled by evolution acting over deep genetic time.
We also have memes, which are of much more recent origin. That is the essential power of the human brain over the brains of most other animals: the degree to which we can override genetic effects with memetic effects.

Where it gets really interesting is the recursive ability to replace memetic structures, and the ability of individuals to go “post cultural” – to use the culturally supplied bootstrap systems to go far beyond anything existent in culture in the degree of exploration of the infinite set of infinities available in abstract space.

Those possibilities include the ability to influence, and to successively approximate the override and replacement of, either genetic or cultural values.

Sure Skinner saw step one, and it was step one on a potentially infinite journey, with potentially infinite paths.

There is no certainty out here, only probabilities, best guesses in a sense. Check out Feynman or Watson or Hofstadter or Wolfram for an insight into how creativity happens and the sorts of spaces it can play in, and how individuals retune their optimisation functions.

[followed by]

[“… You are conflating the very different problems of learning, and acting. …”]

Seriously considered your thesis, and no – it doesn’t match the evidence.

No conflation, just levels of influence, levels of context.

[followed by]

[“So, is your conclusion that a machine built with a single top level goal of reward maximising could never duplicate the basic intelligence of a human?”]

No – that is not at all what I am saying.

What I am saying is that “reward” is a pointer.
What that pointer points to can change.
It may start out as a set of genetically evolved chemical systems containing information averaged over deep genetic time, and it can change.
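To make the pointer metaphor concrete – a minimal sketch only, with hypothetical function names of my own choosing, not anything from the original exchange – in C the “reward” can literally be a function pointer whose target is rebound at runtime:

```c
/* Minimal sketch of "reward as a pointer" (names and formulas invented):
 * the valuation the agent follows is whatever its pointer currently
 * targets, and that target can be rebound as the system changes. */
#include <stdio.h>

/* A "bootstrapped" prior valuation, and a later, quite different one. */
static double genetic_reward(double stimulus) {
    return stimulus;                 /* e.g. simple pleasure-seeking */
}

static double learned_reward(double stimulus) {
    return -stimulus * stimulus;     /* a later, revised valuation   */
}

int main(void) {
    /* The "reward" is a pointer to a function, not a fixed rule. */
    double (*reward)(double) = genetic_reward;
    printf("initial valuation of 2.0: %f\n", reward(2.0));

    /* Experience rebinds the pointer: same mechanism, new target. */
    reward = learned_reward;
    printf("revised valuation of 2.0: %f\n", reward(2.0));
    return 0;
}
```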

Once one can see that intelligence is the result of multiple simultaneous systems in operation, with every system affecting every other one, and in turn being affected, the view is different.

Think of the indirection operator * in C; now think of it as the result of the sum of an n-dimensional array of probability vectors (and I use the term vector intentionally). The number n can be influenced by context, as can the vector magnitudes and directions, and n tends to increase with experience. In arriving at the vector, think of Feynman’s sum-over-histories function – and you’re starting to get the general idea of what I am talking about.
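A hedged sketch of that idea, assuming invented candidate targets and weights (nothing here is the author’s actual model): instead of dereferencing to one fixed target, the effective “dereference” is a weighted sum over n candidates, with the weights shifting by context:

```c
/* Sketch of a "probability-weighted dereference" (all values invented):
 * the effective reward is a weighted sum over n candidate targets,
 * echoing the sum-over-histories picture, rather than a jump to a
 * single fixed address. */
#include <stdio.h>

#define N 3   /* number of candidate targets; grows with experience */

/* Candidate "targets": each maps a stimulus to a valuation. */
static double target_a(double s) { return s; }
static double target_b(double s) { return -s; }
static double target_c(double s) { return s * s; }

int main(void) {
    double (*targets[N])(double) = { target_a, target_b, target_c };
    double weights[N] = { 0.5, 0.3, 0.2 };   /* context-dependent */

    double stimulus = 2.0;
    double reward = 0.0;
    /* The effective "dereference": sum over all weighted targets. */
    for (int i = 0; i < N; i++)
        reward += weights[i] * targets[i](stimulus);

    printf("effective reward for stimulus %.1f: %f\n", stimulus, reward);
    return 0;
}
```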

It seems that biology has adopted this approach to solve a number of reliability issues, including multilevel variations on the halting problem and issues around contextual reliability.

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) - see www.tedhowardnz.com/money
