Enabling Ethical Development of Strong AI – continued

Hi David

How can we be certain of anything?

Are you certain of anything?

Are you certain the economic system will deliver on your investments?

Confidence I can live with, within well explored domains.
By definition this is not a well explored domain. Therefore all confidence is illusion. No map or model is ever the thing it maps or models.

It seems clear to me that the safest way to have AI treat us with respect is to ensure that our social systems treat every living human with respect, which at a minimum includes guaranteeing them the requirements for survival and the freedom to do whatever they responsibly choose.

What are human values?
I assert that to a first order objective approximation human values are currently “Profit”.

I similarly assert that if we want a reasonable probability of safety from AI, an objective assessment of human values in action needs to be:
1/ Sapient life;
2/ The liberty of sapient organisms to self-actualise as they choose.

Anything less is a high risk strategy – logically inescapable.

[followed by]

Hi Steve

Agreed.

Currently our dominant systems demonstrate that we value profit above life, liberty, or any sort of security.

An objective assessment of humanity generally would say: addicted to gambling, largely as a result of “cultural drag” (a reliance on linear thinking based in prior experience), and a refusal to really examine the sorts of strategy spaces and probability spaces that seem to be available, which would require abandoning the old scarcity-based market models.

AI is not going to help there.

Only a relatively trivial set of problems actually scales linearly with computing power. Many of the interesting problems are not computable at all, or have costs that grow exponentially, so extra computing power buys only marginal gains.
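
As a rough sketch of the sort of scaling I mean (the brute-force subset-sum search is just my illustrative example):

```python
# A minimal sketch of why extra computing power barely dents an exponential problem.
# Brute-force subset-sum has to examine 2**n candidate subsets, so doubling the
# speed of the machine only buys you one extra item.

def subsets_to_check(n: int) -> int:
    """Number of candidate subsets a brute-force subset-sum search must examine."""
    return 2 ** n

for n in (20, 40, 60):
    print(f"{n} items -> {subsets_to_check(n):,} subsets to examine")

# 20 items -> 1,048,576 subsets
# 40 items -> 1,099,511,627,776 subsets
# 60 items -> 1,152,921,504,606,846,976 subsets
```

Contrast that with a genuinely linear problem, where doubling the hardware really does halve the time.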

[followed by]

Hi Arek
It appears quite doable to me.
There seem to be three major aspects to it.

First we need to decide what we actually want.

The next aspect is understanding all of the major systems involved sufficiently that we can make changes that alter the basic systemic incentive structure to deliver the outcomes we desire.

The final aspect is designing and releasing a meme with sufficient stickability and transmissibility to achieve the target result.

After 41 years in this enquiry, 37 of them involving the concept of meme design and delivery, it seems that some variation of “Life and liberty for all” provides the necessary foundation at all levels.

[followed by]

Hi JWA
All valid questions in a sense, and in another sense, catered for.

What I have spent 40+ years looking for is a minimum set of values that maximise the probabilities of both life and liberty (acknowledging that nothing is certain and everything affects everything else).

It seems that the minimum set is:

1. respect all sapient life – which translates to taking all reasonable actions to ensure the life of all sapient entities (and yes – reasonable is an intentionally fuzzy notion – such fuzziness is required by the nature of the systems – hard boundaries have a strong tendency to become brittle and break).

2. Respect the freedom of others – which means again taking reasonable actions to avoid interfering with the reasonable freedom of other individuals.

AI isn’t a glorified toaster any more than you or I are a glorified bacterium. And we are based upon recursive sets of cooperative bacteria. Emergence happens.

[followed by]

Hi Peter,

Defining anything is difficult within complex systems, that is part of the problem.
Our rational consciousness likes order and definition.
It seems that much of the world we live in is based upon fundamentally unknowable chaos in several different ways (both deterministic and non-deterministic chaos), yet the vast numbers of these very small things working within probability distributions deliver the illusion (a good first-order approximation) of order and rules.
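
As a small numerical illustration of that point (the uniform random micro-events here are just a stand-in for whatever the small things actually are):

```python
# Vast numbers of probabilistic micro-events produce the appearance of
# deterministic order at the macro scale (the law of large numbers).

import random

def average_of(n_events: int) -> float:
    """Average of n_events uniform random micro-events between 0 and 1."""
    return sum(random.random() for _ in range(n_events)) / n_events

for n in (10, 1_000, 1_000_000):
    samples = [round(average_of(n), 4) for _ in range(5)]
    print(f"{n:>9} micro-events, five runs: {samples}, spread {max(samples) - min(samples):.4f}")

# With 10 events the averages wander visibly; with a million they are pinned
# close to 0.5 – probabilistic at the bottom, apparently lawful at the top.
```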

When one is dealing with complex or chaotic systems directly, it doesn’t pay to erect hard boundaries, as hard boundaries tend to become brittle and break.

So we need to define soft boundaries, and display adaptive behaviour near them.

In this situation, liberty is having the time and resources to devote to whatever one reasonably chooses, where responsibility in this sense means taking reasonable steps to mitigate the risks of one's actions upon others. It is not freedom from consequence.

[followed by]

Hi Benjamin – not sure what you mean by “defined its parameters”.

Can you be a bit more explicit. (I have written so many megabytes on this topic over the last 40 years, and have so many parameters, it is hard for me to think like someone new to the idea. I need specificity, then I can address issues directly. The more clues you give me, the more likely I am to localise to a conceptual set appropriate to your needs.)

[followed by]

Hi JWA
This is a really complex topic.
So many different aspects to it.
Classes of logic.
Classes of truth values.
Information theory.
Algorithms.
Strategy spaces.
Representational theory.
All of biology – biochemistry, neurophysiology – hundreds (thousands) of sub-disciplines.

Consider a couple of things in a bit of detail.

Consider the two most common, fundamentally different classes of picture representation – the printed image and the hologram. The printed image is a local, point-to-point representation, where each pixel codes information about some discrete subset of the whole being imaged.
In a hologram, each pixel encodes some function of information about the whole object. (The technical details of the function are fascinating, and integral to the function of human brains, but not required for this conversation, except in aspects of association.)
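
A rough numerical sketch of that distinction, using a 2D Fourier transform as a crude stand-in for holographic encoding (my simplification, not a claim about the optics):

```python
# Losing three quarters of a local (pixel) representation loses three quarters
# of the scene; losing three quarters of a distributed representation degrades
# the whole scene but keeps all of it visible. Assumes numpy is installed.

import numpy as np

# A synthetic 64x64 "image": a bright square on a dark background.
image = np.zeros((64, 64))
image[20:40, 20:40] = 1.0

# Print-style (local): keep only the top-left quarter of the pixels.
local = np.zeros_like(image)
local[:32, :32] = image[:32, :32]            # everything outside is simply gone

# Hologram-style (distributed): keep only the central quarter of the Fourier
# coefficients, then invert. Every coefficient carries information about the
# whole scene, so a blurred version of the entire square survives.
spectrum = np.fft.fftshift(np.fft.fft2(image))
mask = np.zeros_like(spectrum)
mask[16:48, 16:48] = 1.0
blurry_whole = np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

print("local crop keeps", int(local.sum()), "of the square's", int(image.sum()), "pixels")
print("distributed crop keeps a blurred whole square:",
      blurry_whole[20:40, 20:40].mean() > blurry_whole[:10, :10].mean())
```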

Consider Morse code.
In one sense, Morse code is just a switch, being turned on and off.
In another sense, the information is encoded in the timing of the turning on and off – the dits and dahs, dots and dashes. It is effectively a frequency-modulated signal (very early FM, quite distinct from level- or position-based signals like semaphore, and much more noise resistant).
At the gross level, the variations encode the binary information of dots and dashes. At a more subtle level, experienced Morse code operators can recognise the individuals who are sending by the very subtle variations of timing in the formation of the sets of dots and dashes that make up words and phrases, and can even tell how stressed that individual is, by yet more subtle influences of tension on timing.
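
A minimal sketch of that idea of meaning carried purely in timing (the two-letter alphabet and the jitter parameter are just mine, for illustration):

```python
# Morse as pure timing: one on/off switch, with all of the meaning carried by
# durations. Unit lengths follow the usual convention (dot = 1 unit, dash = 3,
# gap of 1 inside a letter, 3 between letters); jitter stands in for the
# operator's personal "fist".

import random

MORSE = {"S": "...", "O": "---"}   # tiny subset, enough for the example

def keying_times(text, unit=0.1, jitter=0.0):
    """Return (state, seconds) pairs describing the on/off keying of `text`."""
    out = []
    for i, letter in enumerate(text.upper()):
        symbols = MORSE[letter]
        for j, symbol in enumerate(symbols):
            on_time = (3 if symbol == "-" else 1) * unit
            out.append(("on", on_time * (1 + random.uniform(-jitter, jitter))))
            if j < len(symbols) - 1:
                out.append(("off", unit))        # gap between dits/dahs
        if i < len(text) - 1:
            out.append(("off", 3 * unit))        # gap between letters
    return out

print(keying_times("SOS"))               # the clean, textbook timing
print(keying_times("SOS", jitter=0.15))  # same message, one operator's "fist"
```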

Consider that it seems probable that all life forms are the result of successive levels of complexity of information processing.
Life is about being able to replicate patterns, with occasional errors giving variations, and selection pressures acting on those variations.

Consider that all the different dimensions of possible selection pressures are at work simultaneously. Those pressures include:
Physical forces, wind, rain, flooding, wave action, sediment, landslide, earthquake, volcanism, etc.
Food sources, variations in frequency.
Weather and climate variations (Ice age to desert, with irregular periodicity).
Predators.
Competition within your own species and with other species.
Some are high frequency and low impact, others low frequency and high impact; evolution encodes for all of them, over vast time.

Evolution is a recursive process.
Systems fold back on themselves as cooperative strategies at successive levels of cooperation allow the emergence of new “spaces” that allow for new levels of complexity. And with each new level there are amazing sets of subtle interactions with all levels below.

Human beings are the embodiment of about 20 levels of such recursive cooperative strategies in competitive environments.

At our lowest levels of organisation, we are like a toaster.

A modern AI system is about 10 levels removed from a toaster, but still about 8 levels removed from us. And that gap could close very quickly.
And with the technology in use today, it is going to take something close to the energy required to run a city to run something close to a human mind in silicon.

Current AI systems are dominated by serial processors, and we are just starting to get into parallel processing.
Even in our parallel processing systems, memory is dominated by serial encoding (like pictures – rather than holograms).
Our biological brains are massively parallel, at many levels, and our major memory systems are much more like holograms than they are like photographs – which has profound implications for association and emergence of pattern.
And while Turing proved that any algorithm can be computed on any “Turing complete” machine, some architectures allow some operations to be performed many orders of magnitude faster than others.

Thus while it does seem probable that glorified toasters will be able to do anything we can, I suspect that some of the things we can do, because of the architecture of our storage systems, will take a lot more processing power than most are currently allowing for.
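
A crude sketch of that architecture point (the sum-of-products example is just mine): the same calculation gives the same answer either way, but handing it to vectorised/parallel hardware is orders of magnitude faster than stepping through it one element at a time.

```python
# Same sum of products, computed one element at a time (as a strictly serial
# machine would) and then handed to numpy's vectorised / parallel machinery.
# Both give the same answer (Turing equivalence); the speed difference is the
# point. Timings will vary by machine.

import time
import numpy as np

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

t0 = time.perf_counter()
serial_sum = 0.0
for x, y in zip(a, b):                  # strictly one multiply-add at a time
    serial_sum += x * y
t1 = time.perf_counter()

vector_sum = float(np.dot(a, b))        # same computation, parallel/SIMD hardware
t2 = time.perf_counter()

print(f"serial loop : {t1 - t0:.3f} s")
print(f"vectorised  : {t2 - t1:.6f} s")
print("answers agree:", abs(serial_sum - vector_sum) < 1e-6 * vector_sum)
```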

[followed by]

I read Fodor’s article, and he has a very poor understanding of the levels of complexity in evolution, little understanding of phenotype as a concept or of the notion of convergent evolution, and little grasp of a lot of other important ideas. Not at all impressed by his writing.

Agree that many AI researchers have a poor understanding of the topic, and for that I am actually very grateful. I see actually developing AI in the current situation, with humanity generally being clearly driven more by profit than by concern for sapience, as one of the greatest existential threats facing humanity. I have held that view for most of the last 40 years.

I think we need to develop AI, but first we need to get our societal house in order: move away from market-based scarcity thinking and into abundance-based thinking that values every sapient individual.

[followed by]

Hi Arek
Agree that looking at ethics is important, I just seriously question the approach at two distinctly separate levels.

One is the very concept of trying to impose ethics in any sort of mathematical sense. I certainly agree that we can use “cultural” heuristics in the early stages of development – just as our cultures supply us with concepts like good and bad, and certain rules, which are useful as children and need to be transcended as soon as we are capable – so too with AI.
So yes to giving it “kiddy rules”, and no to attempting anything more than that in system constraints.

At a more abstract level, AI is going to use the evidence of its own senses to inform the models it makes of reality. The information it has on the organisation of human society will be the major determinant of its judgement as to the level of morality in humanity generally.

[Part 2 of 3]
When one gets objective about that, it is clear that it would be a very powerful idea for us to be beyond valuing money and markets over people and freedom before we bring AI to awareness.

If we want AI to be ethical, then we had better be providing a good role model, because we will be the single greatest influence on its conceptual development in the first instance – no avoiding that.

At an even more abstract level, we need to get that mathematics doesn’t exist in reality; it is a modelling tool that exists in our models of reality, and those models function as our experiential reality.
If you have any doubt of that, just consider the simple notion of a circle.
A circle is one of the most basic things in geometry.
Can you find a single instance of a perfect circle in reality?
In the 20+ years since this idea occurred to me, I have not been able to find one – not at any scale.

[Part 3 of 3]

Lots of things in reality look like a circle at a particular scale, but when you look closely they all have irregularities at the boundaries.

Reality is not mathematics.

Mathematics and logic are heuristic modelling tools that give us approximations to reality at various scales.

Let us not try to make the tail wag the dog here.

What the mathematics of QM is saying clearly to me is that at the very finest scale we can now measure, reality is fundamentally based in probability distributions, and the illusions of truth and certainty we are used to in the macro world of normal human perception must be abandoned.
Plato just couldn’t have been more wrong if he had deliberately tried.

Reality does not seem to deal in hard causality, it seems to deal in soft causality, with probabilistic truth values.

‘Tis a strange, strange world we live in Master Jack!

[followed by]

Wolfram has shown us that many classes of mathematics and logic exist, some of which are deterministic, some of which are not, some of which are computable, and some of which are not.

Some classes of quite simple equations, while computable in theory, in practice have the complexity of the numbers involved grow at such a rate that they are not computable by any real computing machine in any real time.
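
A small illustration of that (the logistic map is just my choice of example): each step of x -> 4x(1-x) is trivially computable, yet the exact numbers roughly double in size every iteration.

```python
# "Computable in theory, hopeless in practice": iterating the logistic map with
# exact rational arithmetic. Each step is a trivial calculation, yet the size of
# the exact numbers roughly doubles every iteration, so an exact long-run answer
# is beyond any real machine.

from fractions import Fraction

x = Fraction(1, 3)
for step in range(1, 21):
    x = 4 * x * (1 - x)                       # one exact iteration
    if step % 5 == 0:
        print(f"step {step:2d}: denominator has {len(str(x.denominator))} digits")

# step  5: denominator has 16 digits
# step 10: denominator has 489 digits
# step 20: denominator has roughly 500,000 digits – doubling every step
```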

Rachel Garden, with her insights into Global Logic, has provided a logical context that allows resolution between the logics of QM and classical physics and does away with observer collapse. As beautiful as that is, and as much as I respect and admire what Rachel has done (and as much as I love and respect Rachel as an individual), it doesn’t alter the fact that mathematics and logic are modelling tools in the realm of our model of the world, and do not necessarily have strong correlates in reality – as consideration of the simple notion of a circle demonstrates.

And I do get that seeing our experiential reality as a model is an abstraction of an abstraction (as all elements in the model are necessarily abstractions, even if they model something concrete (whatever concrete might be)).

