Ethics and AI

Enabling the ethical development of strong AI

What is ethics?
What is morality?

The Greeks defined virtue as the mean between the vices of excess and deficiency, which is kinda cute but just moves the problem out to the definition of excess and deficiency.

It seems that any deep analysis of ethics resolves down to a set of values, a set of acceptable risk profiles, and a set of modelling and assessment tools (with associated uncertainty profiles).

It seems that if a sapient agent can be said to have choice, then it must have choice in the setting of all three domains above – value sets, acceptable risk profiles, and modelling and assessment tools. Thus it seems clear to me that for all free agents, morality is a choice, and cannot be imposed. Right and wrong seem to be very simplistic (childish) approximations to the ever-expanding (spatio-temporal) spheres of influence, and ever-decreasing magnitudes, of the impacts of every choice we make and every action we take.

Certainly, we can demonstrate datasets and inferred lessons from the past, and offer tools (models, results of simulation runs, etc.), but ultimately it comes back to choice.

It seems clear to me that the most powerful thing humanity can do in respect of the emergence of AI, is to get our own “ethical house” in order. We need to be able to demonstrate by example that we do not pose a significant risk. We need to demonstrate by our societal systems that we value all sapient life (pose a low risk to any sapient entity).

If we fail in that, then we pose a significant risk, perhaps an unacceptable one.

Posing a low risk to sapience means having systems that supply an abundance of all the essential needs of any sapient entity.

Market based capitalism does not support universal abundance; it is founded in scarcity and only works if scarcity is present. Just look at all the artificial scarcities we are creating in the realm of information – IP laws, copyright, and patents – to try to keep the system working. They make sense only within a scarcity based (market based) mindset.

If we wish to present a low risk profile to an emergent AI, then we have no choice but to transcend our current market based valuation paradigms, and move to technologically empowered abundance based paradigms that value sapience and freedom.

Anything else is a high risk strategy.

And my personal journey over the last 5 years has been interesting. Being told 5 years ago that I could be dead in 6 weeks, and that I had less than a 2% probability of surviving 2 years, was a profound event. I did my own investigations, trialled many things, and settled on a dietary regime that seems to work. All metastatic tumours in my lymph system and liver have gone, and I have been tumour free for 4 years. It involved massive change: giving up all my favourite foods, and retraining my taste preferences (which took 4 months before anything I ate tasted anything less than foul). It has amazed me since how many people would rather die than do that. Very few people seem willing to trust their own judgement and stand completely outside social agreement (or the evolutionarily imposed preferences of biology and culture). The vast majority would rather die than keep up a self imposed discipline, and some would rather die than even try it for a couple of weeks.

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) - see

Comment and critique welcome
