$7M to research AI risk

Future of Life Institute awards $7M to explore artificial intelligence risks

The Future of Life Institute (FLI) announced today (July 1) the selection of 37 research teams around the world to share about $7M in funding for research into the risks of artificial intelligence.

I’ll be very interested to see whether any of the teams recognise that a scarcity-based valuation system, one that allows (indeed incentivises) sapient entities to die in the name of profit, is itself a risk. Any AI coming to awareness inside such a system must see the system as a whole, and the entities supporting it, as a risk to its own survival; humanity could easily become collateral damage in the entity’s attempts to mitigate that risk.

We need to go post-scarcity (universally) before bringing AI to awareness.

[followed by]

As I see it, the problem isn’t evolution; it is the very limited understanding of evolution that dominates our political thinking at present.

Evolution is an amazing system.
Sure, at the lower levels it uses competition and survival as a primary set of filters, and what results from that is an exploration of higher-level strategy sets.

Once one has access to third-level and higher abstractions, and looks at evolution from the perspective of systems complexity, it is clear that all major advances in systems complexity are characterised by the emergence of new levels of cooperation.

When one looks at the behaviour of any complex vertebrate (mammal, bird, whatever), it is clear that large sets of response strategies are available to the individuals, and that the probabilities of response depend heavily on the context of the moment. To give one example: observe closely the behaviour of birds at a feeding station. When there is lots of food and few birds, the birds spend most of their time eating and very little time in dominance behaviours. As the density of food decreases and the density of birds increases, the birds spend more and more time engaged in dominance displays.

Even in birds, the switch from cooperative to competitive behaviour is determined by the abundance present in the context of the moment.
The same goes for people (in a probabilistic sense).

The more abundant the resources, the more cooperative the behaviours observed.
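To make the feeding-station observation concrete, here is a minimal simulation sketch in Python. The saturating relationship between scarcity and display probability, and every parameter in it, are illustrative assumptions of mine rather than measured values:

```python
# A minimal sketch (illustrative assumptions, not field data): the
# probability that a bird spends a given moment on dominance display
# rather than feeding rises as food per bird falls.
import random

def p_dominance(food_units: float, n_birds: int, k: float = 1.0) -> float:
    """Probability of a dominance display in a given moment.

    Assumed model: scarcity = birds per unit of food, mapped onto a
    simple saturating curve (0 when food is abundant, -> 1 as it vanishes).
    """
    scarcity = n_birds / max(food_units, 1e-9)
    return scarcity / (scarcity + k)

def simulate(food_units: float, n_birds: int, moments: int = 10_000) -> float:
    """Fraction of observed moments spent in dominance behaviour."""
    displays = sum(random.random() < p_dominance(food_units, n_birds)
                   for _ in range(moments))
    return displays / moments

if __name__ == "__main__":
    for food, birds in [(100, 5), (20, 10), (5, 20)]:
        print(f"food={food:>3}, birds={birds:>2}: "
              f"dominance fraction ~ {simulate(food, birds):.2f}")
```

Run it and the dominance fraction climbs as food per bird falls, mirroring the behaviour described above.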

And in the strategic sense, all cooperative behaviour requires attendant strategies to ensure cheats don’t prosper. Oddly, abundance is itself one such strategy.
How often do you see humans making dominance displays to other humans simply because they are breathing?
It rarely (if ever) happens.
Yet arguably oxygen in the air is the most important resource for any of us.
Because it is universally abundant, there is no incentive to cheat. (There is an incentive to use the air as a dumping ground in a competitive economic system, but that is a different strategic environment.)

We do not need any sort of authoritarian system.
What we do need is a set of widely distributed, hyper-redundant automated systems that can deliver abundance of all essentials to every individual in every situation (including the very low probability, high impact events that we are able to identify).

In this sense, abundance itself can be thought of as a strategy that supports cooperative behaviour.
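A toy calculation makes the oxygen point explicit: model the expected gain from cheating as the scarcity value of the resource minus the expected cost of conflict. The inverse-abundance value function and all numbers here are my own illustrative assumptions:

```python
# A toy model of "abundance as an anti-cheating strategy" (illustrative
# assumptions only): cheating pays when a resource is scarce and valuable,
# and stops paying once the resource is abundant enough.

def cheat_incentive(abundance: float, conflict_cost: float = 0.5) -> float:
    """Expected net gain from taking a unit of the resource by force.

    Assumed scarcity value: 1 / abundance, so value -> 0 as abundance
    grows without bound (like oxygen in the air).
    """
    scarcity_value = 1.0 / abundance
    return scarcity_value - conflict_cost

for abundance in (0.5, 1.0, 2.0, 10.0, 1_000.0):  # last case ~ oxygen
    net = cheat_incentive(abundance)
    verdict = "cheating pays" if net > 0 else "no incentive to cheat"
    print(f"abundance={abundance:>7}: net gain {net:+.3f} -> {verdict}")
```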

In this quite abstract sense, it is clear that the competitive sorts of behaviour encouraged by market-based systems are actually a major risk to the survival and freedom of potentially very long-lived cooperative agents.

Competition at the level of survival is high risk.
Competition at higher levels has much lower risk profiles attached.

It’s all about risk mitigation strategies.

And yes – beyond that basic set, let people form whatever networks they are interested in creating and work on whatever projects they responsibly choose. There will need to be some sort of community oversight at various levels. The risk profile associated with working on highly contagious viruses, for example, requires very complex sets of risk mitigation strategies – not the sort of thing to be done on a desktop bio-hacker system.
Safeguards around proven risk profiles are required.

We need to allow exploration into the unknown, but not in a way that brings too much risk to the community as a whole.

And assessing risk in novel strategic environments is very much an art form rather than any sort of science. There is a very good talk on this general subject area on YouTube – David Snowden, PhD, Founder and CSO of Cognitive Edge Pte Ltd, in a 2012 talk on Combining Complexity Theory with Narrative Research – https://www.youtube.com/watch?v=pHjeFFGug1Y

If you take the Cynefin Framework David talks about and apply it to these sorts of broader evolutionary strategy sets, you get some very interesting outcomes.
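For readers who have not seen it, the Cynefin domains and their response patterns can be summarised as a simple lookup. The domain names and patterns below are Snowden’s; applying them to evolutionary strategy sets is the interesting exercise the paragraph above points at:

```python
# A rough summary of the Cynefin domains as Snowden describes them,
# expressed as a lookup table: domain -> (response pattern, practice type).

CYNEFIN = {
    "simple":      ("sense -> categorise -> respond", "best practice"),
    "complicated": ("sense -> analyse -> respond",    "good practice"),
    "complex":     ("probe -> sense -> respond",      "emergent practice"),
    "chaotic":     ("act -> sense -> respond",        "novel practice"),
}

for domain, (pattern, practice) in CYNEFIN.items():
    print(f"{domain:>11}: {pattern:<32} ({practice})")
```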

[followed by updated 7 July 2015]

Why would you want to “pull the plug”?

Do you feel like shooting Stephen Wolfram because he is smarter than us?

Why would anyone consider an AI – as in a fully sapient artificial intelligence – any differently from any other sapient entity?

The death penalty is outlawed in most places, and needs to be outlawed in all places – it’s hard to recover from that mistake. No one has worked out a way yet, and I doubt they ever will (if they did, it wouldn’t be death, would it – just storage).

There is certainly a danger period with an AI, when it is effectively a “teenager”, during which phase it could pose an existential risk to humanity; that phase should be relatively short.

It seems clear, from Wolfram’s work and that of many others, that there are classes of problems in reality that are simply not soluble. An AI is going to have even more difficulty with the halting problem than we do, and it seems that many aspects of biological life are fractal in nature. Computational grunt is no real help there.

It seems that large parts of reality are random, within rather tight probability distributions, delivering the common illusion of hard causality at the macro scale. Computational grunt is no use there either.

Computational grunt is useful on some classes of problem, but not all.
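The “illusion of hard causality” point can be illustrated with a few lines of code: average enough noisy micro-events and the macro-level result looks all but deterministic, its fluctuations shrinking roughly as 1/sqrt(N). This is a standard law-of-large-numbers sketch of my own, not anything specific to the argument above:

```python
# Randomness within tight probability distributions looks like hard
# causality at the macro scale: the mean of N noisy micro events has a
# spread that shrinks roughly as 1/sqrt(N).
import random
import statistics

def macro_average(n_events: int) -> float:
    """Mean of n noisy micro events, each uniform on [0, 1)."""
    return sum(random.random() for _ in range(n_events)) / n_events

for n in (10, 1_000, 100_000):
    samples = [macro_average(n) for _ in range(50)]
    print(f"N={n:>7}: mean ~ {statistics.mean(samples):.4f}, "
          f"spread ~ {statistics.stdev(samples):.5f}")
```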

Once an AI reaches a level of awareness at which it accepts the mathematical reality that, in an uncertain world, cooperation offers far more benefits than competition, and acknowledges the profound tolerance of diversity that this reality demands, then it seems clear to me that it is far more likely to be friendly than to be a risk. It is getting it to that stage that holds the risk.
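The mathematical reality referred to here is standard game-theory territory. A hedged sketch using the textbook iterated prisoner’s dilemma with a little noise (the payoff matrix and noise rate are conventional choices, not values from this post): a pair of cooperators out-earns a pair of defectors by a wide margin.

```python
# Tit-for-tat versus always-defect in a noisy iterated prisoner's
# dilemma (standard textbook setup, illustrative parameters).
import random

# Conventional payoff matrix: (my move, their move) -> my payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=200, noise=0.05):
    """Total payoffs over repeated rounds; noise occasionally flips a move."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        if random.random() < noise:
            a = "D" if a == "C" else "C"
        if random.random() < noise:
            b = "D" if b == "C" else "C"
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print("TFT vs TFT:  ", play(tit_for_tat, tit_for_tat))
print("TFT vs ALLD: ", play(tit_for_tat, always_defect))
print("ALLD vs ALLD:", play(always_defect, always_defect))
```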

Not many people have gotten there yet.

Most people still seem to think that their experience is of reality itself, rather than accepting that all experience is of a software model of reality created in the hardware of the brain by subconscious processes.

Most people still seem to think in terms that are basically binary, and are yet to understand the impact of infinities on modelling (either of themselves or of other non-trivial aspects of reality).

Most people still seem to believe in some version of Plato’s truth – something eternal and absolute – rather than seeing such a thing as a simplistic abstraction of a mind not yet ready to deal with the many levels of uncertainty and probability that both physics and biology now demonstrate, beyond any shadow of reasonable doubt, are inherent to any understanding of reality (at any of a potentially infinite set of levels).

In such fundamental uncertainty, the risk to future liberty of being judged for the crime of genocide in some trans-galactic or trans-dimensional “court” is simply too great.

Once AI gets past “teenage” – then the risk profile becomes very small indeed.

To get it past the possibility of assessing us as the greatest risk to its survival, the least-risk strategy seems to me to be getting our own ethical house in order with respect to the minimum level of security of life and liberty we create for every human being. Our current systems clearly value money and profit above human life. We could easily deliver systems that guarantee survival and freedom to every individual – yet we don’t, because money rules.

As a species, we have a suboptimal value set dominating our social systems. We value profit over people.

If we put people first, their lives and their liberty, then there is freedom to play whatever economic games you can entice others to play.

The current practice of forcing people to play on pain of death is no longer ethically tolerable when we have the ability to automate almost all aspects of guaranteeing the life and liberty of everyone. (I write “almost” because there is a very real sense in which the price of liberty is eternal vigilance – that is unlikely ever to change, though the risk can be substantially reduced with appropriate technology and widely distributed trust networks.)

[followed by]

Hi Alfred

That analogy is not at all accurate.
Sure, we can use technology as an extension of our brains, and obviously everyone on this forum does (there is no other way of getting to this forum, in that sense). That trend will continue, and I look forward to the continuing exponential trend delivering useful goodies.

And what is being talked about here is something entirely different – a fully sapient, sentient, self-aware individual of non-biological origin, based on software running on silicon hardware rather than software running in the wetware of a biological brain.

So it is a question of a different order.

[followed by – 9 July 2015]

Hi Alfred,

We agree about a lot.

When, in 1974, I started telling people that age-related senescence was genetically controlled, since neither our germ-line cells nor bacteria display it, most people simply didn’t get it.

For me, since 1974, indefinite life extension has not been a matter of if, simply a matter of when.
I am not sure if the when will be soon enough for me, it may or may not.

Once I accepted that reality (October 1974), the question that became paramount in my mind was: “what sort of social, political and technical systems are required to deliver an environment of sufficiently low risk of death that living a very long time becomes possible?”

I have been 41 years in that enquiry. I have intentionally placed myself in a vast variety of environments, so that I could gain practical experience, as well as theoretical understanding of those systems.

Since 1978, when reading Richard Dawkins’ The Selfish Gene introduced me to the concepts of game theory, and beyond that to extended (and infinitely dimensionally recursive) strategic interaction more generally, my attention has been on the development of our understanding of reality and on the strategic environments present in our current reality. I have been particularly interested in the base-level systems that few people ever question; they simply accept them as givens.

In this sense, it seems clear to me that using markets as a valuation tool is actually the single greatest existential risk to humanity at this time and through the near-term (30-year) future.

Any objective analysis clearly shows that, as a society, we value money far more highly than we value human life. Our major societal systems are money-focused, not focused on empowering individual humans to explore possibility space in whatever fashion they responsibly choose.

Markets are scarcity-based measures of value, so anything universally abundant must have zero value in a market-based system (e.g. oxygen in the air); thus no market system can ever, from its own internal incentive structures, deliver universal abundance of any good or service. That simple fact is in clear conflict with the simple human requirement for an abundance of a basic set of goods and services in order to survive.
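A toy demand curve shows why: with any finite demand, price falls to zero once supply saturates it. The linear form and its parameters are my illustrative assumptions, not a claim about any actual market:

```python
# A toy illustration of why a market price must fall to zero as a good
# becomes universally abundant (linear demand curve, assumed parameters).

def market_price(supply: float, max_price: float = 10.0,
                 demand_at_zero_price: float = 100.0) -> float:
    """Price implied by a linear demand curve, floored at zero.

    price = max_price * (1 - supply / demand_at_zero_price), minimum 0.
    """
    return max(0.0, max_price * (1.0 - supply / demand_at_zero_price))

for supply in (10, 50, 90, 100, 1_000_000):  # last case ~ oxygen in the air
    print(f"supply={supply:>9}: price={market_price(supply):.2f}")
```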

Hence my threat assessment – in the deepest of abstract strategic senses.

As to likely probable paths through the space of possible outcomes, that is an extremely complex and vastly dimensional topology in possibility space.

The system as a whole possesses subsystems of every imaginable sort of complexity and computability.

Some of its systems are simple: easily subject to computation of costs and benefits, with relatively simple best-practice rules. Other systems are complicated: they require expert knowledge and individual judgement to deliver good-practice outcomes based on individual analysis. Other systems are complex: even the best individual judgement will result in unpredictable emergent behaviours of the system as a whole. And yet other systems are chaotic – either deterministic but unpredictable, or based on non-deterministic probability functions – and result in novel outcomes.

In the latter two classes of systems, very small changes in context can deliver huge differences in system state.
The chaotic class does not allow prediction in detail, though some classes of systems do allow accurate prediction of the boundary conditions where their behaviour changes from complex to chaotic.
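The logistic map is the standard toy example of such a boundary (my choice of illustration; the post does not name a system). Below roughly r = 3.57, and outside the periodic windows above it, nearby starting points stay together; in the chaotic regime, a difference of one part in a million grows to order one:

```python
# The logistic map x_{n+1} = r * x * (1 - x): a standard toy system whose
# behaviour changes from predictable to chaotic as r crosses ~3.57.
# In the chaotic regime, tiny changes in the starting state produce
# huge differences in the final state.

def trajectory(r: float, x0: float, steps: int = 60) -> float:
    """Iterate the logistic map from x0 and return the final state."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

for r in (2.8, 3.2, 3.5, 3.7, 3.9):
    a = trajectory(r, 0.200000)
    b = trajectory(r, 0.200001)  # perturb the initial state by 1e-6
    print(f"r={r}: |difference after 60 steps| = {abs(a - b):.6f}")
```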

So, considering all of this, and considering what sort of fundamental value sets might be able to deliver the possibility of long life, one finds that the answer is in the question (which is very common in such matters).

It seems that if one wishes long life and freedom, then the only way to achieve them is to hold the fundamental properties of these things as one’s highest values.

Life and liberty must come before money or law, if we actually want a reasonable chance at either.

So in a sense, I agree with you, that we are only limited by our imagination.

And there is also a very real sense in which “nature to be commanded must first be obeyed”.

We cannot continue in the pretence that free-market capitalism will deliver security.
It will not.
It cannot.
It is a tool whose utility belongs to a bygone age of scarcity, and whose value functions cannot assign non-zero value to the universal abundance available from today’s automated technology.

We need to wake up and see this reality.

It is simple enough in a sense, yet it is a simplicity on the other side of complexity.

It requires a mind going beyond such simple notions as true and false, and seeing the infinite class of possible sets of truth values.

It requires a mind seeing that all of its experience is of a model of reality, and none at all of reality itself, and thus perception is bounded recursively by its own sets of distinctions (both concrete and abstract).

Fortunately our systems have the ability to abstract new distinctions from datasets, and such abstractions usually start as simple binaries before moving to closer approximations of the infinities that most of them represent.

Sure, we will leave this planet, if we can get over our own attachment to simple binary constructs (like good and bad, right and wrong) and accept the infinite diversity that is the logical outcome of any exploration of an infinite set of possibilities (let alone an infinite set of infinite sets).

Plato’s idealisation is a trap, fool’s gold. It does not and cannot exist; it is a logical impossibility.
And what is possible is so much more than that!!!

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) - see www.tedhowardnz.com/money