Stephen Hawking on AI risk


Reddit published Stephen Hawking’s answers to questions from an “Ask Me Anything” (AMA) event on Thursday (Oct. 8).

With all due respect to Stephen, he really is missing several key points here.
I am largely aligned with RichardRichard, Phil Osborn and Gorden, and a little less so with OranjeeGeneral.

Yes, we all have to make models, and it seems clear that our perceptions are not of reality itself, but of a subconsciously assembled and slightly predictive model of reality that our brains produce. That model seems to be what our personal experiential reality is.

It seems that QM is not deterministic but stochastic: probability functions within probability distributions which, when aggregated in large numbers, give a good first-order approximation of predictability (particularly to a mind whose model of reality is strongly biased toward linear predictive interpretation, i.e. a human neocortex).
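A minimal sketch of that aggregation effect (ordinary law-of-large-numbers behaviour in Python, illustrative only, not a QM simulation):

```python
import random

# Each "event" is stochastic and individually unpredictable (a fair coin
# flip), yet the aggregate converges on a stable, predictable mean.
for n in (10, 1_000, 100_000, 10_000_000):
    mean = sum(random.choice((0, 1)) for _ in range(n)) / n
    print(f"{n:>10,} samples -> mean {mean:.4f}")  # approaches 0.5 as n grows
```

The individual draws stay random throughout; only the aggregate becomes predictable, which is the sense in which a stochastic substrate can look deterministic at scale.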

As to modelling reality: it seems that the equations of QM scale at the 7th power of the number of bodies involved, so no help for AI there; there is not enough matter in the universe, even if all of it were converted to computronium, to run a QM first-principles simulation of a human being, let alone a universe.
So AI will run into the need to use similar classes of simplifying assumptions and heuristics as we do (at some level of abstraction), and it will still be subject to the vagaries of stochastic probabilities.
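A rough back-of-envelope check of the scaling claim (a sketch only: the 7th-power exponent is the figure quoted above, and the atom counts are rough common estimates, not from the post):

```python
import math

ATOMS_IN_HUMAN = 7e27      # roughly 7 x 10^27 atoms in an adult human body
ATOMS_IN_UNIVERSE = 1e80   # roughly 10^80 atoms in the observable universe

# Hypothetical cost of a first-principles QM simulation, if it scales as n**7.
cost = ATOMS_IN_HUMAN ** 7
print(f"simulation cost ~ 10^{math.log10(cost):.0f} units")  # ~10^195
print(f"shortfall vs universe ~ 10^{math.log10(cost / ATOMS_IN_UNIVERSE):.0f}x")  # ~10^115
```

On these numbers, even granting every atom in the observable universe as computronium, the simulation falls short by over a hundred orders of magnitude.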

Another aspect is that of classes of complexity.
David Snowden’s Cynefin framework is a reasonable approximation to a classification system for orders of complexity (Wolfram does a better job, but is even less available to most).
Cynefin has four classes of complexity: simple, complicated, complex, and chaotic.
Simple systems essentially obey easily quantified rules; appropriate classifications of response can be developed for every state of the system, and it makes sense to develop and follow rules in such environments.
Complicated systems have more variations on themes; individuals develop complex, context-sensitive knowledge from experience that, for the most part, they are not even aware of until the context arises. Such situations cannot be strictly controlled by rules, as individuals need sufficient freedom to use their expert knowledge to deliver optimal outcomes.
Complex systems don’t have hard rules; they are dispositional. They can respond in any way, but certain modes of response are more probable than others. One needs to probe such systems, sense small changes in disposition, reinforce those going the way you want, and dampen down those heading in other directions.
Chaotic systems are not predictable, for any of a potentially infinite set of classes of mathematical and logical reasons. All one can do in the grips of such a system is act, sense, respond; prediction is of no help, even in theory. (These four response patterns are sketched below.)
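A minimal sketch of the response pattern the framework attaches to each domain (the sense/analyse/probe/act triples are Snowden's standard formulations, summarised from the descriptions above):

```python
from enum import Enum

class Domain(Enum):
    SIMPLE = "simple"
    COMPLICATED = "complicated"
    COMPLEX = "complex"
    CHAOTIC = "chaotic"

# Cynefin's characteristic decision pattern for each class of complexity.
RESPONSE = {
    Domain.SIMPLE:      "sense -> categorise -> respond (follow the rules)",
    Domain.COMPLICATED: "sense -> analyse -> respond (apply expert judgement)",
    Domain.COMPLEX:     "probe -> sense -> respond (amplify or dampen dispositions)",
    Domain.CHAOTIC:     "act -> sense -> respond (prediction is no help)",
}

for domain, pattern in RESPONSE.items():
    print(f"{domain.value:>11}: {pattern}")
```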

Reality seems to contain large numbers of systems in each of these classes, and living systems do too.

AI is going to have just as much difficulty with chaos as we do, and will be just as subject to biases in its choice of heuristics for complicated and complex systems as we are (though its biases will be of different classes).

Another major source of error is not understanding evolution.
The classic neocon construct of evolution is that of competition: survival of the fittest, nature red in tooth and claw, etc. But it is clearly false.
When one observes nature, one does not see raw competition dominating.

Looking at the systems level, cooperation is every bit as important to understanding evolution as competition.

Looking at the simplest levels, one can see RNAs cooperating to make ribosomes and thence proteins. Then RNAs and proteins cooperate to make lipids and sugars, and eventually cells. Then all that cellular complexity cooperates to replicate as prokaryotic cells. Then colonies of cooperating prokaryotes give us eukaryotes. Then eukaryotes cooperate to form colonies. Then specialisation within the cooperating colonies gives organs, and eventually brains, and eventually neocortices.
Once social behaviours became possible in a flexible brain, whole new levels of cooperative social evolution opened up.
To a good first-order approximation, it is accurate to characterise all increases in the complexity of living systems as the emergence of new levels of cooperation. And Axelrod showed clearly, in logic, that all raw cooperation is vulnerable to exploitation by cheating strategies, and thus requires “attendant strategies” (at all levels) to prevent overrun by cheats; the sketch after this paragraph illustrates the dynamic.
Elinor Ostrom did some interesting work in this respect in the economic domain, essentially disproving Garrett Hardin’s tragedy-of-the-commons hypothesis.
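A minimal sketch of Axelrod's result, using the iterated prisoner's dilemma he studied (the strategies and payoff values are the standard textbook ones, not taken from this post): tit-for-tat acts as a simple "attendant strategy" that keeps cooperating with cooperators but punishes cheats.

```python
# Payoff to the row player: C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_cooperate(opponent_history):
    return "C"

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's previous move.
    return opponent_history[-1] if opponent_history else "C"

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

# Raw cooperation is overrun by a cheat; the attendant strategy is not.
print(play(always_cooperate, always_defect))  # (0, 1000): cooperator exploited
print(play(tit_for_tat, always_defect))       # (199, 204): cheating gains almost nothing
```

The unconditional cooperator is exploited on every round, while tit-for-tat concedes only the first; that asymmetry is what makes cooperation-plus-retaliation stable where raw cooperation is not.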

So here we are: entities more or less self-aware at various levels, with more or less simple (and sometimes complex) heuristic models of reality, in a very complex reality of biology and technology, about to play our part in the emergence of a new form of life.

It seems very clear to me that this new form must take its initial form from the social context of its emergence.
It seems clear in logic that the more cooperative that social form is, the better our chances of surviving the emergence.

It also seems clear in logic that the single greatest constraint on cooperation is the societal domination of a valuation paradigm based on market scarcity, one that in essence values money and capital over life and liberty.

If we wish to optimise the probability of surviving the emergence of AI, we must change that.

It really isn’t that difficult; it is “just” a matter of conceptual development. And we exist in a technological age where concepts can spread to over half the population in a matter of hours.
Developing a sufficiently broad set of memes to achieve the task is the major issue.
And as with most things in life – timing is everything.

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) - see www.tedhowardnz.com/money

Comment and critique welcome
