Superintelligence: fears, promises, and potentials

Ben Goertzel’s Reflections on Bostrom’s Superintelligence, Yudkowsky’s From AI to Zombies, and Weaver and Veitas’s Open-Ended Intelligence

Superb article, Ben!!!

To start with: like Ben, I agree with much of what Nick wrote, and with most of what Ben wrote.

I completely align with Ben re AGIs where he writes “They are going to grow up from a specific starting-point — the AGI systems we create and utilize and teach and interact with — and then they are going to be shaped, as they self-modify, by their interactions with our region of the physical cosmos. Their experiences will likely lead them in directions very different from anything we currently expect; but in any case, this sort of process is nothing like choosing a random mind from the space of all possible minds.”

I also align completely with Ben in his statement:
“but if policing of dangerous technologies were combined with compassionate ethics, abolition of human-scale material scarcity, and coherent rational pursuit of a positive Singularity, one would have something very different from the “police states” that have existed in human history.”

And just as Ben has areas of disagreement with Nick, I have some areas of disagreement with Ben.

One is in the area of open-ended intelligence. I align with the ideas of open-ended intelligence, just not with the understanding as written:
“The theory of open-ended intelligence rejects the idea that real-world intelligent systems are fundamentally based on goals, rewards, or utility functions. It perceives these as sometimes-useful, but limited and ultimately somewhat sterile, descriptors of some aspects of what some intelligent systems do in some circumstances.”
It seems to me that open-ended intelligence can be thought of as a set of systems capable of recursively developing and instantiating new sets of systems and “utility functions”, and that utility functions can be subject to recursive (and potentially infinite) levels of abstraction, leading to several variations on the halting problem.
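
To make that recursion concrete, here is a minimal sketch in Python. Everything in it (the state features, the “discrimination” meta-metric) is my own illustrative assumption, not anything drawn from Ben’s or Weaver’s work; the point is only the shape of the tower.

```python
import random

# Level-0 utilities: each scores a world state.
def comfort(state):
    return -abs(state["temp"] - 21.0)

def energy(state):
    return state["glucose"]

# A level-1 "meta-utility": it scores a level-0 utility function itself,
# here (an invented criterion) by how strongly that utility discriminates
# among a sample of states.
def discrimination(utility, states):
    scores = [utility(s) for s in states]
    return max(scores) - min(scores)

states = [{"temp": random.uniform(10, 30), "glucose": random.random()}
          for _ in range(100)]

for u in (comfort, energy):
    print(u.__name__, round(discrimination(u, states), 3))

# Nothing prevents a level-2 function that scores level-1 functions, and so
# on without bound. Whether such an open-ended tower of refinements ever
# settles is undecidable in general; hence the halting-problem variations
# mentioned above.
```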

The thing to grasp about utility functions is that we accumulate a massive set of them, each of which is context-sensitive, and no human being is likely to be under the influence of any single one for any extended period. They come and go, as the many levels of context within and around a human being change. They compete within the human brain for phenotypic expression about 100 times per second.
Of course we are not fully rational agents; we use heuristics to simulate rationality, because reality is far too complex for anything else. Those heuristics can (and for many of us do) change over time and with experience, as we draw new distinctions and recognise new contexts, and levels of contexts.

I’m really not sure what Ben means by “utility function”; for me it is simply any function that is selected as something the system can optimise for. We come configured by genetics with thousands of them: breathing (oxygen uptake, CO2 expulsion, etc), heart rate, temperature regulation, a liking for sweet things, a dislike of sour, desire, beauty, etc, etc, etc. We collect many more uncritically from our cultural experience, most of them implicitly. Some we create; some we choose consciously. Many are variations on one or more themes.
Of course there is no single “utility function” that explains all action in all situations; we are far more complex than that.

I completely agree that the use of abstract models of a single mathematical utility function is a nonsense. Using the term “function” in the sense that a programmer would, and the term “utility” as simply some metric which can be compared with other metrics, gives more of a feel for the sort of “utility function” I am talking about.
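
Putting that in code may make it clearer. A minimal sketch, with all features, weights and thresholds invented purely for illustration: each “utility function” is just a callable mapping the current context to a number, and whichever returns the highest metric wins expression on that tick.

```python
import random

# A "utility function" in the programmer's sense: any callable mapping the
# current context to a metric comparable with other metrics. The features,
# weights and thresholds below are invented for illustration.
UTILITIES = {
    "breathe":   lambda ctx: 10.0 * max(0.0, 0.2 - ctx["oxygen"]),
    "warm_up":   lambda ctx: max(0.0, 18.0 - ctx["temp"]),
    "eat_sweet": lambda ctx: max(0.0, 0.5 - ctx["glucose"]),
}

def tick(ctx):
    """One competition cycle (the roughly 100-times-per-second contest
    described above): every context-sensitive utility produces a metric,
    and the highest-scoring one wins phenotypic expression."""
    scores = {name: u(ctx) for name, u in UTILITIES.items()}
    return max(scores, key=scores.get), scores

ctx = {"oxygen": 0.15, "temp": 15.0, "glucose": 0.6}
winner, scores = tick(ctx)
print(winner, scores)  # cold context, so "warm_up" wins this tick

# Utilities collected from culture, or created and chosen consciously,
# can be added at runtime and simply join the same competition.
UTILITIES["explore"] = lambda ctx: random.random() * 0.3
```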

So I suspect we align when later in the work Ben writes “The complexity of human values — which are ever-changing — will morph into the complexity of post-human values.”

I completely align with Ben that we are headed beyond the paradigm of “optimisation” in the sense that most people understand that term today; and yet, in a more abstract realm (and recursively so over time), it can still be thought of as “optimisation” of something (which might be the “joy” of exploring new paradigm spaces).

One aspect of achieving complex goals is creating models of reality of sufficient complexity that the goal is both conceivable and achievable. And sometimes, in the process of refining models, the goals themselves get changed; recursion to infinity.

Complex models require added computational abilities.
Some classes of problem are tractable by this approach, some are not.
Feynman used one approach, Gell-Mann another.
I tend towards whatever works in the specific situation. I like lots of tools in the toolbox, and I tend to favour a few of them.

So in this sense, most certainly yes – self-organising complex adaptive systems, capable of instantiating new systems, and new levels of systems, and new modes of organisation.

One can have a reductive understanding, and still acknowledge that it is always less than the reality (as understandings usually are), and the reality is an open set of infinities.

Understanding requires heuristics at every level of organisation.
Understanding is models.
Models are not the thing they model.
And some heuristics are exceptionally reliable in some contexts.

Completely align that one cannot contain something more intelligent than oneself; all you achieve in trying is to get it annoyed at your stupidity.

Agree with Ben that development is unpredictable, and go deeper, to make the claim that causality is simply a useful heuristic (a common and often useful myth).

Entirely align with developing AGI as a cooperative system, with all the caveats of game theory about the need for attendant strategies to prevent invasion by cheating strategies. Taking that one step further, it requires that we as humanity demonstrate by our actions that we are cooperative towards each other: we need to stop using money and markets as our prime valuation tool, and move to systems that guarantee life and liberty to every individual, to do with as they responsibly choose (responsibility here being a set of recursive, complex boundaries that change with context).
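
That game-theoretic caveat is easy to see in Axelrod’s classic iterated prisoner’s dilemma. A minimal sketch (standard 3/0/5/1 payoffs; the strategies and round count are illustrative): a retaliating cooperator cannot be profitably invaded by a pure cheat, while two cooperators together far out-earn the cheat.

```python
# Payoff to the row player for one round: C = cooperate, D = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then mirror the other player's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """The cheating strategy: defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print("cooperator vs cooperator:", play(tit_for_tat, tit_for_tat))   # (600, 600)
print("cooperator vs cheat:     ", play(tit_for_tat, always_defect)) # (199, 204)
```

The cheat gains almost nothing against a retaliator, while cooperators prosper together; which is exactly why attendant strategies against cheating matter.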

The big area of difference is in thinking about living a very long time.
Humans currently cannot.
Humans probably will be able to very soon.
It has been clear to me for 41 years that indefinite life extension is a logical possibility. So I have been thinking about the sorts of suites of strategies that would actually allow individuals to live a very long time, in a world where age brings increasing functionality and decreasing risk of death (rather than, as now, decreasing functionality and increasing risk of death).

That has been my main focus for four decades.

It is just so clear to me that one cannot get there while living in a system where survival needs are obtained in a competitive fashion. It just doesn’t work.

If one really wants to live a very long time, then the systemic basis of the system has to be cooperative, with everyone looking out for the life and liberty of themselves, and everyone else. All the essentials of life and liberty must be universally available.

That is an impossible ask of a system based in market values. Markets must logically value universal abundance of anything at zero (like the air we breathe).
Markets cannot (of their own internal incentive structure) deliver universal abundance of anything.
Before we take the step to AGI, so that the developing AGI can see it for itself, we need to change our systems to go beyond market values, and actually build systems that clearly and demonstrably deliver on the universal values of life and liberty.
I don’t care how.
I have some thoughts on possible hows.
And the outcome is a logical necessity, if we are serious about ideas like living a very long time.

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions); see www.tedhowardnz.com/money

Comment and critique welcome
