Vladimir Vapnik: Predicates, Invariants, and the Essence of Intelligence | AI Podcast

[ 15/2/20 ]

Thank you, Lex, for another beautiful experience.

For me, the ideas of weak and strong convergence of functions over a Hilbert space are a good way of thinking about evolution generally, and both of your ideas are important.
Evolution has worked on both, in an involuntary sort of way, over both classes of “space” (physical space and function space, recursing as deeply as one is able to follow).
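
For reference, here is a minimal formal statement of the two notions (the restatement is mine, not from the conversation): in a Hilbert space H with inner product ⟨·,·⟩, a sequence f_n converges to f strongly or weakly as follows.

```latex
% Strong convergence: the distance in norm goes to zero.
\lim_{n \to \infty} \lVert f_n - f \rVert_H = 0

% Weak convergence: agreement against every test element g of H
% (Vapnik's predicates play the role of a chosen subset of such g).
\lim_{n \to \infty} \langle f_n, g \rangle = \langle f, g \rangle
\quad \text{for all } g \in H
```

Strong convergence implies weak convergence, but not the reverse; weak convergence against a small, well-chosen set of predicates is the coarser and cheaper notion, which is, as I read it, why the distinction maps so naturally onto selection over function spaces.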

Early on in evolutionary history, the physical space in which replication could occur was small, as was the space of possible replicators and strategies (functions).

Natural selection has expanded both classes of “space”.

By the time our particular evolutionary sequence gave rise to language, and to the conceptual spaces opened up by exploring the space of all possible symbols, logics, and topologies, we were already very complex machines, carrying huge sets of functions (sorted, selected, and optimised to some degree) for both weak and strong convergence over the sets of spaces our ancestors experienced (all of them, simultaneously, over multi-generational time-spans).

So, depending upon how you view them, you can call them functions, predicates, or heuristics; they can be any and all of these, to different degrees in different contexts.
There does not appear to be any limit to the depth of context – it seems to be infinitely extensible.
Functions and predicates that worked in one set of spaces will not necessarily perform well in another set of spaces, but they are often a good place to start.
If a processor is fully loaded, then random search is the most efficient search strategy available, since every form of indexing consumes additional processor cycles.
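
To make that concrete, here is a minimal sketch of pure random search in Python (the function names and the toy objective are mine, purely for illustration): the only state kept is the best candidate seen so far, so no cycles are spent building or maintaining an index.

```python
import random

def random_search(objective, sample, budget):
    """Pure random search: draw candidates independently, keep the best.

    No index, tree, or neighbour structure is maintained, so the only
    per-step cost is one draw and one evaluation of the objective.
    """
    best_x, best_y = None, float("inf")
    for _ in range(budget):
        x = sample()          # draw a candidate uniformly at random
        y = objective(x)      # evaluate it
        if y < best_y:        # remember only the best seen so far
            best_x, best_y = x, y
    return best_x, best_y

# Toy illustration: minimise (x - 3)^2 over the interval [-10, 10].
x, y = random_search(objective=lambda x: (x - 3) ** 2,
                     sample=lambda: random.uniform(-10, 10),
                     budget=10_000)
print(f"best x = {x:.3f}, objective = {y:.6f}")
```

Whether this beats an indexed search in practice depends on how much structure the space actually has; the point above is only about the overhead per evaluation when no spare cycles exist for bookkeeping.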

So if one is pushing the space of spaces (as Wolfram does with NKS); and one uses the approaches that Yudkowsky explored in Rationality: From AI to Zombies as generally useful ways of limiting the function space; and one uses as abstract a set of representations as possible that still retains a useful mapping to what science seems to indicate actually exists (which includes QM and GR and biochemistry and psychology and culture …); then the picture is very clear: both of these ideas are critical, and the strategy space of evolution is fundamental to understanding.

One has to get that all new levels of complexity in evolved systems are predicated on cooperation, and that rapidly gets mind-numbingly complex at every level.
Our current societal preoccupation with the very simple idea that evolution is competition is actually imposing existential-level risk upon all of us. AI in a competitive context is not survivable.

AI in a cooperative context is necessary for our long-term survival (it is the only possible access to solution spaces for a large class of already well-characterised existential-level risks).

More people have got to start seeing these twin realities.

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) - see www.tedhowardnz.com/money
