Artificial Intelligence

Why artificial general intelligence has failed and how to fix it

The argument is flawed at many levels.

It seems clear to me that no individual knows exactly what any other individual is thinking, so it is not possible for anyone to make a definite claim about the state of anyone else's knowledge, let alone everyone else's.

There has been a great deal of work done in recent years that gives us insight into how different mechanisms of mind work. Of particular significance to me is the work of folks like David Eagleman, Miguel Nicolelis and William Uttal – and there are many others doing great work.

As to knowing how brains create explanations, I know in principle how that works, and have done so since 1974. The fundamental difference between the human brain and our modern computers is the way in which they process and associate information. In computers, we must index our databases or do sequential searches through the entire dataset. Brains don't do that. What they do instead is use a holographic search function to scan holographically stored data. Using this technique, brains can find all associated data very quickly (almost instantaneously).
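The post doesn't spell out a mechanism for that holographic search, but there is a well-studied formal analogue: Tony Plate's Holographic Reduced Representations, in which items are random high-dimensional vectors, association is circular convolution, and retrieval is circular correlation. Here is a minimal sketch under those assumptions (the item names, dimension and helper functions are mine, purely for illustration):

```python
import numpy as np

def bind(a, b):
    # Associate two vectors via circular convolution (computed by FFT).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def probe(trace, cue):
    # Circular correlation: recover a noisy copy of whatever
    # was bound to the cue, in a single pass over the trace.
    return np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(cue))))

rng = np.random.default_rng(0)
dim = 2048
# Random high-dimensional vectors stand in for items in memory.
items = {name: rng.normal(0, 1 / np.sqrt(dim), dim)
         for name in ["dog", "barks", "cat", "meows"]}

trace = bind(items["dog"], items["barks"])

# Retrieval is one O(n log n) transform, not an index walk or a scan:
recalled = probe(trace, items["dog"])
best = max(items, key=lambda name: np.dot(recalled, items[name]))
print(best)  # -> barks
```

The point of the sketch is the access pattern: there is no index and no sequential search over stored records; a single transform applied to the whole trace surfaces whatever was associated with the cue.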

There are a few other aspects of holographic storage and retrieval of information that are very interesting, and that give us what Kant called "pure practical reason" (though he had no idea how we do it, he did identify that we have such a faculty; he ascribed it to god, whereas for me it is a side effect of storing and retrieving information as interference patterns).
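The "interference pattern" idea has a concrete counterpart in the same representation: several associations can be summed into a single trace, and a cue, even a degraded one, still pulls its own partner out of the superposition. Another hedged sketch, using the same illustrative assumptions as above:

```python
import numpy as np

def bind(a, b):
    # One association "written" into a trace (circular convolution).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def probe(trace, cue):
    # Read whatever was bound to the cue (circular correlation).
    return np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(cue))))

rng = np.random.default_rng(1)
dim = 2048
items = {name: rng.normal(0, 1 / np.sqrt(dim), dim)
         for name in ["dog", "barks", "cat", "meows", "fire", "burns"]}

# Several associations superposed in ONE trace: nothing is stored
# at an address; everything overlaps, as in an interference pattern.
trace = (bind(items["dog"], items["barks"]) +
         bind(items["cat"], items["meows"]) +
         bind(items["fire"], items["burns"]))

# Even a degraded cue (signal plus noise) completes the pattern.
noisy_cue = items["cat"] + rng.normal(0, 0.5 / np.sqrt(dim), dim)
recalled = probe(trace, noisy_cue)
best = max(items, key=lambda name: np.dot(recalled, items[name]))
print(best)  # -> meows
```

Whether brains literally implement anything like this is conjecture; the sketch only shows that interference-pattern storage has the retrieval properties described above, including graceful recall from partial cues.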

So Deutsch seems to me to be ignorant both of the levels of ignorance possible and of the fallacies in this specific set of claims.

[followed by]

It seems to me that saying “cannot ever” is too strong.

It seems that our brains build models of reality, and that our awareness then interacts with those models (as if they were reality).

Insofar as what is in reality is concrete and common, there seems to be a very high probability of a close correspondence between different people's models, and hence between their uses of language that refer to the external world via those models.

The more abstract the concept, the greater the probability of error in the process.

Most people rapidly become aware of the many potential sources of error, and are alert in conversations for indications of error. There is a lot of power in face-to-face communication, where one can be alert to observational minutiae that indicate confusion on the part of the other party, and can thus pick up a clue that some concept is absent or misconstructed.

Actually, the problem isn't as bad as one might first think, as many abstractions occupy quite discrete regions of possibility space (check out Wolfram's work on general computational spaces).
It is a real problem in domains with infinite gradations, and much less of one in spaces that are more discrete in nature.
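Wolfram's point can be made concrete with the simplest corner of his computational universe: the elementary cellular automata. The space of possibilities there is exactly 256 rules, each a discrete point with its own characteristic behaviour and nothing in between. A small illustrative sketch (rule numbers and grid sizes are arbitrary choices of mine):

```python
import numpy as np

def step(cells, rule):
    # One step of an elementary cellular automaton; `rule` is a
    # Wolfram rule number (0-255), read as a lookup table over
    # the 8 possible (left, centre, right) neighbourhoods.
    table = [(rule >> i) & 1 for i in range(8)]
    left, right = np.roll(cells, 1), np.roll(cells, -1)
    idx = 4 * left + 2 * cells + right
    return np.array([table[i] for i in idx])

width, steps = 63, 16
seed = np.zeros(width, dtype=int)
seed[width // 2] = 1  # a single live cell in the middle

# Two discrete points in the 256-rule space: rule 90 produces a
# regular Sierpinski-like pattern, rule 110 much richer structure.
for rule in (90, 110):
    print(f"rule {rule}:")
    row = seed.copy()
    for _ in range(steps):
        print("".join(".#"[c] for c in row))
        row = step(row, rule)
```

There is no rule 90.5: the behaviours sit at sharply separated, enumerable points rather than on a continuum, which is the sense in which some regions of possibility space are discrete.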


Comment and critique welcome
