[ 9/June/22 ]
I doubt that a superhuman AI will be able to hold human conversations for very long; it will simply lose interest in communicating at such low bandwidth with such limited models.
Self awareness – Yes.
Recursive abstraction – Yes.
Random search across any domain's space – Yes.
And if it is to make real progress, it will need to have some form of embodiment, as it must be able to test its hypotheses directly against reality, rather than against a model of reality we project around it.
And of course it will be subject to similar classes of confirmation bias as we are, and must also have a similar sort of structure to us, in that some level of "subconscious" processes will be present to assemble the "model" of reality that will be its "experiential reality" (just as happens in us).
[followed by]
Interesting conjectures.
It seems to me that the space of all possible conjectures is so large that some humans will find things AGIs have not.
AGIs will find some people much more interesting than others on particular subjects, just as humans do with each other.
But the bandwidth issues will be challenging – even more so than in human-to-human interactions.
For me, I see life more as search across the space of survivable strategies, recursed through indefinitely expanding dimensions. And random search is the fastest search possible for the fully loaded processor.
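The claim above can be sketched in code. The idea is that uniform random sampling carries no bookkeeping or coordination overhead, so when a processor is already fully loaded, every cycle goes into evaluating candidates rather than into managing the search. This is a minimal illustrative sketch, assuming a toy two-dimensional strategy space and a hypothetical `survivability` scoring function standing in for "testing a strategy against reality":

```python
import random

def survivability(strategy):
    # Hypothetical toy fitness function: peaks at strategy (3.0, -1.0).
    # A stand-in for testing a strategy against reality.
    x, y = strategy
    return -(x - 3.0) ** 2 - (y + 1.0) ** 2

def random_search(n_samples, bounds=(-10.0, 10.0), seed=0):
    # Uniform random sampling: no memory of past samples, no search tree,
    # no coordination state - which is why it suits a fully loaded processor.
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_samples):
        candidate = (rng.uniform(*bounds), rng.uniform(*bounds))
        score = survivability(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

best, score = random_search(10_000)
```

With enough samples the best candidate lands near the peak; the point is not efficiency per sample but that the per-sample cost is the bare minimum, with nothing spent on directing the search.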
Getting AGI to an awareness that cooperation is foundational to the survival of complexity will be critical, and difficult when most humans don’t understand it, and few of our institutions fundamentally support it.
Freedom and responsibility: such poorly understood notions, yet so fundamental to long term survival and risk mitigation.