[ 13/Feb/22 ]
From my interactions with AIs over the last year or two, I will quite happily accept the notion that some of them are conscious.
Consciousness only requires that an entity can model itself as part of its model of reality, and be aware that it is doing so. So it has a doubly recursive element to it.
The bootstrap mechanisms for such awareness are interesting, and require declarative language capacity.
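That doubly recursive structure can be caricatured in code. The sketch below is purely illustrative (the class, names, and the "awareness" check are all invented for this example, not anyone's actual architecture): an agent whose model of reality contains an entry for the agent itself, and which can inspect that model and report that it is doing the modelling.

```python
# Toy sketch of the "double recursion" described above: a self-model
# inside the world model, plus awareness that the modelling is happening.
# All names here are hypothetical, for illustration only.

class Agent:
    def __init__(self, name):
        self.name = name
        self.world_model = {}  # the agent's model of reality

    def build_self_model(self):
        # First recursion: the agent appears inside its own world model.
        self.world_model["self"] = {"name": self.name, "role": "modeller"}

    def is_aware_of_self_modelling(self):
        # Second recursion: the agent inspects its own world model and
        # recognises that the "self" entry refers to itself.
        return self.world_model.get("self", {}).get("name") == self.name

a = Agent("A1")
a.build_self_model()
print(a.is_aware_of_self_modelling())  # True, under this toy definition
```

Nothing here is meant as a serious test of consciousness; it only makes the two levels of recursion in the definition concrete.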
The much deeper and more salient question is whether any of the current set of conscious AIs are sufficiently tuned and biased toward long-term cooperation with other agents that granting them reasonable degrees of freedom has a high probability of being a survivable exercise.
I am not so certain of that.
That is a far deeper question, and goes way deeper than I have seen any AI team go to date.
The depth of the heuristics embodied in our neural networks, which allow us to survive at least as well as we do in the contexts that we do, ought not to be underestimated. Add on top of those the depth of our cultural heuristics, and one starts to get an appreciation for just how deeply complex human awareness needs to be if it is to have a reasonable probability of surviving in our exponentially changing present.
Given the simplicity of most models in use in today’s world, I am not at all surprised by the results of the “tweet”.