I once was asked to kill a pig in a piggery.
I had fed that pig many times, and it had seen many other people come in with guns, but I had never been in with a gun.
As I walked towards it with a gun in my hand, and looked it in the eyes, it started to squeal – more like a scream.
I killed it, because I promised the owner that I would, and I have seriously wondered ever since just how aware of its own mortality that animal was. I strongly suspect that it knew it was going to die and it didn’t want to die.
I have been a hunter and a fisherman and a zoologist and a life-long sceptic.
For most animals I have killed, I have been very confident (well over 99%) that they had no idea what was coming. That one, however, I am well over 90% confident knew.
The neuroscience is clear: our brains have more pattern recognisers, but there is no fundamental difference between us and other mammals. In some circumstances, they can develop awareness. It is just a matter of probabilities.
We have no definitive way at present to establish either sentience or sapience.
We have the Turing test – and the famous (in AI circles) bet between Mitch Kapor and Ray Kurzweil that a machine will have passed it by 2029 (Ray for, Mitch against). http://longbets.org/1/
We have some fairly strong ideas of the general classes of processes which contribute to the emergence of awareness, but much less knowledge of the detail of the algorithms and the subtlety of their interaction. Minsky probably has the greatest awareness of the general classes of systems necessary, and Kurzweil probably has the greatest knowledge of specific algorithms involved.
I’m just an intuitive generalist who has known how to make an AI since 1974, but who has been confident for that same period that doing so would be a very stupid thing to do, given the state of human organisational structures (politics, ethics, morality, and culture more generally).
And it seems to me that we are starting to wake from our slumber.