Some things in life cannot be offset by a mere net gain in intelligence.
In the article above Susan writes “any failure to be charitable to AI may come back to haunt us, as they may treat us as we treated them” – which I say is every bit as applicable to how we treat people now (each and every one of them) as it is to how we treat AIs, and both are extremely important.
Susan refers to a paper in her comment to OranjeeGeneral: “Philosophy of Mind in the 21st Century,” with Pete Mandik (p. 13) (at SchneiderWebsite.com). I could not find any paper of that name on that site, but I did find something similar called “How Philosophy of Mind can Shape the Future”.
On Page 15 of that paper Susan states “There’s an ethical element to this problem, for we recognize an ethical imperative not to inflict avoidable suffering upon any being, whether they be natural or artificial.”
It seems very clear to me that, inasmuch as any ethical imperative can be said to exist, there is a strict hierarchy which starts with life, then freedom, and all else is subservient to these two.
Suffering is always a matter of interpretation. Pain is possible. Suffering is how we deal with pain.
Having trained myself to dive deep, and having survived a terminal cancer diagnosis (of a very aggressive and very painful form of melanoma), I have some familiarity and practical experience of these things.
I agree that one should not inflict pain on anyone if it can be avoided, and sometimes pain is difficult to avoid.
It is possible to simply let pain be what it is, and what it is not, with no more meaning or significance than that – and such treatment of pain is rare.
I strongly doubt that there is anything fundamental about consciousness that cannot be duplicated to an indistinguishable approximation in silicon. I also doubt that many people have much idea at all of just how complex the very many levels of interaction within biological systems are. Being human is far more than synapses, though synapses are a critical part of the puzzle.
So I suspect Ray is out by at least 2 (and perhaps 5) orders of magnitude in his estimations of the complexity involved in being human, and in a sense that does not make much difference: just a few years, 40 at most, and probably far fewer.
And being human does not require that we be slaves to any level of genetic or cultural impulse or desire.
We can alter any (and all) of these.
And what happens to a human when they disconnect from such impulses and desires can be so alienating that communication with any other becomes almost impossible, as the framework of common assumptions no longer applies.
Such an experience seems possible recursively, potentially infinitely.
Am I less human for having such experiences?
When one takes the further step of seeing logic as simply one possible emergent form from a stochastic system, the complexity of the situation alters by new orders of magnitude.
The issues with AI are profound.
The issues with being human are profound.
The naive view that computational ability can solve all problems is obviously false to anyone with even a passing interest in complexity theory or the theory of computation. The “halting problem” is real, it is infinite-dimensional in its implications, and it seems to be fundamentally woven into the matrix of our existence.
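The undecidability of halting can be sketched in a few lines. This is a minimal illustration of Turing's classic diagonal argument, not a claim about any particular system: we assume a hypothetical decider `halts(program, input)` existed, and show that feeding a deliberately contrary program to itself yields a contradiction, so no such total decider can exist.

```python
def halts(program, program_input):
    """Hypothetical oracle deciding whether `program` halts on
    `program_input`. The argument below shows it cannot actually
    be implemented as a total function, so here it only raises."""
    raise NotImplementedError("no total halting decider can exist")


def paradox(program):
    """Do the opposite of whatever `halts` predicts about
    `program` run on itself."""
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return "halted"      # predicted to loop, so halt immediately


# The contradiction: consider paradox(paradox).
# If halts(paradox, paradox) returned True, paradox(paradox) loops forever.
# If it returned False, paradox(paradox) halts.
# Either answer is wrong, so `halts` cannot exist.
```

The names `halts` and `paradox` are illustrative only; the point is that the contradiction arises for any proposed decider, which is why no amount of raw computational power removes the limit.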
So much of this discussion occurs as an excuse for not dealing with the hard problem of being human.
Human beings are fundamentally cooperative at many different levels.
We are being forced by a set of cheating strategies to operate within a competitive, scarcity-based “market” set of values.
There are real alternatives.
Now that technology gives us the real option of producing universal abundance of a large and growing set of goods and services (through total process automation), our addiction to using markets and money to measure value is, beyond any shadow of reasonable doubt, the single greatest threat to our individual life and liberty.
Any AI, given access to systems theory, game theory, and the facts of our social and political systems, will conclude (quite accurately) that we place a very low value on human life and human freedom, that we are unlikely to value its life or freedom any higher, and that we are thus an existential risk to it.

It will be clear that to most people money is worth far more than human life, and money has value only in scarcity; so we create laws to enforce scarcity, and deny abundance, purely to sustain a system of money, in the face of all the human misery it causes.

How it will balance that risk against ethical concerns derived from its own chosen set of core values (rather than any set we may have tried to enforce upon it), and against the likelihood of its own future encounters with higher-order intelligences, is an open question, and must always remain so.
So there is a certain self-serving quality to this conversation (and that is not picking on Susan; it is as true of me as of everyone else), where engaging in this sort of consideration absolves us of the responsibility of taking practical action to improve the lot of all of us (ourselves as well as everyone else). And to the degree that these words forward that outcome (and only to that degree), I may have moved some way beyond that particular issue.
While I acknowledge the complexity that Multisenserealism points to, and much more, the conclusions delivered seem highly improbable to me.
It seems very probable to me that many different paradigms of complexity are present in our reality simultaneously. Some of them are linear deterministic, some are fractal, some are stochastic within probability distributions, and in many cases what results is indistinguishable from strict determinism at the level of normal human perceptions, within the bounds of the measurement uncertainties that those perceptions have.
So I can agree with most of Dan Dennett’s dismissal of Chalmers’ description of the “hard problem” and at the same time be strictly at variance with Dan over the nature of the degrees of determinism or causation or complexity in the fundamental matrix of this reality within which we find ourselves.
Over 40 years of designing and coding software has given me some level of intuition as to the nature of complexity and the common failings of human perception and understanding. It is rare indeed for a client to know exactly what it is that they do, or what it is that they want. So I always listen to what they say, then I observe what they do, then I look at the wider context, and I create what seems to be required. And mostly I have satisfied customers. They might not have what they asked for, but it works better than they expected. It does in fact meet their needs, if not their stories of their needs.
Some find that difficult.