Hi Dil & team,
I have a suite of issues with the proposed ethic of “[i]Do what you wish, unless of course you experience some doubt about the ethics of what you intend. In this case, consult the public Ethical Framework and see what it has to offer in the way of considerations you might bring to bear in resolving your concerns.[/i]”
It is not responsible.
The universe is not devoid of rules.
Gravity exists, as do the other three fundamental forces, along with all the systems that evolve from them and the fundamental uncertainties present.
Evolution seems to have assembled us by a process of differential survival.
What few people seem to get is that every life form alive today (from viruses to us) is equally the result of that same evolutionary process – just that the contexts and timings of events have been different in the life histories of the different populations.
Evolution hasn’t preferentially selected us.
Evolution has selected all life now living – equally.
It is now clear, beyond any shadow of reasonable doubt, that we human individuals are very complex entities, involving the instantiation of some 20 levels of cooperative systems (both hardware and software).
Within each level, and between each level, systems influence each other, and have operational limits beyond which they cease to retain coherence (and we become unconscious or die).
This seems to be the physical reality of our existence.
Yes – there certainly is a sense in which we can act in any way we choose, and there is also a very real sense in which most possible actions have negative existential outcomes (lead to death in the short to medium term).
If one is choosing to optimise anything at all, and one is in a situation where technology seems to be delivering more powerful tools, and greater opportunities and greater security with time, then whatever it is one is trying to optimise, one is more likely to do so by continuing to exist. Thus existence, continued life, is foundational to all other things.
Thus one can derive an ethical foundation that is below choice, which is existence itself.
Thus – one gets to the ethical premise:
1/ value individual sapient life (human and non-human, biological and non-biological), and take all reasonable steps to mitigate any unreasonable risk to any sapient entity.
2/ value the individual freedom of all sapient entities, provided such expressions of freedom do not pose an unreasonable risk to the life or liberty of any.
Given that our existence as thinking entities with language is predicated on developing language and technology (and all other such abstract modes of thinking and or communication as we manage to instantiate) in social and ecological contexts, then our existence in such social and ecological contexts demands of us a level of responsible action in these contexts.
The evidence is clear (beyond any shadow of reasonable doubt) that there are not, nor can there be, any hard and fast rule sets that work in all contexts.
The evidence is that at all levels there exists fundamental uncertainty, randomness, and novelty.
We exist in an open complex system, which has components that are far from equilibrium, and are infinitely extensible into novel territory.
So promoting the idea of simply “doing what you wish” is not responsible, not ethical, and not sustainable.
Sure we all have our wishes.
Sure, there are no absolute standards of anything (other than existence itself, whatever that may be), and no guarantees of it continuing.
And the evidence is beyond all reasonable doubt, that the default systems we inherit from genetics and culture have been tuned by survival of our ancestors to the conditions of our past, and are not necessarily applicable to our exponentially changing present and future. So simply following our default “feelings” is not any sort of guarantee of survival.
And there is a really complex balance here.
None of us has perfect information.
All of us, however deep and abstract our understandings, have those understandings based in necessarily simplistic heuristics (reality is far too complex for any human mind to deal with in anything other than contextually useful simplifying heuristics).
So it doesn’t pay for any of us to get too hubristic, and be too overconfident about what is possible and what isn’t, what is safe and what isn’t.
We all have to be conscious of the very real uncertainties present – always.
We need to be prepared to have conversations about risk, and not necessarily accept any particular set of rules as being contextually relevant, nor dismiss them lightly either – an art, and a responsibility, is required.
So there must always be an aspect of art, and an aspect of “best guess” to all our decisions – however much logic and computation and systems knowledge we employ.
Sure, we have to leave the absolute certainty that our ancestors craved in history, and accept profound uncertainty as our constant companion.
And accepting that uncertainty does not absolve us from a moral responsibility to care for life and liberty.
Ethics is, always has been, always will be, a whole lot more than simply “do what you wish”, and what we wish has to be a very important part of it; along with the existence of ourselves and others, and all the uncertainties, unknowns, and unknowables of existence.
Ethics must involve an aspect of social and ecological responsibility in all choices.
The more we know, the more we know we don’t know, and the less confident we become of the things we were once absolutely certain of.
The thing I am most confident of, is that the lack of absolutes and the presence of uncertainties does not mean that “anything goes”. If “Ethics” has any meaning at all, it means that we need to be even more responsible in making our best guesses at the likely long term outcomes of our choices; and we need to be responsible for the likely social and ecological consequences of those choices (short, medium and long term).
No system of rules can do that. It will always involve individual choice, individual responsibility; and that is a very different thing from an unqualified “what you wish”.
Hi Dil, Cedrick and team,
Interesting points both.
I spent an hour yesterday in a Skype chat with Daniel Schmachtenberger. We both have a long history of enquiry into existential risk and effective risk mitigation strategies, and the discussion got quite abstract quite quickly.
We both agreed on the general thesis that, given that narrow(ish) artificial intelligence has now beaten the best human player at Go, any competitive system that can have an outcome specified can now be gamed more efficiently by AI than by any human “player”. This applies to all existing financial, political and “sales” (advertising) systems (abstracted to any level desired).
Apparently there have already been active agent-model scenarios run that show an existential-threat-level outcome within a rather short horizon (less than two decades) for all competitive scenarios.
In a sense that doesn’t surprise me, though I am surprised that someone has already done the modelling and run it (median 12 years).
I have been consistent about the threats from competitive systems for some time.
My major focus has been on effective transition strategies, and the urgency of their implementation.
I hadn’t realised just quite how urgent that is.
From a game-theoretic perspective, Ostrom et al have catalogued a suite of examples of stable systems in practice; and while I find her eight principles insufficient, they do point in the general direction of something.
I have been thinking about and investigating existential risk and risk mitigation scenarios (with all the epistemological and ontological intertwinings therein) for a little over 55 years.
Crowley, particularly his intersection with Machiavelli, has been a concern, and there appears to be only one semi-stable solution: “eternal vigilance”.
I am clear, beyond any shadow of reasonable doubt, that our survival as a species requires the adoption of a cooperative framework that is strategically structured in a way that is not gameable or capturable by any single entity. With the instantiation of narrow superhuman AI, any primary competitive game poses existential level risk.
We are a cooperative species.
A new level of strategic cooperation is required.
It cannot be naive cooperation, which is capturable.
Recurse this structure as far as one chooses (potentially infinitely).
It is clear that one cannot use rule based systems based in any approximation of lowest common denominator, as that imposes unacceptable risk in and of itself.
So we are, as Jordan Peterson so masterfully shows, back on the eternal boundary between order and chaos. As a society we have gone too far into the domain of order and that order itself now poses immediate and immense existential level risk.
All of the points that Dil and Cedrick raise are real and valid, and yet they pale in the face of the current existential level threat posed by competitive systems and superhuman narrow AI.
We need to have systems that do in fact work to mitigate the risks from the twin tyrannies (majority and minority), that empower both individuals and communities – and that is a non-trivial issue in terms of strategy and complexity.
It seems beyond reasonable doubt that viable solutions will involve trust, openness, integrity and reasonableness; and there must be real systemic solutions to the real systemic risks.
Transition strategies are also going to be a major issue.
It was great to see Ray Kurzweil coming out in favour of a Universal Basic Income two months ago, and I can only ever see that as a semi-stable transition strategy with a relatively short life.
We definitely need to go far beyond that, and David Snowden has many important insights in that area, as does Jordan Peterson; to my taste both stray a bit too far in the direction of religion, though I can certainly understand why that is so.
The sorts of “implicit heuristic knowledge” that are deeply encoded in the genetic and cultural aspects of our being need to be seen as what they are – heuristics that worked in our past, and are not necessarily relevant to our future.
We ignore them at our peril.
We are at peril if we rely upon them too heavily.
There is a very deep, and very delicate balance that we must achieve, and soon.
And it is one that must be continually instantiated; it is not something that one can set and forget (that is something that Friedman and the rule-based guys got terribly wrong).
So yeah – interesting times.
And I come back to the simplest possible formulation:
Life and liberty, in a context of uncertainty and humility and responsibility and non-naive cooperation.
A naive solution is any attempt at cooperation that does not include active (open) strategies for the detection of cheats, with sufficient power to remove all advantage gained by cheating, plus a little bit more (but not too much more – a fairly narrow band there that actually encourages individuals to rejoin the cooperative).
The really difficult problems then become around distinguishing real novelty from cheating, as they may be indistinguishable from the old paradigm, and require transition to the new paradigm for evaluation – and that can get highly recursive and have hidden risks.
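The calibrated-penalty idea above can be sketched as a toy payoff model. All of the numbers here (payoffs, cheat bonus, penalty margin) are illustrative assumptions of mine, not values from the discussion; the point is only to show the narrow band where a detected cheat ends up slightly worse off than a cooperator, so rejoining the cooperative pays:

```python
# Toy sketch of "non-naive cooperation": cheat detection plus a penalty
# that removes all the advantage gained by cheating, plus a small margin.
# All numeric values are illustrative assumptions, not from the original post.

COOPERATE_PAYOFF = 3.0   # what a cooperator earns in one round
CHEAT_BONUS = 2.0        # extra gain a cheat extracts in one round
PENALTY_MARGIN = 0.5     # "a little bit more, but not too much more"

def round_payoff(action: str, detected: bool) -> float:
    """Payoff for one agent in one round of the toy game."""
    if action == "cooperate":
        return COOPERATE_PAYOFF
    payoff = COOPERATE_PAYOFF + CHEAT_BONUS
    if detected:
        # Remove all advantage gained by cheating, plus the small margin,
        # leaving the cheat slightly worse off than a cooperator.
        payoff -= CHEAT_BONUS + PENALTY_MARGIN
    return payoff

print(round_payoff("cooperate", detected=True))   # 3.0
print(round_payoff("cheat", detected=True))       # 2.5
print(round_payoff("cheat", detected=False))      # 5.0
```

With reliable (open) detection, cheating earns strictly less than cooperating, but only just; a much harsher penalty would push the cheat out of the cooperative entirely, which is why the band described above is narrow. Undetected cheating still pays, which is why detection has to be active and open.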