[ 12/June/22 ]
I agree in part with Aric and Ken and Matthew and Ron.
We need technology to solve many classes of well-characterized long term risk that cannot be addressed without very high technology; but the current forms of social organization, and many of the systems of understanding in common use, are not sufficiently advanced to allow the necessary technology to be deployed safely.
Like Ken, I have clear memories of when there were just over 3 billion people; of sitting on a hillside in October 1962, listening to the radio reports of the Cuban missile crisis, and starting to think deeply about the strategic context of the future of humanity. A lot more thought and a lot of investigation have happened in the intervening years.
Back then I held some things to be true.
Now all of my non-trivial understandings in respect of reality involve uncertainties and probabilities. Some particular logics and some particular mathematics allow for certainty within their restricted domain spaces, but Gödel showed that those spaces are limited, and not general.
It seems beyond reasonable doubt to me that the universe in which we exist contains multiple classes of fundamental uncertainties and unknowables. And there are many more classes of logic than the simple binary logic of True/False, with consequential strategic topologies.
It seems clear that most people have very simplistic models of most of that reality, and for most people those models contain various sets of Truths and Faiths that may not be challenged, which limits their ability to explore possibilities and comprehend uncertainties.
There are certainly multiple levels of selection pressure in evolved systems to minimize energy used in computation, and to minimize time to solutions of computations (pressure that is particularly strong under stress). We all have multiple levels of such biases in our neural networks, necessarily. Becoming aware of them all, and overcoming their shortcomings, takes a great deal of work, and few have the time or inclination to embark on that process.
So it seems beyond any remaining shadow of reasonable doubt that we all live in our own personal subconsciously generated models of whatever reality actually is, and all will be inaccurate to various degrees.
That happens at multiple levels, necessarily.
We all have strong biases to prefer simplicity over complexity, even when the complexity is real.
One of the limiting simple ideas is the idea that evolution is all about competition.
Competition is certainly an ever-present aspect of evolution; but when one looks deeply at the strategic contexts of the emergence and survival of new levels of complexity, it is true to say that the emergence of every new level of complexity is predicated upon, and sustained by, a new level of cooperation.
In this deep strategic sense it is far more accurate to say that the evolution of complexity is all about the emergence and stabilization of new levels of cooperation.
The dogma currently present in economic and political circles, that competition is fundamental to survival, is deeply wrong; and it poses existential risk to our species.
We must have competition. The competition of ideas is part of freedom in a sense, and we all need appropriate levels and degrees of freedom. Yet at any level, all-out competition destroys freedom, complexity and security.
At every level, there is a fundamental systemic strategic demand for cooperation if there is to be survival. Any system that fails to achieve that in reality will be eliminated by the process of natural selection in reality, given sufficient time and variations in context.
We, as a species, need to accept that we need fundamental cooperation at all levels of diversity.
At any level, every level, enforced hegemony is the polar opposite of freedom.
And also at every level, any level of freedom that fails to acknowledge the necessary boundary conditions required for the maintenance of complexity at that level is destructive of complexity.
Freedom without Responsibility and Cooperation necessarily self-terminates, at all levels!
So yes – we need high tech, in order to be able to mitigate many classes of well-characterized existential risk; but one of those classes is that technology without appropriate levels of cooperation and responsibility necessarily leads to destruction.
Current political and economic dogma does not contain adequate levels of either cooperation or responsibility.
That must change.
It must change quickly.
That is possible, and it requires building diverse sets of trust networks, so that required concepts can actually be transmitted through those networks as needed.
We all need appropriate degrees of both freedom and responsibility.
Central control removes both, and is not a survivable option (long term).
What is required, is cooperation and diversity.
When I look at life in the most abstract way possible, what I see is random search across the space of possible systems embodied at recursively more complex levels.
If we are to survive the unknown unknowns, then we must continue to search, eternally, and that search is not without risk. Yet if we are to have any possibility of long term existence, either as individuals or as a species, then we must continue to explore the unknown, for solutions to the already known risks as well as to risks of which we are currently ignorant.
This is the only way to minimize risk, and it requires accepting that risk is an eternal part of existence, and can only be minimized by eternal exploration. AI doesn’t change that: it will, like us, still be faced with eternal search of an infinite class of infinities. Though it can certainly search some classes far faster than we can, it will eternally remain true that for a fully loaded processor, random search is the most efficient search possible (recurse as deeply as you are able).
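The claim that no search strategy can beat random search when there is no prior structure to exploit can be sketched in a small simulation. This is my illustration, not anything from the discussion above: the names and setup are hypothetical, and it shows only the simplest finite case, where a target is hidden uniformly at random and a fixed systematic scan does no better, in expectation, than a random probe order.

```python
import random

def probes_until_found(order, target):
    """Count how many probes it takes to hit `target`, following `order`."""
    for i, cell in enumerate(order, start=1):
        if cell == target:
            return i

def mean_probes(strategy, n=50, trials=20000, seed=0):
    """Average probe count over many trials with a uniformly random target."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        target = rng.randrange(n)
        total += probes_until_found(strategy(rng, n), target)
    return total / trials

# Two strategies over an unstructured space of n cells:
sequential = lambda rng, n: range(n)               # fixed left-to-right scan
shuffled = lambda rng, n: rng.sample(range(n), n)  # random order, no repeats

# With no prior information, both average about (n + 1) / 2 probes.
print(mean_probes(sequential), mean_probes(shuffled))
```

With real structure in the space (gradients, correlations, known distributions) an informed strategy can of course do better; the point of the sketch is only the unstructured, no-prior case the text describes.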
So yes, we need high tech, we need responsibility, each to the best of our limited and fallible abilities; and we need fundamental change to economic and political systems such that they accept that cooperation to maximize security for all levels and classes of agent is a fundamental precondition for long term survival of complexity such as we are. We can build any levels of competition we like on that base, but that base has to be secure and sacrosanct.
Once we have that base, the technology to clean up the many levels of mess we have made is relatively trivial; but without that base, such technology would create more risk than it would solve.
Once we have that base, we need to accept that we need to close all of our loops of material usage. Recycling needs to become a part of all manufacturing and biological processes.
Our political systems need to be explicitly designed to distribute as much decision making and responsibility as possible to every level of agent present. Governance needs to be explicitly about identifying the boundaries of responsibility, and about setting up appropriate structures for exploration at and beyond the boundaries. There can be no hard boundaries in complex systems, such boundaries as must exist need to be eternally iterative and permeable and responsive to changes in context.
And we all need to accept that we are both individuals and social entities, and that both natures are essential parts of being human – eternally. Both come with demands for both freedom and responsibility, if they are to survive.
[followed by Ron replied “The question was: “To solve ecological problems, do we need more or less technology?”
Your answer was that we indeed need more technology, but we also need to change our economic and social systems.
I’m not saying that you’re wrong. But the last part is outside the scope of the original question, which is why I didn’t touch it.”]
Agreed – more technology is necessary but not sufficient.
[followed by Aric replied – if there was a major threat to humanity, say, a giant asteroid on a direct collision course with the earth, and the estimates are that if it hits, nobody would survive, how would that impact how countries behave?]
For me, the probability of such a thing is asymptotically approaching unity. The when of it is entirely uncertain.
For me, it is a risk that must be mitigated, if we are to have a truly long term future.
That requires some very high tech.
Technology of that power requires a universal acceptance of the need for cooperation in diversity that respects the life and liberty of all agents that are not a direct and unreasonable threat to the life or liberty of any other agent.
And liberty in this sense has to include a notion of responsibility across all of the identified classes of boundaries required for the levels of complexity present.
Developing such technology without an acceptance of responsibility at these recursive levels instantiates far more immediate risk than the risk one is attempting to mitigate.
Cooperation really is fundamental to survival – long term.
There really is no escape from that.
It is only some form of willful ignorance, usually perpetuated by some classes of agents upon other classes of agents for the short-term benefit of a few, that allows the current state of blindness to this fact of strategic existence. It is a form of suicide for short-term gain.