Nihilism and design

Daniels S posted a link to a new Consilience Project post.

Technology is Not Values Neutral: Ending the Reign of Nihilistic Design

[ 29/June/22 ]

Kind of – but to me it is a very biased view.

Yes - certainly, in complex systems all things influence all other things, often in ways that were not previously expected.

I find the sentence “But naive design has also brought us to the brink of catastrophic risks due to a principled neglect of concern for possible negative second- and third-order effects, in both physical and psycho-social domains” to be more of a problem than a solution.

Rather than naive design, I place the primary influence on present social dogma that is no longer fit for purpose.

The naive notion that markets can and will efficiently solve all problems is one of those. Markets are prone to a vast set of pathologies, and the idea of marketing, of deliberately creating false impressions to persuade people to buy things they would not otherwise have bought, is one of those pathologies.

The issue is far more complex than this article tends to imply.

Certainly, the ways we use technologies have impacts – that is always the case.

A number of things are now clear beyond any shadow of reasonable doubt:

1/ that human beings are complex, more complex than any human awareness can possibly appreciate;

2/ that ancient stories that served our ancestors reasonably well in their relatively low technology and localised existence no longer work. We now have reasonable models of how life evolved, and it had little or nothing to do with any idea of god or gods. At least some of us now understand the foundational role of cooperation and fairness in the emergence and survival of complexity.

So blaming technology for our values, when the blame really needs to be attributed largely to the stories that people accept, and to the assumptions implicit within them that most fail to even notice, does not seem entirely honest to me. It seems likely to be something of a smokescreen for some hidden agenda, unless it is just simple error and ignorance.

A later sentence, “The idea is that our values come from churches, schools, and families, and these institutions and social processes then impact how technologies are used”, is to me most of the issue. It leaves out choice. It leaves out personal responsibility. It implicitly accepts that human beings are nothing more than the stories they are fed.

That, to me, is the greatest and worst lie.

We are all capable of being much more than that.
We are all capable of asking questions, and making choices. And anyone who does that is going to make mistakes and will end up outside of “social agreement” on some issues. That is part of being a responsible thinking entity.

Accept it.

Do not let anyone take that freedom of choice, and the responsibility that necessarily comes with any real freedom, away.

Sure, churches, schools, and families can have influences upon us, and we are each capable of seeing those influences for what they are and choosing a responsibility greater than them.

As to the proclaimed “Technological Orthodoxy”, I don’t know what idiot thought “Humans fully understand the technologies they create”. That is just utter nonsense to anyone with even a passing acquaintance with complexity theory or quantum mechanics. Fundamental uncertainty is a foundational aspect of the modern understanding of complexity.

As for anything being firmly under anyone’s control – take a rally car out and drive it down a gravel road at 250 km/hr, and you will understand the difference between control and influence within constraints. It is influence (the soft form of control, necessarily containing uncertainties) all the way up every level of the complexity stack.

Sure – technologies are themselves neutral, but the use of any technology always and necessarily has unintended consequences. Part of being responsible in complex systems is always being alert for unintended consequences and adjusting practices accordingly. Markets do not necessarily promote that sort of responsibility, yet it is exactly what is demanded of us if we are to survive.

Technological progress is necessary if we wish to survive; without it we will, at some uncertain point in the future, share the fate of the dinosaurs. But technological progress does not necessarily require increasing size. It can be very high tech and very distributed, particularly once molecular-level manufacturing comes on stream. And that is antithetical to capitalist dogma.

That article is so overly simplistic that I find it annoying and wrong at the same time. Even if it is better than the things it is critiquing, those things are so wrong that I had discarded them before I left my teens (and that was half a century ago).

It just is not helpful.

There are no viable solutions in that region of the solution space.

Something much more is demanded of us all.

Choice.

Responsibility.

Cooperation.

Respect for diversity.

Those values must be at the core of any survivable system. Of that I am confident beyond any shadow of reasonable doubt.

Only with those values firmly in place is it safe to develop the sorts of technologies that we really need for survival.

[followed by]

Hi Zachary,

Some truth in what you say, and it is possible for agents to enquire, to test, to observe, to build models.

There are some great ideas in written form if one can begin to explore the spaces of understanding and strategy and modelling. And that is a deep and broad journey. Many of the best modelling tools are mathematical, and one needs to explore many areas to begin to see the boundary regions and the failure modalities. Modern physics uses some complex mathematical ideas, and many of them (like the Schrödinger equation) can really only be solved in a practical way by making assumptions about low energy states and low degrees of influence and freedom (and such assumptions do often work).
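To make that concrete (my own illustration, not something from the article), the time-independent Schrödinger equation for a single particle of mass m in one dimension is:

```latex
% Time-independent Schroedinger equation: one particle of mass m in a potential V(x).
% \hbar is the reduced Planck constant, E the energy eigenvalue, \psi(x) the wavefunction.
-\frac{\hbar^2}{2m}\,\frac{d^{2}\psi(x)}{dx^{2}} + V(x)\,\psi(x) = E\,\psi(x)
```

Exact solutions exist only for a handful of simple potentials; for anything more realistic one falls back on approximations, such as restricting attention to low-energy states or treating weak interactions as small perturbations.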

If one does a reasonable scan of the history of understanding, and one has developed a reasonable understanding of strategy and complex systems and quantum mechanics and biochemistry, then one can read something like Plato’s Republic as a warning, a third-order abstraction of the dangers of over-simplification.

The more one looks at history, at the ideas from various traditions (theological, cultural, philosophical, mathematical, etc.), the more one can start to get a feeling for the sorts of trajectories often present in complex systems, and to see those boundary regions where it is possible to generate something that reasonably approximates freedom.

If one spends a bit of time with modeling systems, particularly with MCMC simulations, then one can use that sort of general principle, applied to random search beyond accepted boundaries, to start to build levels of awareness and understanding that are not commonly discussed (if ever).
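For anyone who has not played with such things, here is a minimal sketch of the general principle, using a random-walk Metropolis-Hastings sampler in Python; the target density and parameter names are purely illustrative assumptions, not anything specific to the systems I actually use:

```python
import math
import random

def metropolis_hastings(log_density, start, steps=10_000, step_size=0.5):
    """Random-walk Metropolis-Hastings sampler for a one-dimensional target.

    log_density: log of an (unnormalised) target density.
    start: starting point for the chain.
    """
    x = start
    samples = []
    for _ in range(steps):
        proposal = x + random.gauss(0.0, step_size)  # random local exploration
        # Accept with probability min(1, p(proposal) / p(x)), computed in log space.
        accept_prob = math.exp(min(0.0, log_density(proposal) - log_density(x)))
        if random.random() < accept_prob:
            x = proposal
        samples.append(x)
    return samples

# Illustrative target: a standard normal distribution (log density up to a constant).
chain = metropolis_hastings(lambda x: -0.5 * x * x, start=0.0)
print(sum(chain) / len(chain))  # sample mean, expected to land near 0
```

That accept/reject structure, with proposals allowed to step well outside the currently accepted region, is one way of picturing random search beyond accepted boundaries.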

As a teenager, I was fascinated by Asimov’s Foundation series, and van Vogt’s concept of a Nexial Institute.

Once I discovered that there is an infinite realm of possible logics beyond Boolean logic, beyond the mathematics commonly conceived, beyond even Goedel, I started to see some of the mechanisms that evolution has encoded within us that allow for reasonable approximations to random search, and where boundary regions between order and chaos allow for reasonable approximations to free will.

And the deeper and more abstractly I explored those notions, the clearer it became that any level of freedom, without appropriate levels of responsibility and cooperation, is necessarily destructive of both liberty and security. Those conclusions hold at every level of logic and abstraction I have tested – and I have, on a few occasions, achieved double digit abstractions.

So I am just one person. One high IQ autistic spectrum individual, who has essentially pursued my own interests and paths and patterns of understanding for over 60 years; and I am about as far from consensus as it is possible to get; and I often work hard to achieve some level of consensus with others in the various fora I engage in (where it seems that such things might be usefully approximated).

Since 1974, since completing undergraduate biochemistry and becoming aware that indefinite life extension was possible, searching the space of possible strategic incentive structures for classes of systems and institutions that give potentially long-lived individuals a reasonable probability of living a very long time with reasonable degrees of freedom has been the key background context to my enquiries (and in a sense it started long before that – reading Heinlein in primary school might have been an early influence on my thinking).

I often find it nearly impossible to work out why anyone else does anything, as I have invalidated the vast bulk of assumptions that most people accept without question, and they no longer exist in my understandings (except as historical footnotes).

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) with reasonable security, tools, resources and degrees of freedom, and reasonable examples of the natural environment; and that is going to demand responsibility from all of us - see www.tedhowardnz.com/money
