Character of Technology – nature of risk

The Character of Our Technology

“We can’t influence the direction of technology, but we can influence its character,” says Kevin Kelly

Excellent piece John, but from my perspective it does not explicitly address the three major areas of concern:
1/ identification of core values;
2/ understanding of systems and complexity;
3/ identification and mitigation of risk.

All three are essential and tightly coupled.

1/ core values

For me, core values are simple:
Individual sapient life, and individual liberty, within responsible social and ecological contexts, applied universally.

In this sense a sapient individual is any individual (human or non-human, biological or non-biological) capable of modeling itself as an actor within its model of reality, and of using language to express abstract ideas.

Liberty in this sense is not a freedom to follow whim alone, but also involves a responsibility to assess the reasonably foreseeable consequences of actions and to avoid, remedy or mitigate any unreasonable risk to the life or liberty of any other entity – applied universally. And any such assessment of risk will be a matter of probabilities, and will involve a test of reasonableness.
More on this and its relationship to the other two areas soon.

2/ understanding systems and complexity.

This also comes in three parts: uncertainty, boundaries, and responses.
A modern understanding of systems and complexity is very different from classical notions of knowledge.
The classical idea that truth may be known, and that knowing it is the object of human endeavor, now seems beyond any reasonable doubt to have been disproved.

We now understand many different sources of uncertainty, from simple error, to measurement uncertainty, to quantum uncertainty, to Gödel incompleteness, to maximal computational complexity, to irrational numbers, to chaos, to the fully random, to infinite classes of computational and algorithmic spaces.

Anyone who has seriously looked at the nature of complex systems, and at understanding itself, must admit of infinite classes of uncertainty and unknowability, which reduce all understandings to matters of context-sensitive confidence.
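
To make just one of those sources concrete, here is a toy sketch in Python (my own illustration, with arbitrary numbers) of deterministic chaos: in the logistic map, two trajectories starting one part in a billion apart diverge completely within a few dozen steps, so even a perfect model paired with a tiny measurement error loses all predictive power.

```python
# Toy illustration of deterministic chaos (logistic map, r = 4).
# Two starting points that differ by one part in a billion end up
# completely decorrelated, so measurement error alone defeats
# long-range prediction even though the rule itself is exact.

def logistic(x, r=4.0):
    """One step of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

x_a, x_b = 0.400000000, 0.400000001  # initial gap of 1e-9

for step in range(1, 51):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={x_a:.6f}  b={x_b:.6f}  gap={abs(x_a - x_b):.6f}")
```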

So the first part is to admit of uncertainty, always – a touch of humility is required.

Second part is to understand the necessity of boundaries.

Any form requires boundaries or relationships for its survival (depending on how one looks at it, they are equivalent notions, just expressed differently).

Without such boundaries or relationships, everything decays to uniform randomness.

Any form of complexity has a minimum set of boundaries (or expressed differently a minimum set of relationships) that are required for its existence.
Maintenance of that set of boundaries, and no more, is a matter of existence.

Freedom must include the notion of being responsible for the maintenance of those boundaries that are actually necessary – and that will involve occasional testing as contexts change.

We human beings seem to be very complex evolved systems, embodying about 20 levels of very complex sets of cooperative systems.

Evolution, in this sense, is generally very poorly understood.

It now seems clear that the evolution of new levels of complexity requires new levels of cooperation; and raw cooperation is always vulnerable to exploitation, thus requiring sets of attendant strategies to detect and remove cheating strategies – all levels, recursively applied. This leads to something of a potentially eternal strategic arms race, through infinite levels of complexity. The price of liberty is eternal vigilance.
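
A toy sketch of that point in code (my own illustration; the payoffs are the standard textbook values for a repeated prisoner's dilemma, not anything specific to the argument above): an unconditional cooperator is stripped bare by a defector, while a cooperator that detects and answers cheating (tit-for-tat) protects most of the cooperative payoff.

```python
# Sketch: raw cooperation is exploitable; cooperation plus cheat detection is not.
# Repeated prisoner's dilemma payoffs (per round): T=5 > R=3 > P=1 > S=0.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(opponent_history):
    return "C"

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move (cheat detection).
    return "C" if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each strategy is shown the *other* side's history
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print("naive cooperator vs defector:", play(always_cooperate, always_defect))
print("tit-for-tat vs defector:     ", play(tit_for_tat, always_defect))
print("tit-for-tat vs tit-for-tat:  ", play(tit_for_tat, tit_for_tat))
```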

The common notion that evolution is all about competition is false.

Competition and cooperation are both aspects of evolution, and broadly speaking, competition results in simplicity, while cooperation allows for the emergence of complexity. Which tends to dominate is a matter of where the dominant source of risk resides. If it is within the group, then competition trending to simplicity tends to result; if the dominant risk comes from outside the group, and cooperation can mitigate it, then cooperation tends to spawn new levels of complexity.

This leads into the third aspect of complexity, the sorts of management responses that are appropriate in different classes of complexity. This is an infinitely complex topic, and David Snowden’s Cynefin framework for the management of complexity ( https://en.wikipedia.org/wiki/Cynefin_framework ) is the best simplification of that complexity that I am aware of.
The more complex the systems, the more flexible our management responses need to be.

3/ Identification and mitigation of risk.

When faced with potentially infinite complexity, risk is always present, and the unknown always exceeds the known – eternally.

Being overconfident leads to operational simplicity, right up to the point of being overwhelmed and driven extinct by a risk that was not detected and mitigated. This is the risk that the Amish ignore.

Our explorations of geology and cosmology and biology have already identified sources of risk that cannot be mitigated with existing technologies, like comet and meteor strikes, supervolcanoes, extreme classes of solar flares, some classes of viruses, etc. So, in our explorations of technologies to create effective mitigation strategies for those known risks, we inevitably create new sources of risk we did not previously have to deal with.

That too would appear to be a recursively consistent aspect of reality.

So once again, nothing is certain, except that ignoring known sources of risk is not a mitigation strategy, and will most likely lead to extinction.

So, considering all of that – where does that leave us with finance?

Here, one needs to consider deeply the abstract relationships that are embodied in money.

Money is an abstract measure of value.
It works only because we believe it will work.
It is based in trust and belief.

Markets are reasonable tools for measuring the value of items that are genuinely scarce, but fail to assign any value to items that are universally abundant.

When most things were genuinely scarce, that meant markets delivered a really useful measure of value.

Now that fully automated systems can deliver any information product, and many material products, in universal abundance, markets cannot measure their value.
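
A toy numerical sketch of that claim (invented numbers, purely for illustration, not data from anywhere): if a competitive market prices a good at its marginal cost, then as that cost falls toward zero the market-measured value falls toward zero too, even though the benefit actually delivered to people stays large.

```python
# Toy illustration (invented numbers): market-measured value vs. benefit delivered
# as the marginal cost of supplying one more copy falls toward zero.

def market_snapshot(marginal_cost, benefit_per_user=2.0, population=1_000_000):
    """Assume a competitive market prices at marginal cost, and that every person
    for whom benefit >= price takes a copy. Returns (price, revenue, total_benefit)."""
    price = marginal_cost
    users = population if benefit_per_user >= price else 0
    revenue = price * users                   # what the market "sees"
    total_benefit = benefit_per_user * users  # what people actually gain
    return price, revenue, total_benefit

for cost in (1.0, 0.1, 0.01, 0.0):
    price, revenue, benefit = market_snapshot(cost)
    print(f"marginal cost ${cost:>5.2f}: price ${price:.2f}, "
          f"market value ${revenue:>12,.2f}, benefit delivered ${benefit:>12,.2f}")
```

A measured market value of zero does not mean zero value to the people using the thing; that gap is exactly what the price mechanism cannot see.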

The response to date has been to create artificial barriers to such abundance, to create/maintain their value in markets.

So, in order to make money, we have laws that deny the majority access to that which could be available to them at close to zero marginal cost. All of our IP laws – copyright, patents – and most of our health, safety and certification laws are essentially present to create market value where none would otherwise exist. They are present to prop up a system that is past its “use by” date.

Fully automated systems are absolutely required to mitigate many of the known existential level risks.

Fully automated systems could deliver all the reasonable needs of life and liberty to every person on the planet, but don’t because of the “market incentives” present.

Markets cannot deliver a positive value for universal abundance, yet abundance is a positive value for most humans in most contexts (sugar, and many stimulant drugs being obvious exceptions; air and water the obvious positive examples – and it is always possible to have too much of a good thing).

And markets have performed many other very useful functions, other than simply measuring value and mediating exchange: complex functions involving distributed information transfer, distributed information processing, distributed governance, distributed risk mitigation, the interlinking of distributed trust networks, etc.

These are very complex, multi-level and essential functions.
They can be done with other mechanisms, and those other mechanisms need to be actively developed, tested and deployed.

So it seems that markets have now moved out of the territory they once occupied, of being intimately linked to life and liberty, and, with the changing context of the exponential development of fully automated systems, are now the single greatest source of existential risk to humanity as a whole.

How we plan the safe transition away from market-based systems, to distributed systems of trust, governance, and risk management, is the great question of our age.
Universal Basic Income seems to be a useful part of an intermediary transition strategy.

Getting people to see that the things we have traditionally associated with markets – the things that have supported life and liberty – are not actually attributes of markets themselves, but are merely traditionally associated with them, is not an easy task.

Reality is far too complex for any human mind to deal with in its entirety.

All of us have to make simplifying assumptions.

The simplification that markets equal liberty worked in the past, but for all the reasons outlined above it is failing now, and the rate of failure is increasing exponentially.

That failure will be hard for many to see.
Yet the benefits of that failure far outweigh the costs.

The technology that creates the failure of markets allows us to address and mitigate existential level risks that markets cannot, ever.

So we are in a time of profound change.
All change has real risk associated with it, at all levels.

And ignorance of risk is not a risk mitigation strategy, though it is an anxiety mitigation strategy.

Never confuse anxiety with risk!

The horses of the Amish will never offer an effective mitigation strategy to super-volcanoes, comet strike, ice age, or super flare. Fully automated systems can, for those and many others.

[followed by]

Hi John,

As usual, we agree far more than not.

We are certainly in a mixed mode at present.

As stated, any information product could be delivered universally today, but isn’t, because of market incentives.
The human cost of that is huge, and growing.

Certainly, some things will always be scarce – originals, some heavy elements.
And with a little creativity, we can create alternatives that are functionally indistinguishable in most contexts.

A longer discussion is certainly warranted – and we have been in this discussion for a few years now.

And yes – markets, money, ideas, and sets of relationships can be thought of as technologies, which will have different impacts (risks, benefits) in different contexts. Choices of interpretive schema are as important as any physical context.

I spent last night in a little hut in a colony of Hutton’s Shearwaters, waiting for two birds with GPS and depth loggers attached to return to their chicks so I could recover the machinery and the information therein. As only two birds are left to recapture, I made up a little alarm circuit with trip wires across their burrows. The tech worked perfectly, waking me when a bird did enter the burrow – but it was not one of the birds I wanted; it was their mate. So I slept on my high-tech ultralight airbed, in some of the most amazing scenery on the planet.
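
Purely as an illustrative sketch of the trip-wire idea (not the circuit that was actually in the field – the Raspberry Pi, the RPi.GPIO library, the pin numbers and the buzzer here are all assumptions for illustration), an equivalent rig in software might look something like this:

```python
# Hypothetical sketch only: a burrow trip-wire alarm on a Raspberry Pi.
# The board, pins, wiring and buzzer are assumed for illustration; the
# field version was a simple standalone alarm circuit.
import time
import RPi.GPIO as GPIO

TRIP_PIN = 17    # trip wire pulls this input low when a bird enters the burrow
BUZZER_PIN = 27  # output driving a small buzzer to wake the observer

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIP_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(BUZZER_PIN, GPIO.OUT, initial=GPIO.LOW)

try:
    # Block until the trip wire is disturbed, then sound the buzzer for a while.
    GPIO.wait_for_edge(TRIP_PIN, GPIO.FALLING)
    GPIO.output(BUZZER_PIN, GPIO.HIGH)
    time.sleep(30)  # long enough to wake a dozing observer
    GPIO.output(BUZZER_PIN, GPIO.LOW)
finally:
    GPIO.cleanup()
```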

The technology available to us now is amazing.
We can put gadgets on these little birds (the birds are only half a pound each) that tell us where they go, and how deep they dive.
And we now know that they can travel 400 miles over 3 days at sea, diving over 100ft down, hundreds of times, to bring back 2 oz of food to their growing chick.

We are starting to understand so much more about biology, about the complexity of the connections of different species.
The phosphates these birds bring back to their mountain burrows are a major source of the productivity of the ecosystems in these mountains.

We humans need to stop thinking mostly about money, and start looking very closely at the key factors that actually keep us alive – particularly the nutrient and energy flows through the systems (not just our human economic systems, but the wider systems within which we are embedded).
No money in it, and our survival is at stake.

In order to maintain cooperation, we require abundance – that is game theory 101 in a very real sense.
Driving systems to their limits for short-term economic gain does pose very real existential-level threats from a systems perspective (if those systems collapse and destroy the fragile abundance that is keeping our society as peaceful as it is).

We cannot afford another major conflict.

We need a new level of global cooperation – universal, without cheats.

We need global abundance.

Any centralised system poses too much risk – so we must have decentralised and massively redundant systems (like biology does).

If we don’t take the big picture view – the results are not going to be pretty.

And I am confident that we can do it, though right now it is a 70/30 thing, not the 99.99999% thing that I would like it to be.

