Foundations of Logic – rights

Mark Hubey’s post in Foundations of Logic

If x has the right to free food
whose duty is it to provide it?

This is a really complex topic.

The idea that rights exist independent of responsibilities seems to pose an existential-level risk to humanity.

Complex systems require boundaries of some sort to support their complexity.

When one looks closely at the probable evolution of complex systems like ourselves, it becomes clear that cooperation is essential to the emergence of all new levels of complexity. Cooperation is always vulnerable to exploitation, and thus requires secondary strategies to detect and remove cheating strategies.

Reality is complex.
Our relationship to it is changing, exponentially.

For most of recorded human history we have needed human labour to produce most of the goods and services that we enjoy.

For the last few hundred years we have enjoyed increasing security and diversity of goods due to two major trends: more efficient machines, and specialization. When most things were genuinely scarce, people were needed to work, and extra levels of specialization provided extra goods and services, markets provided a reasonable method of distributing incentives. They also mitigated many levels of risk by distributing the essential functions of computation, cognition, governance, risk assessment and risk mitigation.

We are now developing technologies that allow for the full automation of any level of specialization of knowledge and production. These abilities fundamentally alter the dynamics of the systems that once arguably served us reasonably well.

So now, many of the heuristics of our past simply don’t work. The idea that “there is no such thing as a free lunch” (TINSTAAFL) doesn’t work when all of the processes involved are automated.

Such broad abundance has no historical precedent.

Abundance has been there for us all, but only in a limited sense, and we tend not to notice it. We all breathe. The air is almost always there. We don’t pay for it.
Air is arguably the most important thing to every one of us.
It is free.
For everyone.

We are now in the very complex territory of:
Given that we can automate anything, we can supply food to everyone, without requiring the time of anyone else.
And everything has consequences.
No action is free of consequences.

If we supply everyone with food and healthcare, and they keep breeding, then we run out of resources quite quickly.
We have a little bit of time to educate, but not a lot.

If we are to extend lifespans indefinitely (now a real possibility), and provide every person with the resources to do whatever they responsibly choose, then every individual needs to be responsible.
We need to limit family sizes to one child per couple – on average – which, with indefinite lifespans, still leads to roughly a doubling of our existing population.
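
A quick back-of-the-envelope sketch of that arithmetic, purely illustrative (it assumes indefinite lifespans, i.e. nobody dies, and exactly one child per couple on average):

# Illustrative only: population under one child per couple, assuming
# indefinite lifespans (nobody dies). Each generation is half the size
# of the one before, so the total converges on the geometric series
# N * (1 + 1/2 + 1/4 + ...) = 2N, i.e. roughly a doubling.

def eventual_population(current, generations=60):
    total, cohort = 0.0, float(current)
    for _ in range(generations):
        total += cohort   # everyone in this cohort is still alive
        cohort /= 2       # one child per couple halves the next cohort
    return total

print(eventual_population(8e9))  # ~1.6e10 – about double today's ~8 billion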

So the idea that the right to free food imposes a duty on anyone to provide it isn’t actually accurate.

And the idea that we all need to be responsible for our impacts on the ecosystems and cultures that we are part of is the essential flip side of freedom, if it is to have any reasonable probability of surviving for very long in the sort of reality we seem to exist in. And that is an extremely complex set of systems, deeply cooperative, deeply complex.

So for me, the idea that anyone need have a duty to provide is an invalid assumption, equivalent to saying someone has to supply the air that any one of us breathes.

We are capable of designing and implementing systems that require no further human input to continue producing their outputs.

The only reason for scarcity of the outputs of such systems is trying to maintain the scarcity based value system of markets.
We can design abundance based systems that perform all of the essential distributed functions outlined above, but we need to do so.

So we are in an interesting transition zone in the systemic landscape, one that very few people are yet conscious of.

[followed by]

Hi Pawel and Dirk,

Oversimplification and focus on control are part of the problem, certainly, and only a small part of a much deeper set of issues.

Yes feedback is important. Certainly many of our systems require levels of structure that are maintained by groups of feedback systems. That is true, and it is an essential part of a much more complex picture, as we are very complex sets of systems. Most who have investigated the subject are attached to certainty, and have difficulty accepting the idea that uncertainty might be a fundamental aspect of systems.

@Dirk –
This is a really complex topic because it delves deeply into the nature of cognition, the evolution of complexity, the many levels of the systemic nature of us. At many levels we are instantiated sets of complex adaptive systems, all interacting.

Every level of complex systems requires a set of boundary conditions to maintain the structure that is present, even as that structure is capable of exploring ever expanding “territories”. There is a fundamental tension necessarily present between maintaining enough structure to sustain the pattern present, and being sufficiently flexible to meet new challenges and opportunities. That must eternally be the case, as we explore ever deeper into any set of “territories” (physical, behavioural, logical, mathematical, computational).

Yes we are individuals, and we require help to survive and grow, and culture to develop into anything more than animals.

Cultures have developed many sets of heuristics that were useful approximations to something, and worked (in terms of surviving) in the contexts they evolved in. When one looks at biology and culture from an evolutionary perspective, with an understanding of control systems, and computational theory, and complexity theory, then one starts to get a feeling for just how poor are many of the models that currently exist in culture, and how remarkable it is that we are managing to survive.

If one is to posit a right as a notion of anything, then it is posited as a measure of value in a real sense. Like the idea of the right to life.
America as a nation was founded on a set of positive rights (life, liberty and the pursuit of happiness – posited as inalienable).

In a sense it was a nonsense; and it was a positive statement in opposition to an externally imposed tyranny. So in this sense it had a certain appeal, and a certain degree of utility, in the context of its time.
But it ignored many essential aspects of reality, like the power of cooperation for mutual benefit.

Hayek captured an aspect of this, when he correctly identified the power of cooperative activity, and the ability of individuals to cooperate for their mutual benefit, but he lacked the intellectual tools to understand the depths and intricacies of the relationships present. He oversimplified something vastly more complex.

Of course you are correct in a sense, rights exist only if we say they exist. They have no existence independent of that.

If you say human life has no value, then you can act in a way that uses other humans merely as tools to achieve your own ends. That is a possible way of acting, and it is highly unlikely to produce a very positive outcome in the long term.

An idea like that fails to acknowledge the reality that complexity such as we are is predicated on multiple levels of cooperative activity.
It fails to understand that competitive modalities drive complex systems to some set of local minima on the complexity landscape and keep them there.
Only cooperative systems allow the exploration of new strategic territory, and cooperation is always vulnerable to exploitation, and thus requires attendant sets of strategies to detect and remove “cheats”. As a biochemist by training, I see many levels of such things instantiated within us; in the molecular mechanisms of DNA replication, in the mechanisms of protein synthesis, in the operation of organelles in the cytoplasm, in the communication between cells, in the operation of our immune systems, etc on up through many levels of organ interaction, behaviour and culture.
At every one of those levels, the sort of complexity that we are has emerged as the result of new levels of cooperation stabilised by sets of attendant strategies.

In this context, the positive statement of a “right to life” is a heuristic that supports cooperative activity, and does impose a level of responsibility (required to prevent exploitation).

Any right, any privilege, carries with it this level of responsibility, if it is to survive very long in reality.

Certainly such rights can be claimed without acknowledging the responsibility, but such systems cannot survive long in reality – the mathematics of that is beyond any shadow of reasonable doubt.
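
As a toy illustration of that mathematics, here is a minimal iterated Prisoner’s Dilemma sketch (the payoff values are the standard textbook assumptions, and the three strategies are purely illustrative, not a model of any real society): naive cooperation is exploited by cheats, while cooperation with cheat-detection is not.

# Toy iterated Prisoner's Dilemma. Payoffs are the usual textbook values:
# T=5 temptation, R=3 mutual cooperation, P=1 mutual defection, S=0 sucker.
ROUNDS = 50
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_cooperate(opponent_history):   # naive cooperation
    return 'C'

def always_defect(opponent_history):      # the "cheat"
    return 'D'

def tit_for_tat(opponent_history):        # cooperation plus cheat-detection
    return 'C' if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b):
    history_a, history_b, score_a, score_b = [], [], 0, 0
    for _ in range(ROUNDS):
        move_a = strategy_a(history_b)    # each player sees the other's past moves
        move_b = strategy_b(history_a)
        gain_a, gain_b = PAYOFF[(move_a, move_b)]
        score_a += gain_a
        score_b += gain_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print("cheat vs naive cooperator:", play(always_defect, always_cooperate))  # (250, 0) - cheating pays
print("cheat vs tit-for-tat:     ", play(always_defect, tit_for_tat))       # (54, 49) - cheating no longer pays
print("cooperator vs tit-for-tat:", play(always_cooperate, tit_for_tat))    # (150, 150) - cooperation stays stable

The specific numbers don’t matter; what matters is that the cheating strategy only wins against cooperation that has no attendant strategy to detect and respond to it.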

I enjoyed reading Nietzsche, and I was impressed with what he did with the limited tools at his disposal (as I have been with many others in the development of human culture); but that doesn’t mean that he was “right” in anything but a very simplistic sort of sense; the sort of sense in which a child’s house made out of 20 Lego bricks is like a house. Yes, but NO! Only in the very simplest of senses, and it ignores almost everything that makes a house useful and safe and livable.

Nietzsche was far more wrong than he was right, and he did certainly start to grasp some important ideas; but failed completely to understand much about the depths of the complexity he was dealing with. Just scratched the surface.

I am not at all in favour of coercion based on collectivism, but that is only one of the twin tyrannies – the tyranny of the majority; the other is the tyranny of the minority (or one).

This is not an “either or” sort of thing.

It is a “both and” sort of thing.

Yes we are individuals AND yes we are parts of many different levels of cooperatives.

Both are necessarily true, and both impose duties if survival is an objective.

If you want to self destruct, then by all means, behave however you like, think whatever you like.

If, however, you wish to continue to exist, then you must acknowledge all the systems required to allow that.

And very few people have even the beginnings of an understanding of the systems and relationships that do actually sustain their existence (in the long term).
Nietzsche at least got that much right, even if almost all his specifics were inaccurate over simplifications.

___

As to the relationship of the human mind to reality, that is changing, far faster than most have any conception of, but not evenly. People accept and use tools like smart phones and computers, even if they have no conception at all of what actually makes them do what they do.

The reality of human thought has always been that there has been a broad distribution of relationships to reality; and some have always been beyond the understanding of most.

The written word has expanded that possibility, allowing conceptual systems to jump across multiple generations far more effectively than oral transmission.

Modern digital transmission makes all of that available to anyone with sufficient interest.

In the ancient world, the word was literal Truth.

Some still exist that way.

The two worlds theory is an oversimplification, though it had a certain utility.

I have worked with automation for 45 years, starting on an IBM 1130, and continuing to this day running a small software company that I started 32 years ago.
I do understand something of the complexity of the task involved in creating fully automated systems, and I have seen most of the milestones in that process achieved over the last 30 years, many in the last decade.

It is the case today, right now, that an AI system can beat any human in any defined game space. Reality itself is still too complex for them, but that is changing rapidly. AIs can already drive cars better than humans, and that is a reasonably complex problem space. They still make mistakes, but fewer mistakes, on average, than humans do.
They are more than doubling in capacity every year; we are not.
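
To put a rough number on that divergence (the annual doubling is the assumption stated above, not a law of nature):

# Illustrative compound growth: machine capability under an assumed
# doubling every year, against a flat human baseline.
human, machine = 1.0, 1.0
for year in range(11):
    print(f"year {year:2d}: machine/human ratio = {machine / human:,.0f}x")
    machine *= 2   # the assumed annual doubling

A decade of doubling is a factor of about a thousand; two decades is about a million.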

The “we” who will create these things is human beings.
People who like to collaborate to create things of value.

And how we value things is important.

Markets measure a particular type of value, value in exchange.
Markets cannot measure value when there is no exchange – as when you breathe.
Do you really believe that your breathing has no value to you???

Air is universally abundant.
It is a free lunch.

Sure, there are systems that maintain it, but no person does that.

We are very close to being able to produce any definable good or service without another human having to be involved.

When we reach that technological point – what point exchange?

Sure we can all choose to do whatever we reasonably and responsibly choose, and both of those aspects “reasonable” and “responsible” are necessary for survival.

If survival isn’t a desired outcome, then they aren’t required.

If someone just wants to play power games, and survival is not a required outcome, then anything goes.

But if survival is an outcome, if life itself has any value, then reasonable and responsible are necessary attributes of the systems.

We are individuals in complex social systems; and both aspects of that reality are essential.

The transition is a transition from scarcity to abundance.

A transition from depending on the current labour of others, to having sufficient complexity encoded in our automated systems that we can be independent of any and all others if required.

That is simultaneously powerful and dangerous.

It holds the possibility of freedom for all, empowerment of all; and simultaneously it removes the need to have others around; so potentially unleashing the worst excesses of individual tyranny.

Such dangers are inescapable – at all levels.
And von Neumann’s Mutually Assured Destruction has come very close to delivering exactly that far too often in the last 50 years. A very close approximation to insanity.

[followed by]

Hi Dirk,

The right to life is a positive right in a systemic sense, as it implies space and energy to exist. Depriving someone of food is a negation of life, however it is achieved (social systems, philosophical systems, mechanical systems, etc).

There is some sense in the distinction you make about positive and negative rights, but in reality neither can exist without the other.
One cannot have a right to be left alone without a space to be left alone in. It is never as simple and clear-cut as that oversimplified model would suggest.

Everything does actually impact everything else.

And certainly, there is a boundary there; and it is a very complex boundary – one that requires reasonableness and responsibility at every level one approaches it.

Is it reasonable to allow another to acquire so much that there isn’t enough energy or space left for others to survive?

Is it reasonable to keep reproducing when the actual energy limits of the system are being approached, such that every new birth pushes the system deeper into instability that reduces the survival probabilities of all?

We’re not in that situation quite yet, but we’re not as far from it as some would like to believe.
Photosynthesis is not very efficient.
Most biological systems are only about 1.5% efficient at turning sunlight into chemical energy. We can bioengineer that to about 7% reasonably easily, but there will be strong selection pressures to go back to the less efficient system in most natural ecologies, so we would need active management of such systems (which comes at an energy cost – so such constructs are not viable with today’s technology, but should be quite viable to mass produce by 2030, provided the development work is started soon).
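
To give a rough sense of what those percentages mean in practice (the insolation and demand figures below are round assumed numbers, used only for scale):

# Rough scale check on the 1.5% vs 7% photosynthetic efficiency figures.
# Assumed round numbers: ~200 W/m^2 time-averaged surface insolation,
# ~20 TW of total human primary energy demand.
insolation_w_per_m2 = 200.0
human_demand_tw = 20.0

for efficiency in (0.015, 0.07):
    chemical_w_per_m2 = insolation_w_per_m2 * efficiency
    area_million_km2 = (human_demand_tw * 1e12 / chemical_w_per_m2) / 1e12
    print(f"{efficiency:.1%}: {chemical_w_per_m2:.1f} W/m^2 captured, "
          f"~{area_million_km2:.1f} million km^2 to meet {human_demand_tw:.0f} TW")

The point is simply scale: moving from 1.5% to 7% shrinks the area required by a factor of nearly five.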

It is a seriously non-trivial set of problems; maintaining agreements across conceptual systems that can barely acknowledge the existence of each other, let alone engage in meaningful negotiation, about the nature of what constitutes reasonable freedom, reasonable security and reasonable biodiversity. At the extremes of the distributions there is no common ground.

I can imagine systems where we are able to actively manage all the risks from weather, earthquake, volcanism, global warming, pandemics etc. Such technological toys (while beyond comprehension for many) are trivial compared to the complexity of communication between some of the conceptual systems that seem to be present on this planet right now (let alone the ones that will very probably appear quite soon – next 15 years).

Arthur C Clarke is famous for many things, and one of them is the quote “Any sufficiently advanced technology is indistinguishable from magic”.

Coercion doesn’t require agreement or a state, only sufficiently advanced technology.
Sometimes, those that think they are in control of the technology, cannot even distinguish clearly what the technology is, or what it is capable of.

We are in exceptionally dangerous “territory” when many people are committed to non-cooperative modalities, and do not yet realise that the ability to destroy us all is already universal.
We cannot continue without higher technology, and empowering many existing conceptual systems with higher technology simply poses unacceptable risk to life for others.

There is a way out, and it requires accepting both uncertainty and diversity.

As von Hayek said, capitalism is misnamed; it is actually a system that works because it is based upon trusting strangers. But such trust cannot be naive and survive.
There must actually be real benefit, for all – or everything collapses.
Once trust collapses, we revert to competition, to hoarding, and war is inevitable. Then all is lost.

If you look at Dave Snowden’s Cynefin framework, there is actually a cliff between simplicity and chaos. If one oversimplifies a complex system, then it is only a matter of time before it falls over that cliff. Rule-based systems necessarily do that – always, at all levels!!!

How much more difficult is the issue when many cannot even conceive of such complexity, and demand simple rules where none can possibly survive (and many libertarians are in that camp by definition; I am definitely a fan of life and liberty, and both need to acknowledge the many levels of complex systems that do and must actually exist, and the infinite class of the possible that we are rapidly exploring).

[followed by]

Hi Dirk,
I disagree that positive rights require enforcement.
Positive rights can impose a duty of care, and that duty of care can be in the long term self interest of the individual, and there may be elements of enforcement in the larger systems involved if some individuals have not yet developed sufficient awareness of the context or their own self interest (in a similar sense to the way that parents provide instruction to children).

And that rapidly gets extremely complex, as any set of agreements between individuals must necessarily be a long way behind where active individuals have explored – because the communications bandwidth within a brain is far higher than that between brains. That is countered to some degree by the advantage of the different skill sets that different individuals bring to conversations. And if an individual is active (and I have been active in up to 30 different organisations at a time – Mensa, political, philosophical, scientific, cultural, environmental, sporting, etc – as well as running my own businesses and family) then one can use the multiple perspectives so developed to aid in going well past where most have gone.

That is why I have an issue with Nietzsche. He was great to the degree that he identified ladders, and identified problems with the conceptualisation of morality that he was taught. The issue is that he simply did not have the conceptual tools available to give a cooperative evolutionary explanation for the development of morality as an essential aspect of cooperative intelligences coexisting; nor did he have the game-theoretic or computational-theoretic tools to see how random exploration of complex unexplored “territory” is the most efficient possible search algorithm.

So he correctly identified some issues with morality, but got the master/slave thing almost completely wrong (though there is an aspect of that possible in some contexts, those contexts are self-limiting in the greater scheme of things).

Rousseau is interesting in a historical context only, but his entire thesis is conceptually crippled, as it does not understand the recursive conceptual power of cooperation in evolutionary contexts; and thus misses entirely the heuristic power of morality in its many forms (and of course, naive cooperation is always high risk, one requires attendant strategies for stability).

Certainly, states are capable of cruel and brutal violence, but no less so individuals who are not sufficiently aware of their own long term self interest in universal cooperation. So in evolutionary terms, one would expect to see the sorts of things we generally see in the various levels of “cultural entities” present (states, religions, belief structures, clubs, associations, traditions, disciplines, whatever …). In evolutionary terms, all appear to be some sort of heuristic approximation to some sort of local optima to a particular set of strategic contexts. When viewed that way, one can start to extract the strategic lessons, without becoming mired in any of the “explanatory framework”/”stories” baggage necessarily associated, and invariably suboptimal in a modern context (though there are dangers in overdoing that particular line of thought also).

re Cynefin more directly (sort of indirectly directly).

This is recursive (deeply so), so not sure how this will land, and I don’t have time to be any more explicit than this – so I’m trying this approach in this context.

First thing to keep constantly present: it is now beyond any shadow of reasonable doubt that all of our experience is of a subconsciously generated model of reality. We have no other possible experience. We never get to experience reality directly. And the more conscious we become of the many levels of heuristics embodied in that model by biological and cultural evolution, the more possibilities will be available to us in our interpretation of experience. This approach is not free of cost.
It leads very rapidly to multiple interpretations with very similar probabilities, and reality often has time limited demands for action, so we need some sort of methodology to choose one from an almost infinite branching set of possibilities in contexts where action is demanded.

And having explored such landscapes (often via some approximation of a set of Markov Chain Monte Carlo simulations), one is capable of crossing what to others seem to be impossible heights into new territories, then building roads back.
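
For the curious, a minimal sketch of that kind of exploration – a random-walk Metropolis sampler over a toy two-peaked “landscape” (the landscape, step size and run length are arbitrary illustrative choices):

import math, random

# Minimal random-walk Metropolis sampler over a toy one-dimensional
# "landscape" with two peaks of different heights. Purely illustrative.
def landscape(x):
    return math.exp(-(x - 2) ** 2) + 0.5 * math.exp(-(x + 2) ** 2)

def metropolis(steps=20000, step_size=1.0):
    x, samples = 0.0, []
    for _ in range(steps):
        proposal = x + random.gauss(0, step_size)
        # always accept uphill moves; accept downhill moves with probability
        # equal to the ratio of landscape heights
        if random.random() < landscape(proposal) / landscape(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis()
time_right = sum(1 for s in samples if s > 0) / len(samples)
print(f"time spent near the higher peak: {time_right:.0%}, near the lower: {1 - time_right:.0%}")

The sampler spends most of its time where the landscape is highest, yet keeps probing elsewhere – which is the essential point of that style of exploration.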

Coming back directly to Cynefin and the cliff.

The issue of falling off the cliff, is that if one is using a model of what one takes to be a simple context, and is using some set of rules to manipulate that simple context, but the context is not simple, nor even complicated, but is actually deeply complex and contains many areas of embedded chaos; then the simple rules one is using can cause the entire system one is embodied within to fall over the cliff from order into unrecoverable chaos.

In a very real sense, the development of any social rule-based system to universal deployment necessarily does this. In a sense, it is the strategic reason for the collapse of all historic civilisations. People demand simple rules, because they feel comfortable with simplicity; we all do in a sense, it is where we necessarily start as children, so it is where our neural networks have their “home”. But reality is not simple, not even complicated; it is both complex and chaotic – and that is too uncomfortable for many – so they retreat at various levels to the comfort of some level of rules, and attempt to find security there.

But there is not, nor can there ever be, any firm security in any set of rules. Yet we all need operative heuristics.

Reality does in fact seem to be sufficiently complex that we must always be probing and testing the systems we find ourselves in, always alert for chaotic systems, for cancerous tumours on the cooperative that is human society, for threats to individual life and individual liberty; and there can be nothing about that which is simple, ever – however useful any set of heuristics have been in any historical context.

The idea that individuals can exist in any meaningful way outside of a social context is a nonsense.

It can never be a matter of individuals or societies.
It must always be a very complex set of negotiated boundaries of individuals in societies; and both individuals and societies are very complex systems with very different sets of needs in any particular context.

And that notion recurses through any level of abstraction one is able to attain.

Self ownership is essential; and one must also always acknowledge the social/cultural role in building and maintaining self, and the role of self in building and maintaining culture.
There is no escape.

Both aspects of individuality are real.

A newborn child, without the input of culture, cannot survive.

And our individuality is essential to the survival and growth of culture.

It is seriously complex, and it is eternally beyond any set of codified rules as being anything more than a set of historically useful heuristics (that may, or may not, be useful in the current context).

If one can honour self, and honour culture, in this sort of a context, yet not be a slave to either; then both can evolve and survive. Both do in fact seem to be required.

[followed by – Dirk – nature of rights – Positive Rights vs. Negative Rights – Learn Liberty]

Hi Dirk

The idea of a single human having a right makes no logical sense at all, not even within the definitions you pointed to.

A single human on a desert island has no rights, they only have capabilities and materials.

The video expresses negative rights within a context of trade, with the implicit assumption that the individual has something of value to trade. What happens when that fails to be true?

What happens when automated systems can supply anything at a lesser cost than the cost of creating or maintaining an adult human?
That is a seriously non-trivial question.
On current trends we will reach that point at about 2032.

The idea of value in trade, value in exchange, was a useful proxy for value more generally when most things were genuinely scarce. It was therefore also a useful proxy for liberty.
It fails completely when fully automated systems reduce scarcity to zero for an exponentially expanding set of goods and services.

Look deeply at the notions of value in that video – they are all implicitly dependent on the unquestioned assumption of trade, and the scarcity implicit in the notion of trade, and the value that individual labour has in such a context.
What happens when that is no longer the case?

That is already no longer the case in respect of many classes of information.

What do we use as a proxy for value when scarcity no longer dominates?

That is not a trivial question!!!

{I say one must adopt individual sapient life and liberty as prime values, human and non-human, biological and non-biological. Nothing else I have investigated instantiates significant survival probabilities in the long term.}

I repeat, one cannot have a negative right, without a positive one.
Just because one accepts the positive one as an implicit assumption (trade) doesn’t mean that it doesn’t exist!

[followed by]

Hi Dirk and Pawel

@Pawel
Yes – certainly, much of scarcity present today is artificial, but not all. There is still a substantial amount of real scarcity, particularly in respect of energy and complex materials.
In terms of information, then certainly, almost all scarcity is artificial, and most of it for the purpose of creating and maintaining market value.
Marketing certainly is an aspect of maintaining scarcity, but it has very little to do with the exponential expansion of goods and services – that is mostly down to various aspects of automated systems at many different levels of production and distribution and maintenance.

@Dirk
Marketing is interesting, and like most modern things, complex.
Marketing companies are not generally interested unless the gap between production costs and market value is at least a factor of 10, and most prefer a factor of 100 (personal communication with the leader of a top marketing company in NZ).
That has many profound consequences, and small incremental improvements cannot be marketed; it takes a real “step change” at some level to achieve market penetration via standard means.

Yes there are limits to material and energy usage, and they are substantial; sufficient to allow all individuals on the planet to enjoy what most of us would consider a reasonably high standard of living even on only half the available energy budget (i.e. 1/4 of incoming solar energy, with another 1/4 available for asymmetric distribution, leaving half the solar energy for non-human biological systems).
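
Putting approximate numbers on that budget (the solar and consumption figures are rough, commonly quoted values, used here only to show scale):

# Rough scale check on the energy-budget claim. Approximate figures:
# ~173,000 TW of solar power intercepted by Earth, ~19 TW of current
# total human primary energy use.
incoming_solar_tw = 173_000
current_human_use_tw = 19

universal_share_tw = incoming_solar_tw / 4    # the 1/4 for a high standard of living for all
asymmetric_share_tw = incoming_solar_tw / 4   # a further 1/4 for asymmetric distribution
biology_share_tw = incoming_solar_tw / 2      # half left for non-human biological systems

print(f"universal human share: {universal_share_tw:,.0f} TW, "
      f"about {universal_share_tw / current_human_use_tw:,.0f}x current human energy use")

Even allowing for large conversion losses, the gap between what arrives and what we currently use is several orders of magnitude – the limits are real, but a long way off in practical terms.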

RE “right to be left alone” is something I sort of agree with to a degree.
Certainly, one needs to be free from undue coercion.
And being a member of a social group comes with implications. Simple little things, like which side of the road to drive on – here in NZ the convention is stay left. In the USA it is stay right. I am such a strongly intuitive driver, I haven’t been willing to risk driving on any of my visits to the USA, because of the risk of my subconscious systems reacting inappropriately.
Our lives in reality have thousands of such things in them, some we find so trivially easy to live with we don’t even notice them, others are much more noticeable.

Certainly, we need to avoid the risks associated with definition by social grouping, or control by central agency (like communism); and yet there is need for social systems, and social agreements. So one can never be left entirely alone, though one can approximate it to useful degrees – which is sometimes useful, and sometimes dangerous.
Hence the continuing spectrum of opinions on where to draw the line between personal privacy and individual freedom vs detecting and preventing groups bent on causing risk to life and liberty. No hard and fast rules are possible there, just sets of situationally and historically useful heuristics.

How to imagine no scarcity? Easy, just look at the progress that has been made on fully automated systems over the last 60 years.
Looking at those trends, somewhere between 2030 and 2040 (95% confidence), probably around 2032 (50% confidence), we will have the capacity to fully automate all known systems – production, design, delivery, maintenance. At that point, within energy and mass budgets that are substantial by the standards of 99% of people now living, there need be no scarcity of any known good or service. Certainly, some things are limited (some elements like gold for example) but most such things can be substituted in practice such that most people wouldn’t notice the difference. There is enough of everything to meet the reasonable needs of everyone; and we will have the systems available to create and maintain it without any person having to do any more about it (having already done the work to create the systems). Lots of very smart geeks are working on such things. I have met a number of them at various international conferences I have attended on the subject; even if they’re not so great on much of the big picture stuff, they’re great at the micro and nano level stuff.

So there would still be (strictly speaking) scarcity, in that there would be limits on materials and energy; but most people would never approach such limits in practice. So from a purely practical perspective, there would be no scarcity – in the experiential sense.

“Why should there be humans out there” … “who don’t trade or offer something to trade?”
Because those people might simply want to pursue their own interests, within the reasonable limits of not posing undue risk to the life or liberty of anyone else.
I already have quite a few projects I would like to do; most of which take years of work, even with very able robotic assistance.
Why should I be forced by some social convention to predicate my survival upon trade with others???
What gives you the right to impose that upon me? 😉

Just because for most people it is an unexamined assumption, doesn’t make it any less of an imposition.

Where you say “Machines will stay service machines for humans” you are not aligned with most of the leaders in AI.
Already – today, no human can beat an AI in any defined game. Sure, we use far less energy than the current generation of machines, and their energy use per computation is roughly halving every year (has been for 100 years).
So things are changing in fundamental ways; and some serious tipping points are rapidly approaching.

Re Szasz, and his double negative comment – he was a libertarian, but there are deep failures in libertarian philosophy, as much as I find many aspects of it attractive.
The deepest problems lie in the 2 rules for society “no force and no fraud”; and they exist in these 3 questions:
Who defines force, and how is it defined?
Who defines fraud, and how is it defined?
How is the no force rule enforced, and by whom, and in what contexts?

In complex systems, all rule sets fail. That is one of the fundamental outcomes of complexity theory. One needs a strategic approach (probe, sense, respond; amplify desired responses, dampen down the unwanted; repeat). No certainty anywhere in that, just the eternal dance.
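
A minimal sketch of that probe-sense-respond loop (the hidden “value” function below stands in for a complex and shifting reality; everything here is an illustrative placeholder, not a framework):

import random

# Probe, sense, respond: small safe-to-fail probes, observe the effect,
# amplify (keep) what improves things, dampen (discard) what doesn't, repeat.
# The hidden value function stands in for a complex reality the agent has no rules for.
def hidden_value(x):
    return -(x - 3.7) ** 2          # unknown to the agent; best outcome near 3.7

def probe_sense_respond(steps=200):
    position, best = 0.0, hidden_value(0.0)
    for _ in range(steps):
        probe = random.gauss(0, 0.5)               # probe: a small, cheap experiment
        outcome = hidden_value(position + probe)   # sense: observe what actually happened
        if outcome > best:                         # respond: amplify what works...
            position, best = position + probe, outcome
        # ...and dampen (simply drop) what doesn't; then repeat. No final answer, just iteration.
    return position

print(probe_sense_respond())   # ends up near 3.7 without ever having a rule for the landscape

Nothing in that loop is certain; it just keeps probing, sensing and adjusting – which is all the eternal dance amounts to in code.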

Complex systems demand acceptance of eternal uncertainty.
Most humans do not yet appear to be ready to accept that.

[followed by link to Suzuki interview – 37:30 for 2 minutes]

Hi Pawel,

This is where we probably agree to disagree.

It isn’t madness.

We need knowledge and technology.

What is madness is expecting that markets will work with that technology in the ways in which they did in the past.

Most of the issues we have today are not the result of technology, but of the incentive structures in place as to how we use that technology. The incentive structures of markets were never a great fit to the needs of humanity, and they are becoming less so as time goes on. At least in this sense, I agree with David Suzuki, but I don’t think that is the sense he is actually talking about in that video.

I am not interested in taking human existence back to short brutal lives dictated by immediate survival needs imposed by environmental variables beyond control.

I am interested in using technology to empower individuals to act responsibly in social and ecological contexts. And that demands a degree of equity that is profoundly absent from today’s systems.

So sure – there are issues – deep issues.

And I am not the sort of techno-pessimist that Suzuki is.

I am more of a cautious techno-optimist, but I am stating the risks that I see as clearly as possible (given their profound complexity) and the sorts of classes of risk mitigation strategies that seem to me to be essential for survival.

I think we have a better than even chance of getting through this and surviving, but not if we stay attached to the ideas of markets and money that have arguably served us reasonably well over the last few centuries.

So certainly, a non-trivial problem space, and one giving us opportunities to survive – if we are willing to look past our prejudices.

[followed by different subthread on ]

The biggest lie is that we are rational.

We are not, and cannot possibly be.

The world is far too big, too complex, to deal with rationally in any real amount of time, so we must have heuristics. Many levels of deeply buried heuristics in all of our perceptual and modelling and sense making systems.

As children we are taught many simple binary heuristics, and our neural networks get strongly attracted to them – things like right/wrong, true/false, good/bad.
Does that make any of it accurate? No.
Are they useful in ordinary circumstances? Yes, most certainly that.

As adults, we can learn to question, learn about complexity, about the deep evolutionary power of trust and cooperation, if we get the opportunity, or we can be blinded by any number of parasitic belief structures that discourage questioning – economics, religions, nationalism, socialism, pretty much any ‘ism you can think of.

And each of them will be a simplistic approximation to something vastly more complex and ever changing.

Perhaps the biggest lie is the simplistic idea of freedom.

What can freedom possibly mean?

It cannot be an absence of consequence, that is insanity. Actions have consequences, even if they are unknowable, even if they are partially random in nature.

It cannot be an absence of structure or boundaries. All form requires boundaries. It is boundaries that prevent everything being a great uniform sameness.
Our existence is predicated on at least 15 levels of boundaries.

So freedom must acknowledge the need for many levels of structure, many levels of boundaries, and all the responsibilities implied in such a realisation; and then it can become the ability to influence creatively within the realm of the possible, and be an agent among many in the realm of the survivable that results.

And there can be no absolute certainty in any of this. Reality seems to contain fundamental uncertainty of many different types.
All there can be is reasonable probability.
All any of us really have is our best guess, based upon uncertain heuristics and limited data-sets.

What does coercion look like in such a game space?
What else might it be confused with?

That seems to be the game available.
Might as well play it.

[followed by different subthread – Sean Michael Henderson If x has a right to self preservation, whose duty is it to provide it?
Answer: it is x’s and only x’s duty.
….
PK How are we motivated to live? is it a kind of coercion?
SMH Instinct? Survival of the species? A commandment by God?]

Human beings cannot survive alone.

Our existence is predicated on cooperation within groups.
It takes several years to get an infant to the stage that it can survive alone, even in the most hospitable of contexts.

None of these tools we are using to communicate (the language, the concepts, the symbolic systems, the computers, the power grids, the communication networks, etc) can be created by an individual alone – they are all the result of a vast cooperative effort across many generations.

Certainly, we are the result of the survival of many levels of cooperative entities over vast times, and by definition the ones that survived produced us, so in that sense there are heuristic patterns embedded within us, selected by survival, for survival.

Rights are a very recent set of heuristics, based on very simple models of the complexity that is actually present. They are useful heuristics in some contexts. The really interesting question is, what are they useful approximations to?

To me, it seems beyond any shadow of reasonable doubt that they approximate sets of high level boundaries required for the survival of cooperative and diverse complex entities exploring infinite unknowns that contain many sets of unknowables, and many hidden dangers. And it seems true that not exploring is even more dangerous than exploring (in the long term).

So nothing simple, and some very deep lessons, if one has the time and inclination to do the searching and testing.
