2. Strategies for Longevity

What are the institutions we require, if we want to have a reasonable chance of living a very long time?

At one level it is all about risk mitigation strategies.

So what are the major risks?

There are many ways of viewing and categorising risks, and they fall into many different classes.

They span a whole spectrum: from events that are highly probable but individually carry quite a low likelihood of harm, right through to very rare events that are very dangerous indeed, and everything in between.

A lot of risks are additive. Each, seen alone and discretely, seems quite harmless, but in combination with lots of similar things they add up to something quite dangerous. Diet is one of the greatest of these; the use of certain classes of chemicals in manufacturing and pest control is another major class.
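As a rough sketch of how small risks compound (the numbers here are invented purely for illustration), the chance of at least one bad outcome from a tiny daily exposure grows like this:

```python
# Toy illustration (numbers invented): one small daily exposure is
# almost harmless, yet the chance of at least one bad outcome over
# a lifetime compounds to something quite significant.
p_per_day = 1e-5                 # assumed probability of harm per day
days = 365 * 50                  # fifty years of exposure

p_lifetime = 1 - (1 - p_per_day) ** days
print(f"{p_lifetime:.1%}")       # ~16.7% over fifty years
```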

There are many sorts of risk:
Global pandemic;
War;
Pollution;
Ecological collapse;
Large scale volcanism;
Meteor or comet strike;
Other cosmological risk (supernova, small black hole, etc.);
Technology failure (grey goo, unintentional supervirus, etc.).

Yet strangely, it seems that our own ignorance is the single greatest risk.

We weren’t born with a user manual describing how we work and what the optimum conditions for operation are.

The old adages – “Know thyself” and “to thine own self be true” – are as important today as ever; and today we have so many more levels of understanding possible, and necessary.

It seems that we are each (along with every other living thing on the planet) the result of a process of evolution by natural selection that has been going on for close to 4 billion years.

Darwin made a major contribution to understanding that process with his publication of “On the Origin of Species” in November 1859. At that time few people had much idea of how old the earth was; most still thought in terms of thousands of years, and few had any notion of what a billion years might be like, other than simply a very long time.

Since then our understanding has progressed so much, and continues to progress.

Darwin understood that many characteristics are inherited, but had no idea how.
He also understood that there is variation between parent and child, but had no idea of the vast array of possible mechanisms for variation – some gradual, some not – or of the way in which evolution can fold back upon itself and influence the rate of variation in certain parts of the genome under certain conditions.

Darwin understood the role of competition in evolution, but had little concept of the importance of cooperation. For a variety of reasons – some related to the politics of control and domination, some to how our brains have evolved to direct our attention towards possible threats, and some simply to historical scarcity and the evolution of cultures – the role of competition has been the major focus of the common understanding of evolution, while the role of cooperation remains virtually unknown.

Since Darwin’s time, much has been learned. Richard Dawkins’ 1976 classic “The Selfish Gene” is probably still the best single introduction to a modern understanding of evolution.

We now understand many of the major mechanisms of inheritance and variation at the level of chemistry.

We are also building our understanding of those mechanisms at the levels of algorithm (process) and strategy.
In this sense, the work of Robert Axelrod and others gave us our first deep insights into the role of cooperation at the many levels of living systems – work vastly extended by Stephen Wolfram and others. Many more, like John Maynard Smith and Elinor Ostrom, have contributed fundamentally to our understanding of the complexity and stability of cooperative systems, and of the sorts of attendant strategies required to prevent cooperatives being overrun by cheats.
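To make Axelrod’s insight concrete, here is a minimal sketch of his iterated prisoner’s dilemma setting (the payoff values and strategy names are the conventional ones from that literature, not from this page). A simple retaliator prospers among fellow cooperators, yet cannot be seriously exploited by an unconditional cheat:

```python
# Payoffs for (my_move, their_move): C = cooperate, D = defect.
# Conventional prisoner's dilemma values: T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then copy the opponent's previous move."""
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    """An unconditional cheat."""
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Run an iterated game and return both totals."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600) – mutual prosperity
print(play(tit_for_tat, always_defect))  # (199, 204) – the cheat gains almost nothing
```

The retaliation is the “attendant strategy”: without it, a pure cooperator scores 0 against the cheat’s 1000.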

It is now clear that, provided there is sufficient abundance for all, human beings can be accurately described as a fundamentally cooperative species.
Certainly, if we run into genuine scarcity, we can be very competitive. Yet we now have the technology to deliver abundance of all the essentials of life to every human being, while we still carry in our culture a market based economic system that evolved in, and is optimised for, times of genuine scarcity. It is also well established in psychology that rewards and competition are only useful for improving output on repetitive tasks, and actually inhibit creativity in problem solving – which flourishes in a supportive and cooperative atmosphere, provided that diversity is tolerated and valued, rather than conformity to social norms of behaviour, belief, rules or hierarchy being enforced.

So it is now clear to me that the single greatest threat to humanity comes from the dominant cultural paradigms of money and markets – a competitive, scarcity based system in which anything that is truly universally abundant (like the oxygen in the air, arguably the single most important thing to any human being) has zero monetary value.

The logical follow-on is that there must be actual scarcity for some fraction of the population before anything has an actual economic value.

That leads on to there being no economic value in delivering universal abundance of anything.

Which is another way of saying that poverty is a structural necessity of any market based system.

Without unmet demand, there can be no market value.

So we have this cultural way of valuing things, called money, and it has the property of demanding that some significant fraction of the population experience lack, for no other reason than to sustain the market based system of distribution (be it capitalist or communist or any other “ist” – if it uses money, the internal incentives are identical).

It seems that cultures and understanding evolve in a similar way to biological life, and just like biological systems there are many ways that ideas can change and spread, and that new ways of thinking can come to dominate the ways in which we interact with each other.

It seems that a more accurate name for our species would be Homo narrans – the storytelling man.

It seems that the stories we tell ourselves, and each other are the biggest single determinant of who we get to be in the world.

It seems that all cultures have developed stories about how we came to be, and what the point of it all might be.

Francis Bacon is often called the father of modern science, for the simple mechanism of doing experiments to see what actually works.

Modern science is a process.
The process is to pose questions, design experiments, see which explanations fail to account for all the measurements from the experiment, discard those, and repeat.
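As a toy sketch of that eliminative loop (the hypotheses and measurements here are invented for illustration):

```python
# Candidate explanations, each predicting a measurement from an input.
hypotheses = {
    "linear":    lambda x: 2 * x,
    "quadratic": lambda x: x * x,
    "constant":  lambda x: 4,
}

# Experimental (input, measurement) pairs.
observations = [(1, 1), (2, 4), (3, 9)]

# Discard every explanation that fails to account for all measurements.
# What survives is never proven – only not yet refuted.
surviving = {
    name: h for name, h in hypotheses.items()
    if all(h(x) == y for x, y in observations)
}
print(list(surviving))  # ['quadratic']
```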

After many generations of many people repeating that process, we have developed some very powerful ways of thinking about what we are, and many of them are quite abstract.

One of the ideas that has had to be discarded as a result of the process is the very notion of “Truth” in respect of anything in reality.

It now seems clear, beyond any reasonable doubt, that we have no direct access to reality.

It now seems clear that what we get to experience is not reality, but a model of reality that the subconscious processes of our brains assemble.

The way in which the model is put together is partly influenced by the genetic processes that shape the basic structure of our brains, partly by the specific processes of our development within the womb, and partly by the physical and cultural experiences of our individual lives. All of these things contribute important aspects to who and what we become.

And there is one more component that is extremely influential – us and our choices (to the degree that we make choices, and don’t simply follow rules or habits).

It seems that we have many aspects to who and what we are, and one of those aspects seems to be a software entity inhabiting the software model of reality created by the brain – our self awareness.

It seems that this self awareness is capable of making choices, and through those choices of influencing all aspects of the otherwise automatic functions of the body in space.

This software seems to be running on a quite extraordinary piece of hardware – a squishy brain made up of a vast collection of interacting systems at many different levels: chemical systems, cellular systems, collections of cells forming functionally distinct organs, and networks of communication at several different levels. It seems that both the neural networks and the types of memory systems add fundamental and critical aspects to who and what we are, and can be.

It seems that we are very much influenced by the sorts of stories we tell ourselves, and by the sorts of habits that are reinforced or dampened down by those stories (including habits of context recognition and questioning).

It seems that there is little about any human being that is certain, and many things that are more probable than others in particular sets of circumstances and under the influence of particular types of stories.

When one looks at the realms of all things that might be learned, it seems to be an infinite set.

No human being has yet had infinite time to exist, so each of us must necessarily be infinitely more ignorant than we are knowledgeable (one of the necessary outcomes of understanding infinity). So, however much we think we know, it doesn’t pay to get too confident of it, because in all probability much of it is just simplistic assumptions that have worked to date, but could fail sometime soon under circumstances we haven’t encountered before.

Strangely enough, this simple logical fact is one of the strongest drivers towards high level cooperation between entities exploring novel domain spaces. It makes the whole process much safer and more interesting for all (with everyone watching everyone else’s back, and being willing to pull them out of a situation if it starts to look really dangerous).

One of the key aspects to understanding why the sorts of belief structures we see in cultures persist over time is complexity.

Some aspects of the world are dangerous.
It is useful for evolution to invest time and energy in systems that can detect and avoid those dangers, where reliable mechanisms for doing so are available.
So as humans we have a strong bias towards seeing and avoiding dangers, rather than seeing and exploiting opportunities.

Unfortunately, there are many different sorts of complexity.
Some problems are complex, and they do have answers that can be calculated or approximated with useful degrees of precision in usefully short amounts of time.

Some problems are simple enough that we can work out ahead of time what sort of problem they are, and how much time it is likely to take to come up with a useful answer, and some are not.

The problems where one cannot work out ahead of time how much time is worth putting in fall into two groups:
1. those that do have solutions, but where we can’t work out ahead of time how long it will take to find them; and
2. those that do not have solutions, where one can just keep on calculating forever.

Between them, these two types of problem give rise to what is known in complexity theory and computing as “the halting problem”: there is provably no general procedure that can decide, for every possible program, whether it will ever finish. How does one decide when to stop thinking about a really complex issue, or how long to continue on a difficult problem?
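A concrete taste of this (my example, not the author’s): the loop below implements the well known Collatz iteration. It has halted for every starting number ever tried, yet no one has proved it halts for every input – so there is no general way to say in advance how long it is worth running:

```python
def collatz_steps(n):
    """Count the steps for n to reach 1 under the Collatz rule:
    halve even numbers, map odd n to 3n + 1. Nobody has proved this
    loop terminates for every n, so its running time cannot be
    bounded ahead of time.
    """
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps, after climbing as high as 9232
```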

Evolution ran into that issue a long time before we did.
It seems to have come up with quite a range of different solutions, all of which work in practice, and all of which carry some cost in lost potential benefits – alongside the benefit of not starving to death, or otherwise dying, while giving all of one’s attention to a particularly difficult problem.

One class of solutions to that problem goes under the title of beliefs, or faith.
Once one adopts a belief or a faith about anything (a scientific question, a religious question, an economic question, any sort of question at all), then one can stop asking that sort of question, and simply use the heuristics (rules of thumb that have proven useful in the past) given by that specific faith or belief as guides to action in the present.
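In computational terms, such a belief acts as a stopping rule. Here is a minimal sketch (the function names and the time budget are my own illustrative choices): deliberate while the budget lasts, then commit to the best answer found so far, or fall back on the inherited default:

```python
import time

def decide(options, evaluate, budget_s=0.1, default=None):
    """Deliberate until the time budget runs out, then commit.

    The budget plays the role of a belief or heuristic: a rule
    that lets us stop asking and act on a good-enough answer.
    """
    deadline = time.monotonic() + budget_s
    best, best_score = default, float("-inf")
    for option in options:
        if time.monotonic() > deadline:
            break  # stop thinking; act on the best found so far
        score = evaluate(option)
        if score > best_score:
            best, best_score = option, score
    return best

# Example: pick the largest from a stream far too long to finish reading.
print(decide(range(10**9), evaluate=lambda x: x, default=0))
```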

We all do it, at many different levels.
We have to – reality is far too complex for us to question everything at once, so we have to have trusted habits (heuristics) at every level of existence.

And with changes in circumstances, old heuristics (practical rules) that used to work can fail in ways that are really difficult to see (precisely because of the sets of heuristics {and habits and models} we have in place).

One of the beliefs that has been around a long time is the idea of Truth (and more on that elsewhere).

It seems that our ability to manufacture things, and to automate processes, has now grown to a level where it is actually changing things beyond the ability of our old ways of thinking and being to cope.

Money and markets were very useful in our past where scarcity was very real.
Now that we can automate the capture of energy from the sun, and use it to power the automated production of a large and growing set of goods and services, the old heuristics of money and exchange are no longer serving us well.

It is now clear that shifting to a new level of cooperation, that includes every other sapient (intelligent) entity (human and non-human, biological and non-biological {Artificial General Intelligence}) is actually the safest and most beneficial strategy for everyone to adopt.

And Axelrod demonstrated that all cooperative strategies require attendant strategies to prevent cheats (at every level of awareness, abstraction and operation) from invading and destroying the cooperative. We already have a vast collection of such strategies (many of them beautifully curated by Elinor Ostrom and associates), and Stephen Wolfram has opened the door to an infinite set of classes of other strategies that can also be used effectively. We have no shortage of choice.
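As a hint of the space Wolfram explores, here is a minimal elementary cellular automaton (rule 110 is my illustrative choice – a trivially simple rule that is nonetheless computationally universal, which is the sense in which the space of possible strategies is effectively inexhaustible):

```python
def eca_step(cells, rule=110):
    """One step of an elementary cellular automaton (Wolfram numbering):
    each cell's next state is the bit of `rule` selected by its
    three-cell neighbourhood (left, self, right), wrapping at the edges."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 60 + [1] + [0] * 19  # a single live cell on a ring of 80
for _ in range(20):
    print("".join("#" if c else "." for c in row))
    row = eca_step(row)
```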

Complexity theory and network theory offer some very interesting insights into the sorts of network structures that promote rapid transmission of information through complex and congested networks, and memetics gives us strategies for structure and function in the design and implementation of key transitional strategies.
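One such insight is the small-world effect: a few random long-range links collapse the average distance between nodes. A quick sketch using the networkx library and the standard Watts–Strogatz construction (my choice of illustration; the parameters are arbitrary):

```python
import networkx as nx

# A regular ring lattice (p=0) versus the same lattice with 10% of
# its links randomly rewired into long-range shortcuts.
ring = nx.watts_strogatz_graph(n=1000, k=10, p=0.0)
small_world = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.1)

print(nx.average_shortest_path_length(ring))         # ~50 hops
print(nx.average_shortest_path_length(small_world))  # ~4–5 hops
```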

And once we reach a state where we have fully self maintaining automated engineering capacity at our disposal, effective mitigation technologies for all of the major risks identified at the start of this page become relatively interesting exercises in engineering – the sort that lots of people interested in engineering really enjoy doing (if they have the tools {including material and energy} to do them).
