Talk to Zero Carbon Select Committee

16 Aug 2019 – 14:40 – just got home from a trip to Christchurch and back – to talk to the select committee considering the Zero Carbon Bill – a response to climate change concerns

It was weird.
Driving down, one of the recurring thoughts was: here is a system requiring me to emit some 30 kg of carbon into the atmosphere just to speak to them for 5 minutes, when they could have set up hearings via video link, at zero carbon emissions.
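
That 30 kg figure is roughly checkable. A back-of-envelope sketch in Python, using assumed values for the distance and a typical petrol car (none of these inputs come from the post itself):

```python
# Rough sanity check of the ~30 kg carbon figure for the round trip.
# All inputs below are assumptions, not figures stated in the post.
distance_km = 2 * 180          # Kaikoura to Christchurch and back, roughly
litres_per_100km = 9.0         # typical petrol car consumption
kg_co2_per_litre = 2.31        # CO2 from burning one litre of petrol
carbon_fraction = 12 / 44      # mass fraction of C in CO2

litres = distance_km * litres_per_100km / 100   # 32.4 L
kg_co2 = litres * kg_co2_per_litre              # ~74.8 kg CO2
kg_carbon = kg_co2 * carbon_fraction            # ~20.4 kg elemental carbon
print(round(kg_co2, 1), round(kg_carbon, 1))
```

So ~20 kg of elemental carbon, or ~75 kg of CO2 — either way, 30 kg of carbon is the right order of magnitude for a trip that a video link would have made unnecessary.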

It was just one more example of how systems self-perpetuate – particularly bureaucratic systems like the systems of government.
There is a conservative sense in which that was quite appropriate in times when things changed quite slowly.
We are no longer living in such times.

Prior to going down, I wrote up what I intended to say; practiced it often.
Here is what I wrote up:

“I don’t know how many of you know the Two Ronnies Hardware sketch with the rubber O’s. It is a sketch of repeated misunderstandings. One Ronnie asks for something, the other gets the wrong thing, over and over again.

In every case it is due to oversimplification, too little information.

The entire climate debate is like that.

It is extremely complex.

Several things are now beyond any shadow of reasonable doubt.

Anthropogenic change to the atmosphere from the burning of fossil fuels is real, has been clear for decades, and poses risks of runaway positive feedbacks that could cause massive changes in sea level and temperature.

And other things are also real.

If we hadn’t been burning trees and fossil fuels for 10,000 years then rather than facing warming now, we would be headed back into the next ice age, with falling sea levels and populations starving from crop failures.

Modern neuroscience and modern Artificial Intelligence research, combined with a modern understanding of evolution and complexity theory is giving us the beginnings of an understanding of just how complex we are, and how complex the world we live in is; and what this thing we call experience is.

None of us experience reality as it is.

Our experience is a vastly simplified subconsciously created model of reality, a kind of predictive virtual reality, often deeply influenced by our genetic and cultural past.

To start to get an appreciation for the sorts of complexity that exist beyond our experience, we have to get that what we experience is always a simplification of something vastly more complex.

This happens at every level (however many levels we manage to get some beginnings of an awareness of).

We have the illusion that money represents a useful metric to measure value and resources; but it isn’t. It arguably was in our past, but automation has fundamentally changed that. And that fact brings many levels of complexity and danger and opportunity.

Money and markets are now posing exponentially increasing risk.

Nowhere is this more obvious than in the climate debate.

As someone who has been interested in the nature of systems, and in optimising the probabilities for our survival, freedom and responsible choice, for over 50 years, there is very little of that complexity that I can communicate in 5 minutes.

What I can say with confidence is that the only approaches that offer significant probability of survival involve investing in technology and understanding, respecting the deep lessons from the past without being bound by them, and in cooperatively exploring the infinite possibilities available to the future.

Risk is eternal.

What we can do is develop mitigation and resilience strategies (all levels).

The levels of freedom and security we all accept as basic are fundamentally based in abundant available energy.

Our economic system of markets and money is based in scarcity, in exchange and trade. And certainly, at some levels, that exchange and trade has helped to ensure cooperation, but not at all levels.

We need to accept the lessons from evolutionary theory, that complexity of the sort we embody is always fundamentally based in cooperation. Competitive environments always force systems to some set of minima on the complexity landscape, and work against what any of us would reasonably call freedom and a life worth living.

The ideas that our security is based in markets and that evolution is all about competition are perhaps the greatest of the many lies and misunderstandings common in our time.

The climate issue is not that difficult to solve technically, but it does demand of us that we see the dangers present in many of the ways that served our ancestors well, that are not suitable in an age where solar energy can be universally abundant. It demands change.
Scarcity and competition must give way to abundance and cooperation. If there is one deep lesson from biology, this is it.

The cells of our body don’t compete to keep us alive. We have a name for cells that go selfish and competitive: it is called cancer. I have survived a terminal cancer diagnosis, by changing diet. If we want to continue living on this earth we must do so cooperatively. We must change from our market-based diet.

We must be real.

We must accept uncertainty in all things.

We must all accept that what we once thought of as unshakable knowledge is but some level of useful approximation in some context from our past that may be changing in ways that we are blind to.

We must all be kaitiaki.

We must all respect and accept diversity.

We must accept that the only way to minimise risk is to accept a certain amount of it as our eternal companion.

We have to solve the climate issues, and the most effective way to do that is through cooperation and investment in science and sustainable technology and through meeting the reasonable needs of every person on the planet.

Nothing less than this has any reasonable probability of success.

This bill does not do that.

In the event, I didn’t say that.

I listened to the two speakers before me.

I didn’t record what I said, and that was some four hours ago now. While it got no response at all from the politicians present, it got a round of applause from the assembled submitters.

What I said went something like this:

I share many of the concerns of the previous two submitters.

I come here to speak on my own behalf, though I wear many hats in our community.

I am concerned that this Bill does not address the real issues.

Just going to zero carbon isn’t enough, even if this bill achieved it, which it won’t.

The problems we face are far deeper. They come from the very notion of measuring value in markets. Until fairly recently one could make a reasonable case that market value approximated real value, but not any longer.

If you doubt that just think of air. It has no value in a market, but think of putting a plastic bag over your head and you will very quickly realise how valuable it really is.

Up until quite recently there weren’t many things that were as abundant as air, but with automation and robotics that is no longer the case.

Planting trees to sink carbon is not a solution. There isn’t enough land on the planet for that solution to work for everyone.

We need to be able to manage climate.

We need high tech solutions.

Throughout history, real breakthroughs in science and technology have come about through government investment, not from private companies responding to incentives.

There is just too much profit in oil to ever remove it by economic means.

If we hadn’t been burning forests and coal for the last 10,000 years, then we wouldn’t be here talking about global warming, but rather about the famines caused by encroaching ice, as we would by now have been well into the next ice age.

We cannot afford as a society to be subject to the vagaries of climate change, natural or otherwise. We need to get serious technology into L1, the gravitationally stable Lagrange point between the Earth and the Sun, and to manage the amount of radiation reaching the Earth. We need to maintain a stable climate.

The other major problem we face is the overly simplistic notion of evolution.
If people think of evolution at all, most think of competition, nature red in tooth and claw. But that isn’t evolution.

For complex organisms like us, evolution is much more about cooperation.

The cells of our body don’t compete.

We are each made of about 10,000 times as many cells as there are people on earth.
Those cells cooperate to make us what we are.

We have a name for cells that cease to be cooperative and become competitive: we call it cancer.

Most of the finance industry has become a cancer on the body of society.

I was given a terminal cancer diagnosis about 9 years ago.
My society gave up on me, and sent me home “palliative care only”.
I beat it by a radical change of diet.

We need to change our diet as a society.

The Kaikoura earthquake was an interesting lesson.

I was probably the only person fully prepared for it. I had solar power, 4 months of water supply, 3 months of food supply. Next morning I dug a long drop, put a tent over it, plugged in the generator, and we had our coffee as per normal.

Nobody else was prepared.

Our community survived by the cooperation of many other people.

We didn’t compete our way out of it.

It was cooperation that led the way.

New Zealand cannot solve this problem by competitive market measures.

It is only possible to solve this problem by cooperative international efforts using high technology to meet the reasonable needs of everyone on the planet.

We must get past competition.

We must get past scarcity.

The technology is relatively easy.

The changes in thinking are the hard bit.

Posted in Ideas, Nature, Our Future, Politics, Technology, understanding

Quora – ethical consequences of immortality

Quora – What are the ethical consequences of immortality?

Don’t agree with either of the other answers.

It very much depends on what you mean by “ethical”.

For me, my chosen highest values are individual life and individual liberty, and that rapidly gets very complex.

Having individual life as value 1 means minimising risk to life – mine and everyone else’s.

Having liberty as value 2 means constraining my actions within the set of actions that minimise the risk to the life or liberty of everyone else.

If liberty is to have any real meaning, then it must be within interesting contexts.

Freedom to roam a sterile room isn’t nearly as interesting as roaming a world full of diverse social and ecological systems.

So part of maximising liberty is maximising the choices of contexts available (within the set that doesn’t pose undue risk to life or liberty).

All systems require boundaries to give them form. The more complex the system the more complex the boundary (as a general rule).

As we are the most complex thing we yet know of in this universe, our liberty must be within the sets of boundaries required for our existence. And that gets very complex with multiple overlapping sets of entities being present.

So for me – ethical means that set of actions that delivers on the values one has.

The classical ideas of right and wrong, good and bad, seem to be very low resolution models of systems that are (for the most part) vastly more complex than that. And some things do resolve down to reasonably close approximations. Killing people = bad. Restricting freedom without very sound reason = bad. Acting without due consideration of probable long term consequences = bad. Helping others in need = good. Accepting and respecting diversity = good. etc

And to be very clear, the idea of certain immortality is not a logical one.

The best one can hope for is indefinite life extension, as there is always a finite probability of death from things beyond the known. One cannot mitigate the risk of things we don’t know, and don’t know that we don’t know.

And for the most part, the historical record seems to indicate that such things usually occur at reasonably low frequencies, and should not be a major threat. And one can never be entirely certain about such things. The universe is a big and dangerous place for entities like ourselves.
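
The arithmetic behind “indefinite, not immortal” is worth making explicit: any nonzero per-period risk compounds over time. A minimal sketch, with purely illustrative (assumed) risk levels:

```python
# With an irreducible background risk p of dying in any given year,
# the probability of surviving n years is (1 - p) ** n.
# The p values below are illustrative assumptions, not estimates.
def survival_probability(p_per_year, years):
    return (1 - p_per_year) ** years

# Even a one-in-a-million annual risk makes survival over a million
# years roughly 1/e; certainty is never on offer.
for p in (1e-3, 1e-6):
    for years in (100, 10_000, 1_000_000):
        print(f"p={p}, n={years}: {survival_probability(p, years):.6f}")
```

Driving the background risk down buys orders of magnitude of expected lifespan, but the probability never reaches 1 — which is exactly the distinction between indefinite life extension and certain immortality.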

And I am all for continuing to exist for as long as I possibly can, with expanding capabilities as I do so.

My understanding of evolutionary theory and games theory indicates clearly to me that the greatest probability of such continued existence is delivered by being as cooperative as possible with other cooperative agents (and not all agents are necessarily cooperative).

Posted in Longevity

Comment to The Portal 001 – Eric Weinstein and Peter Thiel

YouTube – Peter Thiel on “The Portal”, Episode #001: An Era of Stagnation & Universal Institutional Failure.

Technologist and Investor Peter Thiel is the guest as he joins Eric Weinstein in the studio

Some really great stuff, and much that seems to me to be lost in various sets of framing failures.
And perhaps it is just as Eric has said elsewhere, that the depth of the issues is so great that even in 3 hours one can barely scratch the surface. And I have tried multiple interpretations; there do seem to be actual errors present.

I align strongly with Peter’s concerns on violence, but not with the framing set he is currently using – it was useful but seems close to failure in our current context.

For the sake of clarity I will very briefly mention the major meta issues of “framing” that I see causing these two exceptionally intelligent individuals that I greatly respect to go down paths that seem to me to contain very high risk of violence and existential level risk.

So what are the “Framing issues”? :
1/ Experience as models
2/ Cognition within models
3/ Systems and boundaries
4/ Freedom and diversity
5/ Evolution as cooperation
6/ Hidden preference in the games theoretic context of a tournament species
7/ Paths/trails and levels of risk

My context is that in 1974 as I completed undergrad biochem it became clear that viewed from a cellular perspective, every cell alive today would consider itself to be the original cell; so indefinite life is the cellular default, and age related cellular senescence that our somatic cells experience is some set of genetic overlays on that basic theme. Therefore indefinite life extension is possible. I just don’t yet know how to do it, and I do know it is doable.

So in that context, my major focus over the last 45 years has been exploring as many domains as possible for strategies that deliver to potentially very long lived individuals a reasonable probability of living a very long time with reasonable degrees of freedom.

The only classes of solutions to that problem space that seem to me to have reasonable long term probabilities associated with them are those that can be universally applied.

So back to a small expansion of the Framing issues: given this context, and then on to some explicit problems with them in the conversation between Peter and Eric. And to be explicit, this set of “Frames” is a very tiny subset of the sets I use, and they seem to be critical to understanding major risks present in the directions of strategic thought discussed and implied by both Peter and Eric.

1/ Experience as models

It seems that all of our conscious level experience is of a model of reality created by various levels of subconscious processes. The “resolution” of that model (and the resolution of the sets of responses that occur to consciousness) can vary greatly (many orders of magnitude) in their complexity depending on various levels of context. Several of those “levels of context” are chemically mediated. And it gets even more complex as additional levels of higher level abstraction are added to the mix.

2/ Cognition within models

We never get to deal with reality directly. All we ever get is some level of simplified model. It is therefore not surprising that levels of simple abstraction can be usefully applied to levels of model and give useful outcomes. That doesn’t necessarily mean that those models work in reality, however well they work in our personal experiential “model” of reality. And that can be a highly recursive and difficult concept to become intuitively familiar with.

3/ Systems and boundaries

All levels of systems require boundaries. Sometimes a simple gradient can be enough of a boundary, and sometimes something more is required. A cell wall is much more complex than a simple gradient. A cell wall is selectively permeable, allowing some things to come and go with ease, blocking others, and actively transporting others. A cell wall can be very responsive to context. The more complex the system, the more complex the boundaries need to be. Simple hard boundaries tend to drive systems to simplicity, and to remove complexity. Hard boundaries tend to become brittle and break in some contexts – causing catastrophic failure.
All levels of structure (subatomic, atomic, molecular, cellular, organs, individuals, tribes, populations, ecosystems, and all levels of abstract structure – self, society, culture, intellectual paradigms and levels etc) have minimum levels of boundaries required for their continued existence. But attempts to over-simplify boundaries that must be complex introduce existential-level risk into systems.

4/ Freedom and diversity

In the context of boundaries above, freedom cannot be absolute, but must be constrained within the contexts that allow for existence and diversity. Diversity is the necessary outcome of phenotypic freedom in a very real sense. If one has individual life, and individual liberty as values, then one must accept indefinite expansion of diversity, even as one admits of limits on diversity of particular types in particular contexts that produce unacceptable levels of existential risk.

5/ Evolution as cooperation

Few people understand evolution in a strategic context.
If most people think of evolution at all, it is usually some version of competition, some version of “nature red in tooth and claw”.
One can certainly find many examples of such competitive evolution, but what few appreciate is that such competitive environments drive systems to some set of minima on the “complexity landscape”.
The emergence of complexity is always predicated on new levels of cooperation.
In this sense, true freedom is necessarily predicated on cooperation, as it is only in cooperative environments that real diversity can emerge.
And of course raw cooperation is vulnerable to exploitation, so at every level there emerges something of an ecosystem of “cheat detection and mitigation” strategies (as various levels of evolution occur between cheating strategies and their detection). Evolution can get very complex very quickly, and we as self aware individuals are the result of at least 15 levels of complex systems (not 15 systems, 15 levels of systems).
The other aspect of this is the contexts that support the emergence of new levels of cooperation. It is only when the threat from external factors exceeds the threat from others within the population, and there is some level of cooperative activity that can mitigate such risk, that new levels of cooperation can emerge. We are in such a context. Climate change is but one of the smaller risks.
As soon as one extends individual life to some reasonable approximation of a very long time, then the levels of personal risk go way up, and cooperation is essentially the only game in town.
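
The point that raw cooperation is vulnerable to exploitation, and that cheat detection changes the picture, is the classic result from iterated game theory. A minimal sketch using the standard iterated Prisoner’s Dilemma payoffs (an illustration of the general idea, not anything taken from this post): tit-for-tat cooperates by default and mirrors any defection straight back.

```python
# Toy iterated Prisoner's Dilemma: cooperation with cheat detection
# (tit-for-tat) versus unconditional defection.
# Payoffs are the standard Axelrod values.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(history):
    # Cooperate first, then mirror the partner's last move.
    return 'C' if not history else history[-1]

def always_defect(history):
    return 'D'

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []   # what each player has seen the OTHER do
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600)
print(play(tit_for_tat, always_defect))  # (199, 204)
```

Two cooperators with cheat detection earn the full cooperative payoff; a pure defector gains only a one-round edge before being punished every round thereafter, and in a population of cheat-detecting cooperators the defector is the one that loses out.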

6/ Hidden preference in the games theoretic context of a tournament species

When one gets to the games theory level of tournament species, then it is clear that there are strong reasons for conscious agents not to be consciously aware of their deep preferences or limits; because to be so consciously aware exposes the risk of revealing them to opponents, who would then always win. So the idea of hidden preferences can have many levels of strategy behind it, and need not be at all simple.

7/ Paths/trails and levels of risk

Anyone who spends any time in nature will notice “game trails”. These trails are present because they lower the everyday risks that are present. If you are making your own trails, then there is risk of minor injury, and multiple minor injuries can add up to major vulnerability to predators or starvation. So there is always a complex balance, over deep evolutionary time, between reducing risk by staying with the herd, and exploiting opportunity by exploring new territory and finding new rewards. That is true at every level of strategy. It is present in many levels of biology and culture, and is something Jordan Peterson speaks well about. Every level of system needs both conservative and liberal elements. The greater the levels of communication and cooperation between those elements, the greater the security of all involved. And there can be no simple boundaries on something with so many infinite dimensions – all systems are necessarily low resolution models.

Returning to the discussion between Peter and Eric directly:

At about 1:08:00 into the discussion the subject of power laws comes up. I agree with Peter that not all things resolve into power laws.
All people have essentially the same amount of computational power, some just use more of it for abstract rather than more “concrete” activities.
And here we see the first hints of major conceptual problems with the notion of market values and social structures. And this is a vast and complex topic, and we do not have time to explore it in depth, but some useful simplifications can be made.
Markets only value things that are scarce. As soon as automation makes something universally available, then it has no market value. The systemic response thus far has been to erect artificial barriers to abundance (IP laws and others), to maintain the existing market structure. To a degree I can see the need for that as part of a transition strategy to abundance based thinking, and it is not a stable long term strategy. Similarly, UBI is a useful transition strategy that can buy us the time necessary to do all the very complex work to create alternative risk mitigation systems to all of those currently embedded in the market system.

And the thing to get is that, as automation makes things abundant, those things lose monetary value. So viewed from a monetary perspective they are not productive, but viewed from the perspective of what is available to individuals, they can be incredibly productive. One has to choose metrics very carefully; crossing boundaries leads to dangerous conclusions. Peter appears to have crossed such a boundary at this point in the discussion.

At 1:12:35 Peter says that if we have automation then we need UBI, but he doesn’t see automation. Those I have met who are easily capable of delivering such automation are not doing so precisely because we don’t have UBI, and they can see the social consequences of doing so without it. We have a classical conservative/liberal block. The conservative says: show me the system in practice and I will look at changing systems. The liberal says: the social damage from such a practical change without systemic change at the same time is too great. Deadlock.
We must break that deadlock.
If someone like Peter cannot see that, we are all in deep trouble.

At around 1:20:00 I agree with Peter that we need individual responsibility; it is one of the necessary boundary conditions for survival (both individually and collectively – that is, both social and ecological responsibilities). Where we disagree is on whether markets are capable of delivering that in contexts where much at all is universally abundant.
I also agree that framing UBI as welfare is not useful. It isn’t welfare, it is a universal dividend of the sum of human achievement to date.
Which segues nicely into the next topic of “Solving the inequality problem”.
To me, this is a major “framing error”.
The problem is not inequality.
Inequality is an essential aspect of freedom, diversity and creativity.
The real problems are insufficiency and insecurity.
The marketing dogma is that demand is infinite. The psychological reality is quite different.
For most people, demand limits quite quickly, and most are very happy with quite reasonable limits of material goods and services.

I share Peter’s aversion to violence, and aspects of our reality can be very violent and we ignore them at our peril. So one must be willing to confront violence, and with sufficient channels of communication and sufficiently diverse networks of diverse agents, that is a reasonably tractable problem space.

Agree with Peter’s comment at 1:27:30 that too much central control is never healthy for intelligent people.

Where Peter and I seriously part company is at 1:30:50, with his assertion that “it is hard to see how the process works without growth – the legislative process does not work”. In the sense of the existing processes, we can agree. But in the more abstract sense of systemic solutions to the space of such problems, there are many examples of things working. I spent about 14 years in one such collaborative consensus project here in Kaikoura, New Zealand (side note: Peter is somewhat notorious here for having gained citizenship with only 12 days in the country – it would be nice to see him here more often, to get a greater appreciation of the depths present in this culture).

At 1:34:10 we get to the heart of the Framing issue, and the reason for everything above.
What is human nature?
Yes we are deeply complex.
Yes we can be competitive or cooperative.
What we express is deeply dependent on context.
To a reasonable first order approximation it is true to say that we are fundamentally cooperative.
Placing people in an insecure competitive environment (markets in a context of exponential automation) does produce tendencies to violence and extreme competitive behaviour.
Conversely, delivering material and social security reduces the risk of violence and promotes cooperation and diversity. And it is extremely complex territory – many levels of complex adaptive systems that are not predictable even in theory; that demand an iterative approach.
It is only when the threats are external that we can support cooperation. There is no shortage of real external threats; we just need systems to ensure individual security and freedom. UBI isn’t any sort of final solution, and it is a useful transition strategy.

From about 1:46:45 the subject of preference falsification comes up.
To me it is a clear mis-framing.
It needs to be reframed in an evolutionary games-theoretic context of hidden preference limits in a semi-tournament species.
When that is done, the issue can be resolved.

Agree with Peter at 2:02:00 that people realise bad faith acting and grow out of it; and that is a deeply dimensional issue.
Bad faith acting can be a deeply recursive issue.
At what level is an unwillingness to seriously consider an alternative framing of an issue a matter of “bad faith”?
Where are the borders between “ignorance” (intentional or otherwise), “inability” (genetic or cultural – some form of faith capture), and “bad faith”?
Is “faith” an appropriate term, in an age of evolutionary epistemology and ontology?

I am committed to individual life and individual liberty, to the greatest degrees allowed by reality.
I acknowledge that there are many levels of fundamental uncertainty in reality, that are often masked by our overly simplified experiences of it.
I acknowledge that freedom demands responsibility if it is to be in the service of life (the value of life comes before the value of freedom). There are many more ways of dying than there are of living.

And this is profoundly deep systemic and strategic territory.

I can see no class of solutions that deliver reasonable probabilities of long term survival that are fundamentally founded in competitive markets.
There will always be a place for markets, and in the larger scheme of things, they need to be bit players to the levels of individual responsibility we must assume if we are to survive.
Jordan seems to me to have captured much of that in a way that communicates well to many people (even if he isn’t fully conscious of it himself). As he says, knowledge is often like that, we embody it before we become conscious of it.

Posted in economics, Ideas, Longevity, Our Future, Philosophy, Politics, Technology, understanding

Quora – AI and work

Will artificial intelligence ever replace the need for humans to work?

In what sense?

Humans need to work in the sense of doing something meaningful with some significant fraction of their time.

Work in the sense of having to do something in order to gain sufficient resources for living – that is not necessary. The only reason we have that now is a sort of social and mental inertia. We already have the machinery and systems to allow us to deliver every person on the planet all the resources they need to do whatever they responsibly choose, with only about 2% of the population needing to put a significant portion of their time into making it happen. But few have seriously looked at the systems we have, and their consequences; and fewer still have seriously explored what responsibility looks like in such a context.

When most things were genuinely scarce, having things to exchange made sense. Now that most manufacturing is close to fully automated, it makes no sense at all, except in the sense of maintaining existing systems.

This is a seriously complex issue.

There are many things that our existing systems do that few people have even begun to consider. So it is not a simple matter of just stopping existing systems. It is much deeper, more complex, and more nuanced than that, and it is something that needs doing.

We need to change from scarcity based thinking to abundance based thinking, and we need to do so in a way that is survivable, and avoids all the many dangers present.

So the answer is both yes and no, in different senses.

Being human is in part making choices in life that have meaning for us as individuals. For most people throughout history that meaning has come from various sets of default stories delivered by culture – whether they be about service to some idea (belief, religion, ideal, culture, etc), to some group (family, tribe, company, club, town, nation, etc) or any combination thereof. It is possible for us each as individuals to make a conscious choice of purpose and meaning, and as yet very few are doing so at higher levels.

It can be difficult teaching people to rely on their own choices when most cultures and traditions have taught radical obedience to some set of standards (ideals, lore, laws, traditions, etc) without question. It can be a difficult and unsettling process reaching a personal balance between a deep respect for the lessons from our past, and an active exploration of the possibilities contained in potential futures. The world seems to be sufficiently complex that there is room for eternal uncertainty in that process. Not all possibilities are survivable, and with a reasonable degree of cooperation, care and support, most are. The likelihood of long term survival in competitive environments is very low. Both we and the environment in which we exist are sufficiently complex and uncertain that the only approaches with realistic long term survival probabilities are fundamentally cooperative (with some competitive aspects within those cooperative boundaries).

It can be difficult for individuals to realise that their experiential reality is a subconsciously generated model of whatever “reality” actually is. Most think experience is reality, and in the subjective sense it is; but not in the objective sense. Reality seems to be sufficiently complex that we must all make our models of it at some resolution which is necessarily much simpler than the complexity present. Appreciating that can be difficult – for all of us, at every level, however many levels we may have achieved.

When dealing with complex systems, no set of rules can be appropriate in all cases. Complexity demands an iterative approach: probe, sense, respond, repeat – all levels.

All levels of structure require boundaries, and the more complex the structure, the more complex and responsive and adaptive those boundaries need to be.

Exploratory behaviour is essential to our survival, as is conservation of the lessons from the past – both are essential, neither can dominate the other.

Posted in Ideas, Longevity, Technology, understanding

Laurie – Shaw – Used Up

Thoroughly Used Up

“I am of the opinion that my life belongs to the community, and as long as I live, it is my privilege to do for it whatever I can. I want to be thoroughly used up when I die, for the harder I work, the more I live. Life is no ‘brief candle’ to me. It is a sort of splendid torch which I have got hold of for a moment, and I want to make it burn as brightly as possible before handing it on to the future generations.” G B Shaw
What do you want to hand to future generations?

I certainly agree that what we do needs to work for everyone.
I also agree that we need to keep active.

And it seems entirely possible to me that we can go on living indefinitely if we each act responsibly, respect diversity, respect the environment, respect the deep lessons from the past, and stay open to infinite creativity.

I want to give future generations my hand in friendship, personally (as distinct from leaving them anything).

Individual life;
Individual liberty;
Responsible action in social and ecological contexts.

Posted in Ideas, Laurie's blog, Longevity

Intelligence and morality

Facebook – FOLogic – Mikael Johnsson asked

How much can pure intelligence “alone” solve the problems of morality?

We are not pure anything.

We are evolved embodied entities.

Our conscious awareness is but a tiny part of the vast multi-leveled computational mass that is a human being.

We have many levels of systems, many levels of valences, all cooperating and competing in real time for phenotypic expression.

We are far more cooperative than we are competitive.

Our levels of intelligence are subject to override by older demands, like air, food, sleep.

Most of the patterns that govern most of our behaviour come from some combination of genetic and/or cultural factors – operating at various levels.

Most of our morality seems to be a set of evolutionarily selected heuristics that allowed populations of individuals like ourselves to survive in the contexts of our deep past.

Our present context is changing exponentially in ways that fundamentally alter many of the dynamics that made those moral heuristics as stable and useful as they were.

Certainly there is a sense in which we can explore the games theoretic context of positive sum cooperative game spaces, to get some insight into the sorts of heuristics that offer the greatest probability of long term survival with maximal degrees of freedom; and what we end up actually doing in any particular context is highly unlikely to be that!

[followed by]

Hi Pawel,

In the same sense that a computer is a system, or the internet is a system, or an ecosystem is a system, or New Zealand is a system, or the solar system is a system; then yes, we are a system, that has boundary conditions.

Our consciousness emerges from a lot of lower level systems that need to be in place.
That consciousness does not “control” those lower level systems, though it can influence them.

The idea that our consciousness is a singular thing is overly simplistic.

I’m not sure how much meditation you have done.
If you are practiced in meditation, then the idea that we control our thoughts should be clearly visible to you as a nonsense. Thoughts arise within us.
We then have some conscious level influence on what we do with the thoughts available.
One can learn to recurse this through a few levels, and there is some power in those practices and competencies, and they are still just variations on a theme from a systems perspective.

The greater the awareness we can build of all of those systems within us, and the classes of responses they tend to have to different classes of context, the greater the degree of influence we can create in our conscious existence in reality.

And that is a highly dimensional complex structure, and the degrees of complexity present demand that we use simplifying heuristics, and those heuristics will necessarily have failure modalities possible in some sets of contexts. That is what consciousness must necessarily be like – be it human or AGI – the computational spaces and uncertainties are infinite – and demand degrees of approximation – and those approximations necessarily lead to uncertainty in all levels of results. That applies to morality as much as anything else – at the level of the specific.

[followed by]

Hi Pawel,

Certainly we are much more complex than most of today’s computers.

Certainly we need to understand the many levels of systems that make us what we are, at least to the best of our limited abilities.

I completed my undergrad studies in biochemistry 45 years ago.

I have maintained an interest in all aspects of life and intelligence, the systems, the chemistry, the physics, the many classes of complexity. So I have some beginnings of an objective understanding of the complexity of the systems present. I made a conscious decision over 50 years ago to be a generalist, to get as much practical and theoretical knowledge from as many contexts as possible, in an attempt to help safely navigate the exponential changes that rapidly approach.

I would phrase it the other way.

We achieve degrees of autonomy to the degree that we become aware of the many levels of systems and influences within us. To the degree we remain ignorant of them, then to that degree we are at their mercy.

And it is all matters of degree and influence.

We need those subsystems to make us what we are.

And we are capable of eternally becoming, of transcending our prior limits; but only by understanding the nature of those systems and limits.

[followed by]

Hi Pawel,

There is no really simple definition of I and we.

We are not simple entities.
The thing I call me – “I am” is a cooperating colony of cells made up of about 10,000 times as many cells as there are people on this planet. Cells come and go, but the colony remains (at least for now).
Some of those cells are organised into groups that communicate with each other electrically. All cells communicate chemically. My self awareness, and my ability to have memory, and to plan and do things in reality, seems to be an emergent property of that very complex set of systems.

I can lose some of those cells, without the loss being too noticeable to the functioning of the system as a whole, but past a certain point the loss of function becomes noticeable.

Certainly, we have some aspects of our being that map fairly well to cybernetic descriptions.
We have other aspects that map well to social relationships.
We have other aspects that involve levels of abstract communication across time and space (like me reading Einstein or Godel – both of whom were dead when I read their works).
We have other aspects that have been selected and conditioned over deep time.

And certainly, there are relationships.
The really interesting thing to me is what seems to be the context sensitive nature of the strength of those relationships.

If you can think of a class of mathematical functions, you can probably find an instance of it somewhere in the functioning of a human being. We do in fact seem to be that complex.

[followed by To Erik – asked about mean time between failure]

Hi Erik,
Could you elaborate a little on exactly what dimension or range of dimensions you are referring to with that comment – I can’t localise to one, and several are interesting.

[followed by Erik introduced “infinite parallel redundancy”]

Sort of.
And more complex than that – Hayflick limit and all that stuff (as counter cancer strategy).
And it does seem probable to me that we will achieve indefinite life extension, significant progress here –

[followed by Erik introduced eventual failure – running out of universe]

Hi Erik,

I don’t think we disagree about much.

I tend to look at all things probabilistically.
I attempt to do what I reasonably can to put myself on the longevity end of the tails of distributions.
And it is a highly dimensional set of “spaces”.

There can be no such thing as absolute certainty, only probabilities.

It is a little over 9 years since I listened to an oncologist tell me “You could be dead in 6 weeks, you have a 50% chance of living 5 months, and a 2% chance of living 2 years; go home and get your affairs in order.” That conversation was not on my life plan.
I am a strict vegan. I have a minimum of 10g of vitamin C every day, in at least 2 doses of at least 5g (usually about 7g twice daily dissolved in a glass of warm water). I supplement with multi mineral and vitamins. I ensure I have enough omega 3s and plenty of B12. I do what I can to assist the transition from scarcity based thinking to abundance based thinking, at every level of society.

Right now, running out of this universe is a long way off, so not a problem I have devoted significant resources to.
Indefinite life extension – that is way up the priorities list.
Risk mitigation strategies – way up there too.
We have to get self replicating technology off this planet asap. We need serious engineering capacity in space for a host of very good reasons that would put most people into a state of permanent anxiety if I was explicit.
This universe is a very dangerous place looked at on a long enough time span.

So I do what I reasonably can when and where I can.
Develop competencies and networks.
Await situations where minimum effort can deliver maximum result.
So many dimensions of risk. Far more than most are aware of (more than most have ever counted to).

Posted in Ideas, understanding

Evonomics – Cost of Climate Change

The Cost of Climate Change

The criticism is valid, but misses the much more important factor.

Computation is doubling in less than a year.
Automation is infiltrating ever greater levels of complexity.

Once fully automated systems come on stream, scarcity based thinking (markets, exchanges) becomes not simply redundant, but misleading.

We need to think in terms of fully automated systems and abundance.
Current CO2 levels are forcing the system by 2W per m^2. That is less than 0.2% of incoming solar energy.
Launching mass from automated systems on the moon to create mirrors at L1 capable of modifying the incoming solar energy by just 1% would allow us to counter all current global warming.
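As a quick sanity check on those two numbers (a sketch using an assumed solar constant of about 1361 W/m², which is not a figure from this post):

```python
# Back-of-envelope check of the forcing figures above.
SOLAR_CONSTANT = 1361.0   # W/m^2 at top of atmosphere (assumed value)
CO2_FORCING = 2.0         # W/m^2, the forcing figure quoted in the text

# CO2 forcing as a fraction of incoming sunlight - should be under 0.2%
fraction = CO2_FORCING / SOLAR_CONSTANT
print(f"Forcing as a fraction of incoming sunlight: {fraction:.2%}")

# A 1% modulation of incoming sunlight via L1 mirrors
mirror_offset = 0.01 * SOLAR_CONSTANT
print(f"1% modulation: {mirror_offset:.1f} W/m^2 of incident flux")
```

The 1% figure gives several times the margin needed to offset a 2 W/m² forcing, which is why such a small modulation is sufficient.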

We need it.
We need it soon.
We need it for all sorts of other reasons that most people don’t want to think about because thinking about such risks causes them to go into anxiety attacks.

We have the ability to produce a world where everyone can experience security and freedom far beyond what most experience today.
And such freedom comes with responsibilities, so it is not freedom from all constraint (that is extinction).

The cost of such security and freedom, is giving up using markets as a dominant measure of value.

As a transition strategy, a Universal Basic Income (a relatively high one), will allow us time to make the necessary changes.

On current trends, this technology can be available by the mid 2030s, if we make it a priority.

If we don’t, then we are not looking at sea level rises of a few cms, but rather of 10s of meters.
Loss of ports and coastal cities. Temperature is rising exponentially, as positive feedbacks kick in.
The problem is perfectly solvable, but it has to be acknowledged before it can be solved.

This is a problem of far greater magnitude, at the same time as it is an opportunity to create something that has never before existed – a truly just and stable society, that has as its highest values individual life and individual liberty – both of which demand responsible actions in social and ecological contexts.

[followed by in reply to Fausten]

I know climate change is real.

I planted 16,000 trees on 35 acres 24 years ago to offset my carbon emissions.
That might work for me, but not enough land for it to work for everyone.
We need a realistic answer.
Exponential technology actually gives us a realistic way to mitigate the real problem.
It does require that investment be made to create the technology.
I’m not expecting people to change overnight.
I am expecting exponential technology to be able to do big things in a short time once the doubling time is down to 2 weeks.
That technology won’t make itself (at least not until the first one is built).
It needs real resources and real effort to create it.

[followed by in reply to Fausten]

Not 2030, initial tech available mid 2030s, solutions on the ground around 2040.

What I am proposing doesn’t rely on singularity, and nor does it exclude it.

Yes, there is something pragmatic we can do now.
We can all minimise our use of resources (to the extent we reasonably can, given our individual differences and levels of awareness), and we can cooperate to deliver global solutions.

Both are possible.

Both are necessary.

I am doing both, at many different levels simultaneously.

The thing about creating new stuff is that most people don’t believe it until they see it. Even when they can see it, many still don’t believe it.
We are very strange entities.

[followed by – in reply to Postkey – emissions till going up]

Yes – all true.
And this is a very complex issue.
Over simplifying it doesn’t help anyone.
How little are you prepared to settle for?

Are you prepared to live only within the area you can walk or cycle?

Are you prepared to live without air travel, without international trade?

Not many are.

There are many ways of thinking about and calculating energy equivalents, but if we take some of the more conservative, then a litre of gas can do the same amount of physical work as a man can do in two days. How many people can you employ for $1/day?
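One illustrative way to run that equivalence (all of these figures are assumptions for illustration, not from this post – the energy content of petrol, an engine efficiency, and a sustained human work rate):

```python
# Illustrative petrol-vs-labour arithmetic. All figures are assumptions.
PETROL_MJ_PER_L = 34.0     # energy content of a litre of petrol
ENGINE_EFFICIENCY = 0.25   # fraction delivered as useful mechanical work
HUMAN_WATTS = 75.0         # sustained human mechanical output
HOURS_PER_DAY = 8.0

human_mj_per_day = HUMAN_WATTS * HOURS_PER_DAY * 3600 / 1e6  # ~2.2 MJ/day
useful_mj = PETROL_MJ_PER_L * ENGINE_EFFICIENCY              # ~8.5 MJ
days = useful_mj / human_mj_per_day
print(f"One litre of petrol ~ {days:.1f} man-days of work")
```

With these assumptions the answer comes out at a few man-days per litre – the same order of magnitude as the conservative two-day figure above.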

Our standard of living is in part based upon the easy availability to our current technologies of the energy content of fossil fuels (coal and oil in particular), and in part on our technology and understanding (the ways in which we structure and organise things, at any and all levels).

We have known for many years how to harness alternative forms of energy (like solar), but solar is distributed, and fossil fuels are contained and relatively easily controlled and monopolised for the extraction of profit. So there is no economic incentive to move from fossil to solar – it is a loss of profit, a loss of scarcity – a move to abundance (and abundance has no economic value if it is distributed, only if it can be contained and monopolised).

Our brains are formed from linear comparators. We tend to look for and find linear relationships.

It is very difficult for people to easily recognise exponential relationships. For the first few terms a linear and an exponential are very similar.

Our use of solar energy has been on an exponential for over 40 years. On current trends it will meet existing electricity demand in 16 years.
It could do so much faster, but the oil industry has consistently blocked attempts to do so – because of profit incentives (there being no profit in the universal abundance of anything).
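Those two figures – solar doubling roughly every 30 months (a figure used later in this thread) and parity with existing demand in about 16 years – imply a current solar share that can be back-calculated as a consistency check:

```python
# Consistency check on the solar growth figures above.
DOUBLING_YEARS = 2.5      # ~30 month doubling time
YEARS_TO_PARITY = 16.0    # years until solar meets existing demand

doublings = YEARS_TO_PARITY / DOUBLING_YEARS   # ~6.4 doublings
growth = 2 ** doublings                        # ~84x growth
implied_share = 1.0 / growth                   # implied share of demand today
print(f"{doublings:.1f} doublings -> {growth:.0f}x growth")
print(f"Implied current solar share of demand: {implied_share:.1%}")
```

The implied starting point is a share of a percent or so of electricity demand – small enough that the exponential is easy to dismiss, which is exactly the perceptual trap described above.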

So yes – our use of fossil fuels has gone up, and that says more about the way in which we currently value things (by measuring value in exchange) rather than saying anything about our potential to do things differently.

And that is a very different and very complex set of systems, and this is a site focused on economics (which is about trying to come up with useful heuristics for understanding the complexity embodied in our existing social systems).

The technical challenges to solving climate change are trivially simple compared to changing the way people view their relationship to their ideas of value and the more abstract concepts of valence more generally (an indefinitely recursive abstract idea applicable from subatomic realms to the highest levels of abstraction).

I have stopped flying aircraft for fun, and try and minimise my use of motor vehicles, but I still see many of our economic institutions being about sales, about appealing to “status”, about being bigger and faster and a more conspicuous display of resource use (which was over deep evolutionary time a useful heuristic for ability and therefore a useful metric for judging selective advantage in social contexts – mate selection, social leadership etc).

Most of our production is about profit, not about the most ecologically and socially beneficial use of resources. So much deliberate misinformation, at many different levels, all in the service of profit.

So there are issues deeply embodied in the structure of our brains and our tendencies to types of neural activity and levels of valence that are not a great fit with our current addiction to markets and capitalism and the need for long term survival.
Greater awareness is required.

It is coming, slowly.

Exponential technologies offer us possible solutions if (and only if) we use them wisely (all levels).

Economists are used to thinking about exponentials in terms of doubling every 30 years or so.
Solar PV is currently doubling every 30 months or thereabouts.
As a marine ecologist by training, I know that some species of phytoplankton can double about every 3 hours.
If we can achieve fully automated manufacturing doubling every 14 days – which seems entirely achievable – it would be transformative. That equates to an increase in real wealth (benefits to individuals and societies), as opposed to market measures of exchange value, of a million fold per year (until saturation is reached – all real situations have limits). Hence, to avoid the heat saturation effects on the planetary ecosystem of the latter stages of such a reproduction rate, we do most of the primary production off planet.
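The million-fold figure is actually conservative; the arithmetic on a 14-day doubling time is straightforward:

```python
# What a 14-day doubling time means over a year.
DOUBLING_DAYS = 14

doublings_per_year = 365 // DOUBLING_DAYS   # 26 doublings in a year
growth_per_year = 2 ** doublings_per_year   # ~67 million fold
print(f"{doublings_per_year} doublings/year -> {growth_per_year:,}x per year")

# A million-fold (2^20) is passed after just 20 doublings
print(f"Million-fold reached after ~{20 * DOUBLING_DAYS} days")
```

Twenty-six doublings per year is tens of millions fold; the million-fold mark is passed in roughly 280 days.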

So the problem is easily solved once we stop thinking about it in terms of scarcity based value measures, and instead look in terms of abundance based values (what are the things we really need to have in abundance for everyone, and how can we most effectively achieve those, and what sort of limits really are present if we want to keep ecosystems in existence).

And this is a highly dimensional problem space. I have explored at least 10 more levels than I have explicitly mentioned here. There appears to be no logical limit to the number of levels or dimensions possible (each of them potentially infinite).

Any tool is morally neutral – it is always what we do with it that counts.

[followed by – in different subthread]

Hi Paul,

I agree with you that anything to do with complexity involves uncertainty, the more complex the greater the uncertainty in a sense, and we know of nothing more complex in this universe at present than the human brain.

And I am a reasonably competent sailor and pilot, and even in my 200 hours of flying gliders, I always managed to get where I was going, and never had to “land out”. I never went to sea and didn’t make it back to port, even through storms of over 80 knots of wind, and towering waves. No way could I have predicted the exact path I would have taken, and I did manage to predict the outcome through an iterative engagement with the system of invisible air currents and various visible features in each case. Planning for the future with confidence is like that. One gains experience and competencies, then one enters the dance with reality, and one gets through the dance in ways one couldn’t possibly have predicted in detail, yet can give the general outline of the sets of strategies and competencies used.

Developing technology is like that.
I spent 17 years on fishing boats, mostly boats I owned, often making them do innovative stuff on very tight budgets (so a lot of practical engineering experience).
I started programming computers 46 years ago, in the days of paper tape and punch cards, and formed my own software company 33 years ago, which I still spend a few hours a week running.
I have a lot of computers in this house, laptops, desktops, Raspberry Pis, Arduinos, a Parallella, and a dozen or so sundry other odd processors in various boards and configurations (quite a few of them employed in recording endangered species in this region — getting actual data to allow us to formulate realistic ways of preventing their extinction – I chair the Hutton’s Shearwater Charitable Trust, but work in many other capacities locally and regionally).

So yes, uncertainty about details, and confidence about the ability to produce real outcomes given enough commitment and determination (and I think the dietary regime I adopted 9 years ago to beat a terminal cancer diagnosis demonstrates the level of commitment and determination I can generate and maintain) – both are real.

My house is 100m above sea level, and a 10 minute walk to the ocean – I chose it in full knowledge of probability spectra 23 years ago (I calculated maximum likely sea level at 80m above current, if everything goes wrong). And we had a 7.8 earthquake here in 2016, that tested many of my systems, and they all passed with high grades.

There is certainly a massive engineering cost to building the first set of self replicating machines, probably of the same order of magnitude as building a Nimitz class aircraft carrier to build one set of machines with a total mass of about 2 tons. But that cost is incurred only once. Build 1, test and debug it down here for a year or so until we have a reasonable population, then ship one to the moon and get it replicating as fast as it can. Takes about 2 years to cover the surface of the moon with solar cells. Then you can use solar powered linear motors to accelerate moon mass back into earth orbit or elsewhere (no problematic atmosphere to worry about). Can do all the obvious tech on the far side of the moon, so little would be obvious to earth observers. The change in lunar albedo would not be obvious to most people. We end up with the ability to do serious engineering in space under remote control – all possible within 5 years of producing the first fully debugged unit.
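A sketch of that lunar timeline, under loudly assumed figures: a 2 ton seed population, the 14-day doubling time used earlier in the post, roughly 1 kg of deployed solar cell per m², and the moon's surface area of about 3.8e13 m² (none of these specifics are from the original text):

```python
# Sketch of the lunar self-replication timeline, under assumed figures.
import math

SEED_TONNES = 2.0         # initial machine population shipped to the moon
DOUBLING_DAYS = 14        # replication doubling time (assumed)
MOON_AREA_M2 = 3.8e13     # lunar surface area
CELL_KG_PER_M2 = 1.0      # assumed areal density of deployed solar cells

needed_tonnes = MOON_AREA_M2 * CELL_KG_PER_M2 / 1000.0   # ~3.8e10 t
doublings = math.log2(needed_tonnes / SEED_TONNES)        # ~34 doublings
years = doublings * DOUBLING_DAYS / 365.0
print(f"~{doublings:.0f} doublings, ~{years:.1f} years to cover the moon")
```

Under these assumptions the answer comes out between one and two years – consistent with the "about 2 years" estimate above, and a reminder of how insensitive exponential timelines are to the size of the seed.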

And doing anything like this demands a level of global cooperation, and such cooperation requires a level of external existential threat to get established, and we have such conditions right now. So all entirely possible from a complexity and games theoretic set of perspectives.

We can then put mirror systems into L1 and manage sunlight (managing within a 1% range is sufficient to hold climate stable and prevent sea level change or ice age – not the sort of thing most people would notice), and prefabricating a global high speed tube train system in orbit and bringing it down for assembly eliminates the need for air travel. Such things can be planned and achieved. And sure, it is impossible to work out every step of the process in detail beforehand, and doing such stuff via an iterative process is what geeks like me and many others I know love to do. It is intellectually challenging and stimulating.

Lots of stimulating and interesting stuff then becomes possible, like building really large telescopes, building really large habitats in space, doing experiments in them that are too dangerous to do here on earth, etc.

Yes there are unknowns, and it is entirely achievable, and does solve most of the existential risk issues that we have right now, that cannot be mitigated in any other realistic fashion. If Elon has proved anything, he has proven that it is possible!

We have a window of opportunity, but it is not a large one.

Given the amount of weaponry already developed (particularly biotech and nanotech), survival of global scale conflict is not high probability for anyone. Global famine is similarly not survivable.
Climate change is real, and a small scale risk in the big picture analysis, but it is a risk that can go public without inducing immediate catatonic response in a large fraction of the population, so is a good one to focus on at present – a useful catalyst in a very real sense.

I know people exist who could make this happen, who would do so for relatively little up front cost, but with the assurance they would each end up with a few trillion tons of moon mass in earth geostationary orbit at the end of the project to do with whatever they responsibly chose (tiny compared to what governments would control). With a guarantee that any government could observe everything they were doing, and with very high guarantees of individual freedom (coming from the high guarantees of individual responsibility on their part).

All doable, and not predictable in detail.

Posted in economics, Nature, understanding