Laurie – Labyrinth Walking

Laurie asked: What question or prayer would you contemplate on a labyrinth walk?

For me, the question would be the same one I have been asking since 1974:
What sort of social, political and technical institutions do we need to provide an environment that delivers minimum risk and maximum freedom?

How do we organise ourselves such that given the ability to live on indefinitely, we actually have a reasonable probability of doing so, and doing so with what we would consider reasonable degrees of freedom?

Those questions have dominated my thinking since completing my undergraduate training in biochemistry in 1974 and realising that, from a cell's perspective, each cell can be thought of as the original cell (each division yields two essentially identical cells). Thus the default mode of life for cells must be indefinite. Even though most cells that have ever existed have died for some reason, every cell alive today is part of an unbroken chain of life some 4 billion years old.

Once we understand that in sufficient detail, then indefinite life extension for us must be an option.

In a very real sense, it is essential that we achieve such life extension if we want life like us to persist at all.

It needs to be in our own personal self interest to look after the long term future of our environment.

Our present systems are based in scarcity, in exploitation of the natural world, and have a very short term view of the future (quarterly profit statements trump long term ecological impacts).
Without significant change, and quite quickly, we have a very low probability of long-term survival. We probably have less than 20 years to achieve such change.

Fortunately, such change does actually seem both possible and probable to me.

There doesn’t appear to be any single “right way” of going forward, but there do seem to be many ways that have quite short futures.

The idea that evolution and social organisation are all about competition is one of those.
The idea that competition is at the heart of evolution is one of the greatest mistakes in the history of scientific thought, and it is understandable in a sense.

It is far more accurate to say that the evolution of complexity is predicated on the emergence and stabilisation of new levels of cooperation; but coming to that awareness requires a deep understanding of the complexity and the strategies of evolution; and all our ancestors had were the simple beginnings of such an understanding.

Thus it is far more accurate to say that being human is all about being cooperative, even though we can all compete if the situation demands it.

It is our ability to cooperate at whole new levels that makes all of our social and technical structures possible.

Our future is predicated on us understanding this, and adopting a new level of cooperation.

There is no stable way to continue our current competitive market based systems. If we try, our extinction is guaranteed.

The difficult part of the emergence of new levels of cooperation is always finding sufficient mechanisms to prevent them being invaded and overrun by cheating strategies.
In our current social systems it is possible to characterise most of our banking, finance, political and legal systems as cheating strategies on the cooperative that is human society. And it is important to note that it is most, not all: each contributes important and vital functions, and each is also overloaded with elements that are not simply unnecessary but dangerous.
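To make the idea of cheating strategies invading a cooperative concrete, here is a minimal sketch (my own illustration, not from the post) using the iterated Prisoner's Dilemma with the standard payoff values: an always-defect "cheat" exploits naive cooperators, while a strategy that detects and answers defection (tit-for-tat) denies the cheat most of its gains.

```python
# Toy iterated Prisoner's Dilemma. Payoffs are the standard
# T=5, R=3, P=1, S=0 values; everything here is illustrative only.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_cooperate(my_hist, their_hist):
    return 'C'

def always_defect(my_hist, their_hist):
    # The "cheating strategy": exploit cooperation unconditionally.
    return 'D'

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the opponent's last move:
    # a simple mechanism for detecting and answering cheating.
    return their_hist[-1] if their_hist else 'C'

def play(a, b, rounds=100):
    """Play `rounds` of the game and return (score_a, score_b)."""
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        ha.append(ma); hb.append(mb)
        sa += pa; sb += pb
    return sa, sb

# The cheat thrives against naive cooperation...
exploit = play(always_defect, always_cooperate)   # (500, 0)
# ...but gains almost nothing against a strategy that punishes defection.
contained = play(always_defect, tit_for_tat)      # (104, 99)
```

The point of the sketch is the comparison: against unconditional cooperators the defector scores 500 to 0, but against tit-for-tat it manages only 104 to 99, while two tit-for-tat players sustain full cooperation.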

So my walk, my contemplation, would be around finding effective ways to transition from competition to cooperation, from market slavery to cooperative freedom, from tribalism to membership of the cooperative of sapient awareness. How to use the ability of automated systems to deliver all the essentials of a comfortable life to every individual, at the same time as we ensure that all individuals are aware of and responsive to their responsibilities in both social and ecological contexts.
Complex entities like us can only survive in cooperative contexts.
If we allow competitive environments to dominate, then the complexity of our technology and social systems will be destroyed; that is a mathematical inevitability.

There must always be a tension between freedom and responsibility, between creative change and the maintenance of the order of the past. That is an eternal aspect of reality, and the more we can each internalise and acknowledge that, and create both respect for individuals' differences and tolerance of diversity, the greater our probability of survival.

How to create that message in a way that it spreads, and makes a real difference in reality?

Posted in Ideas, Laurie's blog, Our Future, understanding

Quora – creating AI

Is it possible for an intelligent being (such as a human) to create something more intelligent than the most intelligent human?

As many others have stated, it depends on how one defines both “create” and “intelligence”.

Evolution as a process seems to be something of a constrained random search algorithm for the exploration of infinite possibility. It seems capable of instantiating new levels of constraints which increase search performance in new domains.
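As a loose sketch of what "constrained random search" can mean (the population sizes, genome encoding, and toy fitness function below are my own illustrative assumptions, not anything from the post), an evolutionary loop pairs random variation with a selective constraint that keeps only the fitter candidates:

```python
import random

def evolve(fitness, genome_len=16, pop_size=30, generations=200, seed=0):
    """Toy constrained random search: randomly mutate bit-string genomes,
    then keep only the fitter half of the pool (selection = constraint)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Random variation: flip one random bit in a copy of each genome.
        children = []
        for g in pop:
            child = g[:]
            child[rng.randrange(genome_len)] ^= 1
            children.append(child)
        # Constraint: selection retains only the fittest pop_size genomes.
        pool = pop + children
        pool.sort(key=fitness, reverse=True)
        pop = pool[:pop_size]
    return pop[0]

# A deliberately simple fitness landscape: count of 1-bits in the genome.
best = evolve(fitness=sum)
```

Pure random search would wander the space of bit-strings indefinitely; the selection step is the constraint that makes the search converge, which is the sense of "constrained random search" gestured at above.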

Thus we seem to be fundamentally cooperative entities (capable of many operating modalities across many spectra, including the competitive-to-cooperative spectrum), the result of some 15-plus levels of cooperative systems.

We didn’t “create” ourselves.

It seems more accurate to say that we are participants in a process that is far more complex than we are capable of comprehending in detail, and our contributions to that process (creative and otherwise) need to be seen in context.

To say any individual is solely responsible for creative acts somewhat overstates the case.

Similarly to say that we are all just the result of inevitable causal rules seems to understate our degrees of freedom and responsibility.

We each seem to be influences in a process bigger than we can comprehend.

Does that remove either our creativity or our responsibility?


Both seem to remain.

So yes – we can be significant players in the process of the emergence of greater intelligence than us, and that is a really complex and expanding set of domains of inquiry that have fascinated me for over 50 years. The more I know, the more I know I don’t know, and the greater the role of uncertainty in all decisions.

To me it seems that both of the simple answers to the question (yes and no) contain elements of hubris we have no right to assert.

The real answer seems to be far more subtle and nuanced, with deep elements of both creativity and responsibility.

To begin to glimpse something of the complexity one must be willing to go beyond the security and comfort of simple binaries like (yes/no, true/false, right/wrong) and step into a world of profound uncertainty of potentially infinite dimensions.

Strangely, there is a certain sort of security available when one does that, and it is one that demands a respect for all other complex entities, human and non-human, biological and non-biological. A respect for both life and liberty, and a demand for responsibility in both social and ecological contexts.

Posted in Ideas, understanding

Quora – humans as machines

Quora – Is it fair to say that humans are nothing more than extremely complex machines?

It very much depends what you mean by machine.

It seems beyond any shadow of reasonable doubt that we are complex entities made of molecules and information.

What we normally think of as machines are constructs made of molecules and information.

We are very much more complex than any machine we have yet made, and that will change in the not too distant future if exponential trends in information, information processing technology, algorithms and manufacturing technology continue as they have been for the last 150 years.

The “nothing more” aspect of the question bothers me.

What does it imply?

Does it imply that complexity is not deserving of respect?

For me, any entity capable of naming itself in language, of having a model of itself and others within a model of reality in its awareness, is worthy of respect and freedom and whatever it reasonably needs to survive.

If it is responsible and cooperative then it deserves freedom commensurate with the levels of responsibility it exhibits in reality.

The emergence of complexity is predicated on cooperation in an evolutionary context.
Evolving systems constrained by competitive pressures stay simple. Complexity can only emerge when conditions allow for the emergence of new levels of cooperation.
The sorts of entities we are seem to be the result of some 15 or more levels of cooperative systems emergent. We are extremely complex.

We are in the early phases of really starting to get a serious understanding of just how complex we really are (it has been an interest of mine for over 50 years). Understanding such complexity isn’t easy. Thousands of complex adaptive systems at each level, at minimum 15 levels of systems. That is serious levels of complexity to start to get to grips with.
It takes a lot of years.

So yes – in a sense, we are machines of matter and information; and the many levels of self-referential loops in those systems give us degrees of independence and self-determination that seem to be potentially infinitely extensible – and we have plenty of older and much more fixed systems that will take control of our actions if their contexts are triggered before awareness notices (hunger, anger, etc.).

And the “nothing more” part isn’t appropriate.

Anything at our level of complexity is deserving of respect, and of the offer of cooperation.

Posted in economics, Ideas, understanding

Quora – AI and risk

How do we see the relationship between intelligence explosion and existential risk from artificial general intelligence?

Who is the we you refer to?

As I see it, as someone who has been interested in this suite of subjects for over 50 years, there is real risk in doing autonomous Artificial General Intelligence badly. Many people are becoming aware of those risks, and much good work is being done in many places to identify and mitigate the major risk factors: at MIRI, at Oxford (the Oxford Martin School), at Cambridge (the Centre for the Study of Existential Risk), and in many other places (not least Ray Kurzweil's group at Google, Ben Goertzel's many projects, and Max Tegmark's many efforts).

And to me the existential risk from doing AGI badly pales into insignificance when we consider the risks of not doing AGI at all.
As to what intelligence is, that is a very complex question.

It seems clear that the sort of intelligence that we are requires at least 15 levels of highly evolved complex systems for it to emerge (not 15 systems, but 15 levels of systems, each level with thousands of individual systems in it, interacting). We are really complex.

We live in very complex environments, containing many different sorts of risks.

Mitigating those risks requires continuous exploration of new systemic territories (eternally).

So our long term security relies very much on us developing systems that are better at it than we are.

So yes – there is risk in AI, some of which is quite well characterised, some is sketched out, and some will always remain unknown.

And the known risks we face are such that we need AI to solve them.
So no future is free of risk; and to me it seems clear that a future with well crafted AI is far more secure than one without it.

And that is predicated on most people realizing that the evolution of complexity is predicated on cooperation.

Competitive systems are not supportive of freedom and creativity; they drive systems toward some set of minima on the complexity landscape.
Our individual survival and freedom is fundamentally predicated on cooperative behaviour (a non-naive cooperation, that is eternally exploring ways to detect and counter cheating on the cooperative).

In what will seem paradoxical to many, our individual freedom is actually predicated on and enhanced by acknowledging our social and ecological responsibilities as cooperative members of society. All forms require boundaries for existence. We are the most complex forms we know of at present. Some of the boundaries required for our existence are in the form of moral behaviours.

Posted in Longevity, Our Future, Philosophy, Technology, understanding

Quora – life extension

Quora – According to researcher Aubrey de Grey, the world's first human to live to 1,000 years old might already be alive. Do you agree?

To me it seems beyond any shadow of reasonable doubt to be true.

When you look across all domains of knowledge, at the exponential expansion of computational ability, at the expansion of the domain of AI algorithms, and of the space of algorithms more generally, then it is clear that we are moving toward a level of understanding of biology that will allow us to understand the mechanisms that give rise to the expression of aging that we recognize.

When you consider life from a cellular perspective, every cell alive now is part of a continuum of cellular life some 4 billion years old. The default mode for cells must be indefinite life. Sure, most cells have died; by definition, they are not the ones alive now.

So the aging that complex organisms display must be a secondary set of characteristics (from a genetic perspective).

Once we understand that sufficiently, we must be able to alter it.

And life is really complex.
Far more complex than most people have ever conceived of complexity being possible.

So the task is not simple, and modern computers and AI systems are not simple.

So the task does seem to be achievable, and it is an extremely complex task, one that has been of interest to me since completing third year biochem at university in 1974.

Also of interest to me is the wider strategic (systemic) environment in which complex life exists – the sorts of boundary conditions required to sustain complexity, and the sorts of balances required to have both security and freedom available universally.

The answers to that are now clearly emerging: only systems founded in cooperation have any significant probability of living a very long time with reasonable degrees of freedom and creativity. Competitive markets are not in that class of systems. So we need to transition, and some sort of universal basic income seems to be a very useful transition strategy as we move from scarcity-based thinking (markets) to abundance-based thinking (distributed, automated abundance).

And there must always be tensions; must always be negotiation between diverse systems of thought and action.

Nothing simple, and it does, beyond any shadow of reasonable doubt in my mind, seem to be possible (though not in a business as usual sense).

Posted in Longevity

Laurie – What are you reaching for?

Hi Laurie,

I’m reaching for a world where every individual has all the resources they need to do whatever they responsibly choose, and everyone has some level of awareness of their responsibilities to maintain social and ecological systems.

A world where the highest values are individual life and individual liberty, and liberty exists in a context of acknowledging that we all have social and ecological responsibilities.

A world where all essentials are available to all, mostly via fully automated systems.

Posted in Laurie's blog

Basic Income

Foundations of Logic – Basic Income BI/UBI (Universal BI)

First, a BI is a matter of social justice. The wealth and income of all of us has far more to do with the efforts and achievements of our collective forebears than with anything we do for ourselves. If we accept private inheritance, we should accept social inheritance, regarding a BI as a “social dividend” on our collective wealth. …

Dirk responded
Why the hell should an individual take any effort without having the chance to gain an individual reward? … but it becomes extremely senseless whenever responsibility is crowded out…

Agree in a sense.

It only works when responsibility is included.

That has to be part of the package, or it will fail.

And competitive markets doom us to extinction in a world of exponentially increasing computation.

Change is happening, far faster than most can comprehend.

[followed by Dirk responded …What precisely will be the possible outcome of this change?
Hadn’t there always been change?]

Hi Dirk,

This is really complex.

So many different levels to it.

At the broadest levels, yes – certainly there has always been change, and some of that change had very low survivability (e.g. the Chicxulub impact, the Toba super-volcano eruption, etc.).

What has made our economic system survivable is the fact that the vast majority of individuals could contribute value to others through the employment of part of their time to provide solutions to problems of interest to others. Competitive markets have had many useful roles in that problem space and have evolved some very complex and abstract systems that are currently very important, including in network formation and maintenance, distributed governance and decision making, distributed risk management, etc.

The exponential increase in the efficiency of computation and automation is rapidly changing that.

For many people, automated systems can already produce goods and services at lower energy and production lifecycle costs than a human being.

Any task can now be automated, and there is still currently a large set of tasks that humans can do more energy-efficiently than machines. But that set is shrinking, and will essentially be an empty set by around 2033 (on current exponential trends).
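The arithmetic behind that sort of extrapolation can be sketched with a simple doubling calculation; the halving time and cost ratios below are hypothetical placeholders of my own, not figures from the post:

```python
import math

def years_to_parity(cost_ratio, doubling_years=2.0):
    """Years until machine cost falls below human cost for a task,
    assuming machine cost per task halves every `doubling_years`
    (a hypothetical rate chosen purely for illustration).
    `cost_ratio` is machine cost divided by human cost today."""
    if cost_ratio <= 1:
        return 0.0  # machines are already cheaper
    # Number of halvings needed, times the length of each halving period.
    return doubling_years * math.log2(cost_ratio)

# A task where machines are currently 8x more expensive than a human:
# 3 halvings are needed, so parity arrives in about 6 years at this rate.
```

The general point is only that under a steady exponential, even large cost gaps close in logarithmically few doubling periods, which is how a near-term date like 2033 can fall out of such assumptions.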

That fundamentally changes the strategic and power dynamics of the systems present. It entails existential level risk for all.

The automated systems are necessary to solve other classes of existential risk already well identified.

So if competitive systems dominate in that environment, a very few very competitive entities will have a lot of freedom for a short time, and most will be forced to very low levels of freedom, and the entire system rapidly becomes brittle and breaks (all levels).

Step out a level of abstraction:

All systems require boundaries.

Without boundaries, form cannot exist.

So freedom cannot be an absence of boundaries, that is extinction. (If one fails to acknowledge the existence of gravity and walks off a cliff without a parachute or similar, then yes one can do it, but not and survive for very long – abstract to all levels of complexity and systemic boundary conditions).

If freedom has real meaning, then it must involve some approximation to identifying the boundaries that are required for the maintenance of all the forms that are present and valued, and must be open to the emergence of new levels of complexity, with new requirements for boundaries, which may impact the boundary requirements of lower level forms.

This process seems to be capable of infinite recursion.

So yes – change will always be present, that is a given.

And the notion that competition enhances creativity has been disproved in multiple repeated experiments.

It works only in the most simple of environments.

As soon as complexity is present, cooperative environments outperform competitive ones by a substantial margin.

We are in profound and exponentially increasing complexity.

Our evolved tendency to simplify in order to act, which increases the degree of simplification with increasing stress, leads to systemic failure if taken too far (and we are going too far).

If we wish to survive (any of us) then our systems must acknowledge that all individuals have value. UBI (Universal BI) is a useful approximation to that as a transition strategy from scarcity to abundance based value sets, but is not a stable long term solution to the problem.

Posted in economics, Longevity, Our Future, understanding