Over-population?

Longevity will lead to Overpopulation – we need to consider our options now

I can only agree with the two previous commenters that the prime assumptions in this article are clearly false.

Population increase is not a given.
Most people like sex, and we have contraceptives, so we can enjoy sex without babies.
Most people like the look of babies, but not many people actually like devoting the time, attention and resources required to bring a child to adulthood (mostly time and energy). So reproduction rates tend to drop as real freedom increases (freedom with the accompanying resources to empower choices).

Scarcity was a significant factor throughout history, so many of our systems have evolved to deal with that reality – most significantly market based capitalism (market value is entirely scarcity based) and our political structures.

Automation allows us to deliver abundance of a large and growing set of goods and services. This abundance could be available to every person on the planet, except that doing so would break the market based system so many people are used to – as the value of any universal abundance must drop to zero (just like oxygen in the air – very important, very abundant, zero market value).

So the big question of our age is exactly how we are going to transition away from scarcity based thinking (money, markets and capital) and deliver the universal abundance that automated technology makes possible.

One part of that seems to be getting people to look at evolution from a systems perspective.

When one looks at the levels of complexity present in living systems over time, one can clearly see an exponential trend in the development of complexity that is linked to the emergence of new levels of cooperation.

As Axelrod showed, for raw cooperation to be stable it requires attendant strategies to prevent cheating strategies from overrunning the system. There appears to be an infinite set of classes of such strategies.
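To make that concrete, here is a minimal sketch (in Python) of the kind of iterated prisoner's dilemma tournament Axelrod studied; the payoff values and strategy names are the conventional ones, not taken from this article. An unconditional cooperator is overrun by a defector, while a simple retaliating strategy such as tit-for-tat keeps cooperation stable.

```python
# Minimal iterated prisoner's dilemma sketch (conventional payoffs, illustrative only).
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_cooperate(my_history, their_history):
    return 'C'

def always_defect(my_history, their_history):
    return 'D'

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's previous move.
    return their_history[-1] if their_history else 'C'

def play(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == '__main__':
    print(play(always_cooperate, always_defect))  # cooperator is exploited (0, 1000)
    print(play(tit_for_tat, always_defect))       # defector gains almost nothing (199, 204)
    print(play(tit_for_tat, tit_for_tat))         # stable mutual cooperation (600, 600)
```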

The classic view of evolution is one of competition only.
That is clearly an inadequate understanding.
Cooperation has been taking an ever more dominant role over time.

It now seems clear that the next step in evolution is for cooperation to completely dominate competition, with a resulting exponential increase in diversity at all levels: universal cooperation of sapient entities, at the level of respecting the life and liberty of every individual, and delivering systems that ensure the survival and self-actualising needs of every individual are met.

From this perspective one thing is abundantly clear – all market based systems, and all centrally based systems of control, have reached the end of their social utility.

Long term security and prosperity now demand of us that we develop new levels of systems that are based in abundance and cooperation, empowered by technology.

The likely outcome of this would seem to be a population of individuals scattered across every spectrum one can imagine, including but not limited to: biological to silicon based sapience, biological to mechanical bodies, …

We do have some genuine limits to deal with.

The energy balance of this planet.
The nutrient flows of this planet (particularly phosphorus) – we need to get much smarter at recycling at every level.

And it seems that these are relatively simple engineering challenges in a sense, as is every aspect of global climate change.

Once we have a set of machines that can fully automate their own production and maintenance (and are under human control – not AI or self-aware, just programmed systems), then we can do all our serious engineering projects off planet, using mass from the far side of the moon in the first instance.

We are not short of energy – the Sun is a huge source of energy, and already our solar cell technology is reasonably efficient at turning sunlight into electricity.
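As a rough back-of-envelope illustration (using commonly cited approximate figures, not numbers from this article), the sunlight intercepted by the Earth dwarfs current human energy use by roughly four orders of magnitude:

```python
import math

# Rough back-of-envelope comparison; all values are approximations,
# not figures from the article.
solar_constant = 1361          # W/m^2 at the top of Earth's atmosphere
earth_radius = 6.371e6         # m
human_consumption = 1.8e13     # W, roughly current global primary energy use

# Power hitting Earth's cross-sectional disc.
intercepted = solar_constant * math.pi * earth_radius**2

print(f"Sunlight intercepted by Earth: {intercepted:.2e} W")                      # ~1.7e17 W
print(f"Ratio to current human use:    {intercepted / human_consumption:.0f}x")   # ~10,000x
```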

Our biggest threat is now clearly our unexamined assumptions about the cultural systems we find ourselves in, money and markets foremost amongst them.

The whole concept of economy is based upon central control (control of the household, scaled up).

Rather than control, we now need to move beyond economics into finding effective strategies that allow widely divergent paradigms and technologies to co-exist, with as little interference with each other as possible.
Many promising approaches out there.
Elinor Ostrom’s work seems particularly interesting in this regard, if one steps it up a level.
Many practical examples of this in action, which can be scaled up as required.

[followed by]

Hi Dobermanmac
Nice to start my 60th birthday by being transported 9,000 miles from my home in Kaikoura, New Zealand to New Jersey 😉

Agree with you that exponentials will play an increasing role in the reality of being human (in multiple domains, computation, technology, time, space, possibilities, strategies, relationships, levels of awareness).

Agree that the capacity of the human brain is potentially unlimited, and that the greatest limits are those imposed by the unexamined assumptions of our culture. I love the quote from Mark Twain “What gets us into trouble is not what we don’t know, it’s what we know for sure that just ain’t so.” So many levels on which that is operant right now. Most people have yet to start even taking baby steps into seriously investigating ontology and epistemology.

Rand was partly correct. Man’s capacity to think gives us potentially unlimited technology, and unlimited modes of valuation, and unlimited levels of awareness. However, any particular individual will be at some specific location on the infinite topology of the intersection of all distinctions present.

The idea of economy comes from the Greek oikonomia – the management of the household (oikos).
There are two major senses of that.
In the classical hierarchical sense, there is a leader, and everyone else does what they are told.
Another possibility is everyone being aware of what needs doing, and reaching agreement by consensus as to who will do what.
Consensus decision making is extremely powerful if one has lots of time to play with. If one is dealing with strongly time-bound constraints, the most effective strategy is to have a single leader who commands where necessary. In my 17 years at sea, mostly skippering, the best crews knew exactly what was needed and did it without any commands from me. It was only in the most dire of circumstances that every command needed to be obeyed instantly and without question. And there was always time for review of such performance once we were back in sheltered waters and out of the storm.

I am very supportive of the latter style of leadership, where all individuals consciously consent to being led, and the leader works at developing the competencies of every individual in the areas that most interest them, delegating leadership whenever and wherever possible.

So if one takes this widest of possible views of the term economy, then yes we must have such a thing. But if one uses it in the most commonly understood sense of market based exchange values, then clearly, logically, it is rapidly reaching the end of its social utility, and is currently causing at least as many problems as it solves. Market based systems seem clearly to this observer to be shifting from the space of being solution-multipliers to the space of being problem-multipliers (creating more problems than they solve).

As to the notion of the singularity, many common aspects of it appear to actually be based in linear thinking.
When one starts to look at the space of all possible problems, yes, certainly there are some problems that scale linearly with computing power, and increasing computing power does allow solutions to that class of problems to expand exponentially. And automation without AGI allows for very much the same outcome.

When one looks at the more interesting classes of problems – those that don't scale linearly – one sees something quite different.
Some problems scale as simple exponentials. Even some of the simpler methods for solving the equations of QM scale as the 7th power. So AI isn't going to make significantly more progress than us on that front.
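A hedged illustration of that scaling argument: if available computing power grows by a factor of 1000, the feasible problem size grows very differently for linear, 7th-power and exponential cost classes.

```python
import math

# If compute grows 1000x, how much bigger a problem can each cost class handle?
compute_gain = 1000.0

linear_gain = compute_gain                   # cost ~ n   : feasible size grows 1000x
seventh_power_gain = compute_gain ** (1 / 7) # cost ~ n^7 : feasible size grows ~2.7x
exponential_gain = math.log2(compute_gain)   # cost ~ 2^n : feasible size grows by ~10, additively

print(f"cost ~ n   : feasible size x{linear_gain:.0f}")
print(f"cost ~ n^7 : feasible size x{seventh_power_gain:.1f}")
print(f"cost ~ 2^n : feasible size +{exponential_gain:.1f} (additive, not multiplicative)")
```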

Then there is the whole issue of the class of possible truth values.
Most people only think in terms of the simplest of possible binary classes – true and false. Rachel Garden did an interesting paper demonstrating how classical and QM logic can be reconciled by allowing a tri-state system of true, false and unknown. Other possible domains exist where all truth values are probability based, with the probability of certain knowledge of either true or false asymptotically approaching zero.
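For illustration only, here is a minimal sketch of one standard three-valued (Kleene-style) logic with values true, false and unknown; this is not necessarily the formal system used in Garden's paper, just the general idea of admitting a third truth value.

```python
# Minimal three-valued (Kleene) logic sketch: True, False, Unknown.
T, F, U = 'T', 'F', 'U'

def tri_not(a):
    return {T: F, F: T, U: U}[a]

def tri_and(a, b):
    if F in (a, b):
        return F            # one definite False settles the conjunction
    if U in (a, b):
        return U            # otherwise any Unknown leaves it Unknown
    return T

def tri_or(a, b):
    if T in (a, b):
        return T            # one definite True settles the disjunction
    if U in (a, b):
        return U
    return F

# Classical laws behave differently: the excluded middle is no longer certain.
print(tri_or(U, tri_not(U)))   # 'U' - "p or not p" is Unknown when p is Unknown
print(tri_and(F, U))           # 'F'
```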

Then there is the whole set of classes of systems where the halting problem comes into play – fractal based problems, non-computable problems, problems where the nature of computability is uncertain, etc.

So AI will certainly add interest, but it isn't necessarily going to make noticeably greater progress on the more interesting problems than we humans are making. The term "singularity" is a little over-hyped – though it will certainly be interesting.

When we have fully functional molecular level manufacturing, the concept of trade really loses any meaning. What possible reason would anyone have to exchange anything?

There are many ways in which the whole idea of markets is coming, and must come, to an end. Control by money has a large and exponentially growing set of problems with it, and a diminishing set of useful outcomes.

The question then becomes, what alignment and agreements can we generate to create interesting interactions and possibilities together?

[followed by]

Hi Dobermanmac

Most of what you write I agree with, with a couple of notable exceptions.

I’ve been thinking about this stuff for 41 years. I have a fairly active mind. I can imagine quite a bit. My imagination doesn’t appear to have any bounds, only current limits of exploration.

No one can keep up with all technological and thought breakthroughs. Anyone who thinks they can is deluding themselves.

My concern is not technology as such, it is the systemic response to technology.

There appear to be infinite possible levels of awareness.
It appears that any level of awareness is capable of trumping any other level of awareness in specific circumstances.

Neither awareness nor technology is any absolute guarantee of survival, and both do seem to increase probabilities.

In the widest of strategic senses, it seems clear that cooperation has the highest probability of long term survival.

People can adapt if they are given appropriate contexts.

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) - see www.tedhowardnz.com/money

2 Responses to Over-population?

  1. anniepani says:

    All these theoretical considerations are fine, but in my own lifetime the population of the Earth has tripled.


    • True, and it is all happening in poor places.

      Places where people have real choice, and are not dealing with simple survival issues, are already below replacement.

      So yes – real issue, and the solution to it is universal wealth, not continued poverty.


Comment and critique welcome
