A link from Foundations of Logic Group – NEWS Deadly Incurable Illness Spreading Across The Globe Infecting Nearly A Million People
Just one more of the many examples where market incentives are directly at variance with human survival. It is in the interests of drug companies to sell more drugs, not to keep stores of them only to be used in dire emergencies. Hence we keep breeding resistant strains.
We need to go beyond markets, and soon, if we wish to survive.
It really is getting that simple and that urgent!
Agree in part only.
Yes, markets do measure a particular type of value: value in exchange. As such, they give a real-time measure of the arbitrage among the raft of different values that individuals bring to a marketplace.
So to that degree, and to that degree only, I agree with you.
But that is most certainly not all that markets do.
Markets are part of a very complex stack of systems of value, and as such have some degree of influence on the valence of every set of systems in that stack.
For many entities, the abstract measure of value that markets use (the myth of money) is the major valence influence.
To the degree that entities use money as a planning metric, as a proxy for all forms of value, then to that degree markets have a huge influence on systemic and human behaviour at many different levels.
The idea that the measure of value markets deliver (money – value in exchange) is a reasonable proxy for human value more generally is losing cohesion to the degree that automation is taking over the production and delivery of goods and services.
It is a highly dimensional problem space – potentially infinitely so.
We all have many different levels of values.
We have survival values – food, water, shelter, clothing.
We have various genetically imposed values, like sex, and various aspects of sociality, and various preferences for taste, smell, texture, pattern, security, etc.
We have a potentially indefinitely extensible set of preferences in different contexts.
All of these will be in a context-sensitive hierarchy – so the hungrier we are, the higher finding food sits in our hierarchy of values.
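The idea of a context-sensitive hierarchy can be made concrete with a toy sketch. Everything here – the named values, the base weights, the urgency multipliers – is an illustrative assumption, not anything measured; the point is only that a single urgency signal can reorder the whole hierarchy.

```python
# Toy model of a context-sensitive hierarchy of values.
# Each value has a base weight; current urgency scales it,
# so as hunger rises, "food" climbs toward the top.
# All names and numbers are illustrative assumptions.

def rank_values(base_weights, urgency):
    """Return value names sorted by base weight scaled by current urgency."""
    scored = {v: w * urgency.get(v, 1.0) for v, w in base_weights.items()}
    return sorted(scored, key=scored.get, reverse=True)

base = {"food": 1.0, "shelter": 1.0, "social": 0.8, "novelty": 0.5}

# Well fed: food drops down the hierarchy, other preferences compete.
print(rank_values(base, {"food": 0.2}))
# Very hungry: food dominates every other value.
print(rank_values(base, {"food": 10.0}))
```

The same mechanism scales to as many values and contexts as you like; it is the reweighting, not the particular list, that matters.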
A market arbitrages all values into a single metric – money.
If people only have enough numbers to meet a very small subset of their needs, then the needs met will tend to be the most basic survival needs, and all others will become subservient. Past a critical point even this fails, and people fall back to very low-level genetic approximations to what would have met those needs, on average over time, for their ancestors.
For those people who have enough numbers to meet all their survival needs, then money can become quite a trivial thing, with little real impact or value in their lives; and that fact can lead to a great deal of perceived injustice.
In our past, most people could develop skills which had value in exchange to other people.
Automation has changed that.
Now very few people (and within a decade, no one) will be able to deliver any good or service for less energy than a fully automated system.
Most people then have no way of generating numbers (money) to use in a market-based system.
We need the automation; we cannot feed everyone without it.
Yet the systems of exchange that were a reasonable proxy for value more generally fail under a context of fully automated systems.
Von Mises was a smart guy, and he developed some very powerful ideas. What I am talking about is way beyond anything he ever conceptualised.
Full automation means being able to have programmed machines doing every part of the system from mining to manufacturing to delivery to operation to recycling, including all the improvements to design and coordination involved in that.
What it means is no requirement for human involvement. If someone wants to be involved, fine. If they don’t, then that works too.
Google is probably further down that path than any other non-governmental organisation, but lots of others are there too; IBM has probably been at it the longest. You will have seen Watson win at Jeopardy.
Who knows exactly what governments get up to with their black operations. I have met a lot of very smart people in the 50 years I have been programming computers and mixing in many top level groups. And while very few of them have told me any details of what they have been up to, quite a bit can be inferred from the gaps.
When you master the skill of sitting in a cafeteria or restaurant or bar or work-space and listening to 12 different conversations simultaneously, then you can learn quite a bit quite quickly.
It is not only imaginable, it is already real within limited domains.
The machines can already maintain and improve better than we can. Right now, they are still more expensive than we are to build, and that is changing rapidly. On current trends, which have been stable for over 100 years, there will not be any jobs left that humans can do better and cheaper than machines by 2035 (all lifecycle costs considered).
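That trend claim can be sketched numerically. The starting cost ratio and the halving time below are assumptions chosen purely for illustration, not data from the text; the point is the shape of the curve, where any steady exponential decline in machine cost must eventually cross a roughly flat human cost.

```python
# Illustrative sketch: exponentially falling automation cost crossing
# a flat human cost. Starting ratio and halving time are assumptions.

def crossover_year(start_year, machine_cost, human_cost, halving_years):
    """Year when exponentially falling machine cost drops below human cost."""
    year = start_year
    while machine_cost >= human_cost:
        machine_cost /= 2.0
        year += halving_years
    return year

# Machines at 100x human cost in 2020, cost halving every 2 years:
print(crossover_year(2020, 100.0, 1.0, 2))  # → 2034
```

With those assumed inputs the crossover lands in the mid-2030s, broadly consistent with the timescale claimed above; different assumptions shift the date but not the inevitability of a crossing.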
For most people, we have already passed that point.
Social inertia, and some level of choice by many individuals, is holding the existing system together; but the internal pressures are building exponentially.
We need as many people as possible, as aware as possible, as quickly as possible; and prepared to think and act independently, for their own and the group that is humanity’s survival.
This is a problem like no other in history in a very real sense; yet in another sense it has some analogies to some of the messages deeply embedded in many religious traditions (and I am a self-declared functional atheist – more of an eclectic humanist/trans-humanist).
Yes – very challenging. Has been challenging me for some 30 years.
I am not talking about a pure machine society.
I am talking about a society that has an expanding range of biological and non-biological sapient entities, and a range of entities that are a mix of both, to varying degrees – humans to cyborgs to AGIs.
And most of the automation will be non-sapient, non-sentient – just machines programmed to do what they do, without any particular intelligence.
Part of being a transhumanist is acknowledging that every sapient entity, human and non-human, biological and non-biological (and anything in between), has the same rights to individual existence and individual freedom, and faces the same demands for responsible action in social and ecological contexts. No right exists without a responsibility; they are two sides of the same coin – always.
[followed by another sub thread – can we control evolution?]
Humanity is now shifting into a new level of problem space that has no historical precedent.
Technology is evolving at a double-exponential rate, and is creating systemic changes for which those on the “conservative” side of the spectrum (and we all have our conservative aspects) can find no direct equivalent in history to guide them. There are some analogies that are workable at some levels (and Jordan Peterson explores some of those quite well), but nothing direct.
Nature doesn’t “always find an equilibrium” (much of nature actually consists of far-from-equilibrium systems that are constrained in some set of ways that give the illusion of equilibrium).
Extinction is actually extremely common – it is the normal fate of most species – well over 90% of species have gone extinct if you look at the geological record.
So don’t look to “nature” for security – you will not find it.
We need to go well beyond any of the systems currently in existence if we want a reasonable chance of living a very long time.
It is a class of problems well beyond what most people are willing or able to consider in detail.
There is no way to escape all problems and risks; and there can be an infinite path of risk discovery and mitigation.
I like the quote from Helen Keller that goes something like – security is mostly a myth, life is either a daring adventure or nothing at all.
To a degree I align with that.
If life becomes just about mitigating risks, then we lose much of what it is to live.
Existence cannot be without risk, and we can learn how to mitigate the worst of the risks from our past, even as we learn about new risks in the future.
The classes of risk we can become conscious of seem to be potentially infinite.