On Jordan Peterson’s recent comments

[ 12/November/21 – 12 11 21 palindromic day]

I agree with a lot of what Jordan says. It is complicated; there are no low-resolution models of it that work in all contexts; and there are solutions, and both Jordan and Bill hold parts of them.

We do need cooperation, at scale (all scales, all levels and classes of agent), and cooperation is nothing like domination.

One of the pre-conditions of cooperation is actually using our automated systems to meet the reasonable needs of everyone for things like clean water, nutritious food, adequate housing (adequate for all classes of risk – storms, fires, earthquakes, tsunamis, etc – that doesn’t necessarily mean resisting each of them, but it does require effective escape plans for all that cannot be resisted), education, communication, transport.

That will ensure that new people do not have brains crippled by inadequate nutrition during development; and nutrition is not the only thing that can cause severe developmental issues – the others need to be addressed also. So, as Jordan says, it is “complex man!”.

And he is far more correct than even he suspects about the need to value individual life, and he is correct that part of that is valuing group systems of risk mitigation, and both of those contain multiple levels of deeply complex systems, nothing even remotely simple, ever, even in theory – eternal evolving complexity is a necessary attribute of the kind of complexity that we are (it is actually a systemic necessity that we must eternally be more complex than we can know – and going down that rabbit hole can be a bit of a mind bend the first time one does it, and it needs doing, repeatedly, with every new class of tools for assessing and managing complexity that one becomes familiar with).

So Jordan is right: the concept of “Rights” and our “Economic systems” are both extremely complex, and perform multiple levels of essential functions; as do many of the deeply complex levels of social systems, in their checks and balances on the worst of the pathologies inherent in the very concept of systems (in so far as systems are always and necessarily an over-simplification of the complexity present, and therefore always necessarily inappropriate in critical ways in some contexts). Personal responsibility is required from all of us if we are sufficiently confident that we are in fact in one of those contexts – at whatever level of abstraction one is able to achieve, this is always the deepest level of responsibility. I like the multileveled way in which Jordan has explored lower integer levels of manifestations of this in cultures, and it seems to be capable of infinite levels, though human brains have issues even with double-digit levels of abstraction, let alone 3 or 4 digit levels.

So to me, climate change is trivially easy to solve in a technological sense, but only once global cooperation between all levels and classes of agents is achieved. Without such cooperation, any “solution” is worse than the original problem (which is something Jordan quite clearly expresses – he just doesn’t see any usable path to such cooperation, I do).

And that is difficult because there is no way to do that when critical systems are based in competitive market values – as markets necessarily devalue any real abundance to zero, and we absolutely require a real abundance of all critical factors (we have it for air, and the market value of air is zero; which means that market measures do not value that abundance, and cannot deliver any more like it, not out of their own internal incentive structures).

And to be explicitly clear, I am not advocating any level of central control or hegemony. I am being explicit that survival requires cooperation and respect for diversity at, and between, each and every level.

So we absolutely require fundamental change to economic and political systems, and we need to take all of Jordan’s warnings about the dangers of such things very seriously, and consider that such dangers may in fact be far greater than even he realizes, and still be necessary for long term survival.

So NOTHING SIMPLE – and it is doable, and it is necessary, and it does in fact seem to be what is required of us all, at present, each to the best of our individual limited and fallible abilities.

Posted in Ideas, Our Future | Tagged , , | Leave a comment

Markets and values

[ 10/November/21 ]

The responsibility to take reasonable action to ensure the life of everyone always outweighs any other personal desire for freedom.

Freedom is a vital part of being human, but it must have responsibility if it is to survive long term, and it is second to life itself (and both of those things need universal application).

The automated productive capacity available today is easily able to meet the reasonable needs of every person on the planet for clean water, food, shelter, education, transport and communication. The reason that we don’t is down to the incentives of markets, and the fact that the market value of any fully met need drops to zero (think of the market value of the air we breathe).

Markets only work when there is genuine scarcity.

Today we are creating scarcity to allow markets to function. That is not a stable solution to this very complex problem.
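The point about fully met needs losing all market value can be sketched with a toy linear demand curve. All of the numbers and the specific curve below are illustrative assumptions, not figures from the text; the point is only that exchange value falls to zero once supply saturates demand, however large the use value remains.

```python
# Toy inverse demand curve: why a fully met need has zero market
# (exchange) value even when its use value is enormous.
# choke_price and slope are purely illustrative numbers.

def market_price(quantity_supplied, choke_price=10.0, slope=0.1):
    # Price falls linearly as supply rises, floored at zero.
    return max(0.0, choke_price - slope * quantity_supplied)

print(market_price(10))    # scarce good: positive price -> 9.0
print(market_price(50))    # less scarce -> 5.0
print(market_price(200))   # abundant, air-like good: price is 0.0
```

On this sketch, a market signal cannot distinguish "worthless" from "abundantly met", which is the incentive failure the paragraph above describes.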

If we are to retain markets, then we need mechanisms to ensure that everyone always has enough money to have all their reasonable needs met. If they want more than that, then they have to play the market game. That is doable.

It needs to be done.

Sooner, rather than later.

Posted in economics

Will the world end

Will the world really come to an end?

[ 9/November/21 ]

Provided that humanity generally can understand that:
all levels of the evolution of complexity are predicated on new levels of cooperation, and
cooperation (and the levels of complexity based upon it) requires evolving sets of cheat detection and mitigation systems to survive, and
that freedom is essential, and that every level of freedom demands appropriate levels of responsibility if it is to survive; then,
given the levels of technological development in the last 200 years, and what seems likely in the foreseeable future, it seems entirely probable that the earth will still be around until the end of eternity.

As one simple example of how that might happen, it is not that difficult to use a “gravity tug” to move the orbit of the earth (if one has a few million years to do so).
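A back-of-envelope calculation gives a sense of scale for the gravity-tug claim. The physical constants below are standard values; the 5% orbit change and the ten-million-year timescale are illustrative assumptions of mine, not figures from the text (the actual gravity-tug proposal delivers this energy incrementally via repeated asteroid flybys).

```python
# Rough energetics of moving Earth's orbit outward with a "gravity tug".
# Constants are standard; the 5% shift and 1e7-year timescale are
# illustrative assumptions for scale only.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg

def orbital_energy(a):
    # Total (kinetic + potential) energy of a circular orbit of radius a.
    return -G * M_SUN * M_EARTH / (2 * a)

a1 = 1.496e11        # current Earth-Sun distance, m
a2 = 1.05 * a1       # target orbit 5% further out

delta_E = orbital_energy(a2) - orbital_energy(a1)   # ~1.3e32 J
seconds = 1e7 * 3.156e7                             # ten million years
print(f"energy needed: {delta_E:.2e} J")
print(f"average power: {delta_E / seconds:.2e} W")
```

The required average power (~4e17 W) dwarfs today's total human energy use (~2e13 W), but is a tiny fraction of the Sun's ~3.8e26 W output; which is roughly why such a project looks feasible only on multi-million-year timescales, as the sentence above says.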

A long term future is entirely possible, if and only if, we generally understand that competition is only survivable if firmly built on a cooperative base.

The idea that competition can solve every problem is pure myth, unsupported by either logic or experimental data. It is essentially a lie told to help sustain a set of cheating strategies.

What is required is cooperation between multiple levels and sets of diverse agents. Any attempt to impose any sort of singular rule has more dangers than benefits.

And all systems need to be cooperative to the point that they ensure the lives and liberties of all of the agents within them, and all liberties come with tests of reasonableness and responsibilities.

So it seems to me entirely possible that the world will last a very long time, much much longer than the sun.

Posted in Our Future

What could end world hunger?

What could possibly end world hunger?

[ 8/November/21 ]

Would be nice if it was as simple as Just Dan’s answer, but it isn’t.

Sure, we produce enough food to feed everyone now, and yet even in the wealthiest and most capitalist countries there are hungry people.

Markets and capitalism are not solutions to all problems. They are better than any centralized system, and were until quite recently a reasonable approximation to an optimal solution – but advanced automation has fundamentally broken the utility of market measures of value as a generalized planning metric.

There are now fundamental issues with our monetary systems that must be addressed, and they are not at all simple, and any attempt at a simple solution will fail in a similar fashion.

Corruption is a problem, at every level of society.

People generally need to appreciate that cooperation is fundamental to every level of complexity, and any level of competition that is not firmly based in cooperation, will necessarily fail.

And cooperation is vulnerable to failure from multiple levels – from the simple “free rider problem” to vastly more complex and subtle forms of exploitation – and all are at some level “cheats” on the cooperative. Thus every level of cooperation requires evolving ecosystems of cheat detection and mitigation systems.
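The free-rider dynamic and the value of cheat detection can be illustrated with a toy iterated prisoner's dilemma. The strategy names and payoff matrix below are the standard textbook ones, used here only as a sketch of the claim: unconditional cooperation is fully exploited by a cheating strategy, while even the simplest cheat-detection-and-mitigation rule (tit-for-tat) limits the damage to a single round.

```python
# Toy iterated prisoner's dilemma: why cooperation needs cheat detection.
# Standard payoffs: both cooperate -> 3 each; both defect -> 1 each;
# defector against cooperator -> 5 vs 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(my_hist, their_hist):
    return "C"                      # no cheat detection at all

def always_defect(my_hist, their_hist):
    return "D"                      # a pure "cheating" strategy

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then mirror the partner's last move:
    # the simplest cheat detection and mitigation system.
    return "C" if not their_hist else their_hist[-1]

def play(s1, s2, rounds=100):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

naive, cheat = play(always_cooperate, always_defect)  # naive fully exploited
tft, cheat2 = play(tit_for_tat, always_defect)        # damage limited to round 1
print(naive, cheat)    # -> 0 500
print(tft, cheat2)     # -> 99 104
```

The naive cooperator ends with nothing while the cheat scores maximally; with detection, the cheat gains almost nothing over mutual defection. Real social cheat-detection systems are vastly more complex, but the structural point is the same.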

If you think about our bodies, we are each a vast cooperating colony of cells. We have a name for any of those cells that switch to pure competition – it is cancer, and if left unchecked it is fatal to the cooperative (us). And not too many people understand that most of the cells in that cooperative are not genetically related to us – they are bacteria – and they are required parts of the cooperative that is us.

Evolution tends to punish the slowest far more harshly than the slightly inaccurate, and that fact gives us all a strong set of biases to prefer simplicity over complexity if at all possible. And some things are irreducibly complex, and have no simple answer.

Market based capitalism is not a stable solution to maintaining human society long term, even though it was better than any form of central control.

Whatever we develop and deploy to replace it, has to be similarly decentralized, has to accept and respect diversity (all levels) and has to ensure that all individuals have reasonable levels of security, resources and freedom.

And on the subject of freedom, it has to come with responsibility.

Freedom without responsibility is always destructive, all levels.

So we find ourselves in a time of profound change, where modern automation has fundamentally changed the context that made markets a reasonable approximation to an optimal solution. And we absolutely need modern automation to be able to create mitigation measures for a set of risks that most people are probably better off not thinking about, but that are already well characterized for those with an interest in such things.

So it is market capitalism that has to change – no ifs, buts, or maybes about that – if we as a species want to survive long term. And to be explicitly clear – this is not, in any way, to be taken as any sort of endorsement of any form of centralized system. Long term stability demands decentralization at every level, and we do need cooperation and coordination (all levels). So it is deeply complex.

Posted in economics, Ideas, Our Future

Major risks in the 3 waters reforms.

[ 6/November/21 Local Body Governments have failed with water infrastructure …. people died in Havelock North, sewerage spewing into harbours around the country and onto Wgtn city streets …. 3 Waters – drinking water, rain water and shit No Brainer, give over to Crown management ..]

Sorry Ngaio – this is not accurate.

Certainly, there have been some instances of failure of water systems at local level, and people have died as a result. But it wasn’t a failure of the entire country, and not everyone was affected.

Certainly there are multiple levels of failure to effectively maintain critical infrastructure.

These are real issues, but, central control is not a stable answer.

It doesn’t matter how good the people and systems are at the center, they will be limited by the amount of information they can process in the time available. That limit forces them to simplify the information flows, and inevitably critical information gets lost.

So centralisation is a high risk strategy.

What actually works in practice is coordination, cooperation, sharing of information and techniques, and empowering those at the “sharp end” (those actually dealing with physical reality in real time) to make the best guess they can, with the best tools and techniques they can reasonably have.

We do not need central control.

We need empowerment and coordination.

We need systems that promote both creativity and responsibility, all levels.

We need to accept that some degree of risk is an eternal part of life, and we need to reduce risk to a reasonable minimum where that is reasonably possible. And reality seems to be sufficiently complex and fundamentally uncertain that there will always be uncertainties in making such decisions, and we will all have to use a test of reasonableness, and there will always be mistakes.

I agree that we need to do a lot better in many instances, and in many instances the issues have come about because people have been following some inappropriate set of rules, rather than using their senses and acting responsibly.

We have got to stop punishing people for mistakes, and accept that mistakes are an essential part of any process of exploration and learning (any and all levels). Sure, we can’t let people make the same mistake too often – some degree of learning and responsibility is required – and it is deeply complex.

So while I am acutely aware of the need to do much better in the “water” space; I see the approach of central control as the worst possible response, one that essentially guarantees failure, and also guarantees that such failures will not be acknowledged or learned from.

It is much easier to recover from small mistakes that affect small groups than it is to recover from total system failure.

I see no evidence of our centralised bureaucratic systems understanding the problems of managing complexity sufficiently to trust them with this essential function.

We need to do a lot better than this. It does not seem to me to be a real solution to a very real problem. It is actually just more of the problem, but at a bigger and more dangerous scale.

Posted in economics, Our Future

On Sowell on fairness

[ 6/November/21 Dirk shared a Thomas Sowell quote “Since this is an era when many people are concerned about ‘fairness’ and ‘social justice,’ what is your ‘fair share’ of what someone else has worked for?”]

Many people throughout history have realised that their existence and their ability to produce is predicated upon the cooperative efforts of many others in their past and present.

That realization, that it is cooperation that is fundamental to the existence of complexity such as we are (at multiple levels), is foundational to multiple levels of the concept of “fair”.

The idea of “fair” is often taken beyond its limits.

The useful limits of the concept of “fair” are about reasonable levels of security, and reasonable levels of freedom and access to resources.

And all notions of “fair” come with notions of “responsibility” if they are to be survivable.

So an economic system that positively reinforces most slight asymmetries into massive asymmetries is not a stable or survivable system.

There does not need to be any limit on what anyone can produce or own, provided that they contribute back to the cooperative that gave them the asymmetries that they have. Everyone builds on the shoulders of the giants who came before, and of those who stand around them. Way over 90% of what we each are is the result of the efforts of others, not us.

We do have real creative abilities, and those are extremely important, and if we are honest about how much we create, vs how much is given to us, we all get more than 90% from others, from language, from culture, from the accumulated wisdom of science and technology.

So the idea that anyone’s creations are entirely their own, is an over simplistic nonsense; and yet our freedom and our creativity are incredibly important; and they have to come with responsibility.

Any level of cooperation is vulnerable to exploitation by cheating strategies unless effective detection and mitigation systems are in place. At their best, culture, law, politics, education, healthcare, morality, are contextually appropriate instantiations of such detection and mitigation systems. At their worst each can be captured by cheating strategies.

At its best, taxation is a way of supporting the necessary sets and levels of systems required to sustain the levels of cooperation that make complexity, such as we and our social and technological systems are, possible.

At its worst, it is captured by some level(s) of cheating strategy and used for exploitation.

Our role, as responsible entities, is not to fight against necessary taxation, but to ensure that we are part of the solution, part of maintaining cooperation, part of detecting and mitigating cheating on the cooperative, at whatever levels we are recursively able to conceive and participate in.

And what is appropriate is always highly context sensitive – complex systems have that unsettling attribute.

At every level, we each have a personal responsibility to be part of detection and mitigation systems against cheating; at the same time as we each have personal responsibilities to protect life and liberty (our own and others, and all such things have contextual tests of reasonableness). At the deepest of systemic levels they are the same thing.

And it is deeply complex, and any attempt to oversimplify it will result in some form of pathology.

And it seems beyond any shadow of reasonable doubt that we all, necessarily, have to simplify the complexity within which we exist to make any decisions in any reasonable time. So we are all, necessarily, dealing with eternal uncertainty.

All any of us can do in practice is make the best guess we can after such consideration as we can reasonably manage given the information and time available to make any particular decision.

Taxation is a necessary part of maintaining complexity, and it is no guarantee that the systems we are funding are free of cheating strategies – we all need to use all the suites of social and political systems available to us to make our best endeavours.

There is no requirement that life be simple, only evolutionary pressures for us to create simple understandings of it.

[followed by taxation is just another kind of theft]

No Dirk,

That is a fundamentally wrong simplification of an extremely complex reality.

It is over simplification to the point of invoking existential level risk.

It is deeply more complex and essential than that.

You must be able to see that.

Try to imagine existence with no roads, no infrastructure, no rules, no police, no army, no technology (developing complex technology requires the sorts of agreements over time that legal systems require, and legal systems are funded by taxation – as one small part of a very complex reality).

[followed by]

Hi Dirk,
Actually look at where you see private infrastructure.

Look at Somalia.
Essentially no public system of laws, just powerful individuals looking after their own short term self interest.

Is that really a model you want to follow?

Do you really see modern infrastructure and security in that place?

Get REAL !!!

There is always, necessarily, a requirement for cooperation to build real security and real complexity.

That always, necessarily has systems that require maintenance.

Taxation is one manifestation of such necessary costs.

Accept that.

Having accepted that, also accept that raw cooperation is always vulnerable to exploitation, and thus requires evolving ecosystems of cheat detection and cheat mitigation systems. At their best all institutions embody multiple levels of such things.

And all things have failure modalities.

The most common failure modality is over simplification of the irreducibly complex.

Cooperation does not mean giving all you have to someone else.

Cooperation means making reasonable, contextually relevant, efforts to ensure that essential systems do in fact meet the reasonable needs of all levels and instances of agency. And that is in fact deeply complex in today’s reality.

Without any form of cheat detection and mitigation systems, all systems devolve to some form of survival of the most powerful and most exploitive, which eventually leads to total system failure. When you actually look very closely at how complexity evolves, at every level it is based upon new levels of cooperation, and those levels always require evolving ecosystems of attendant systems for survival. That is just reality. No point in trying to argue with it; you will lose, but only 100% of the time.

So there is no shortage of pathologies into which institutions can fall – and it is our job as individuals to identify those, to the best of our limited and fallible abilities.

And we all have a fundamental responsibility to ensure that any competitive games that we do engage in are firmly based in cooperation to ensure the survival and reasonable prosperity of all. Nothing else actually survives long term, ever.

Posted in economics, Philosophy, understanding

On risk mitigation

[ 6/November/21 ]

The necessary corollary is that unless we transform our society into one that has a fundamentally cooperative base to all competitive games, then we will self terminate (one way or another – AI is just one of many such risks).

There simply is no way for creative complexity to survive all out competition (not long term). We are too good at creating destructive implements, and it is far easier to destroy than create.

The more people that get this, that it is fully recursive, all levels, always; the greater the probability of our long term survival.

AI can help with that, and it is definitely not a default output of market based incentives.

[followed by]

Robin Indeededo

Cure = return system to long term survivable state.

Cancer = any state within a cooperative of cells (body) where some subgroup of cells stops communicating with neighbors about appropriate limits to growth, and starts consuming all available resources for reproduction.

As quickly as possible = developing a full understanding of all the nuances of cellular communication within the cooperating cellular colonies that are individual human beings; cataloging all the systems; identifying all the actual and potential pathologies of such communication that lead to “cancer”; creating effective detection systems for each of them; and then creating mitigation strategies that effectively return all such cells to a cooperative state and lead any excess ones to invoke apoptosis (cell suicide for the sake of the colony – there are many pathways to invoking this when a cell is no longer required). Then deploying such systems at scale so that they are readily available for any and all who need them. Ultimately this will involve a suite of nanotechnology working cooperatively within our bodies, searching for any such occurrences.

It is a necessary subset of the suite of systems required for indefinite life extension of individual humans.

All of this was obvious to me as I completed undergrad biochem in 1974.

The really difficult question was the one that occurred immediately after I realized that indefinite life extension was actually a reasonable probability:
What set of social, political and technological institutions are required to give potentially very long lived individuals a reasonable probability of actually living a very long time with reasonable degrees of freedom and resources?

I have spent over 40 years exploring strategy spaces for stable answers to that question.

I have proven to my satisfaction that there are no solutions in any competitive space (nothing that even approached 0.1), and there are no absolute solutions in any real space – all things are expressed in terms of probabilities.

And there do appear to be solutions in cooperative spaces that are many orders of magnitude more probable than anything competitive (well over 0.9).

So solving “cancer” as in establishing and maintaining reliable communication at the cellular level, is a fundamental precondition of indefinite life extension, and with Alphafold2 in the modelling mix it is now entirely solvable.

Posted in Our Future

Critique of Keller on faith and belief

Al posted a quote from Timothy Keller

[ 5/November/21 ]

Keller is not being as accurate as is required in his writing.

All of my views are probability based, and I didn’t start out that way. Like everyone, I necessarily started with the biases inserted by biology and culture, and those, mixed with experience, evaluation and choice, got me going with a very simplistic model of reality.

And faith (like most words) is a word that can have many variations of meaning. At one extreme it can mean accepting that something is so in the face of any and all evidence; and at the other end of that spectrum it can be a kind of operational confidence derived from repeated effectiveness.

The idea of something being “provable” has a similar sort of spectrum. If someone assumes that the simplest of all possible logics is how the world works, and that some set of states can be known with absolute certainty (neither of which accord with the observations of modern science), then one can “prove” something with absolute certainty.

The other sort of proof is probabilistic, and accepts fundamental uncertainty in all things, and then tries to find the greatest degree possible of operational confidence using the best tools and the best available datasets. Considered in this manner science is not about “Truth”, but is rather a sort of eternal process of successive approximations to whatever it is that objective reality might actually be. Science then is an eternal process of becoming less wrong as circumstances reasonably allow.
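The probabilistic sort of proof described here can be sketched as Bayesian updating: each successful prediction raises operational confidence, but confidence never reaches certainty. The likelihood values and trial count below are illustrative assumptions of mine, not numbers from the text.

```python
# Sketch of "probabilistic proof": confidence in a hypothesis grows with
# each successful prediction via Bayes' rule, but never reaches 1.
# The likelihoods (0.9 vs 0.3) and 5 trials are illustrative assumptions.

def update(prior, p_obs_if_true=0.9, p_obs_if_false=0.3):
    # Bayes' rule: P(H | obs) = P(obs | H) * P(H) / P(obs)
    p_obs = p_obs_if_true * prior + p_obs_if_false * (1 - prior)
    return p_obs_if_true * prior / p_obs

confidence = 0.5              # start maximally uncertain
for _ in range(5):            # five successful predictions in a row
    confidence = update(confidence)

print(round(confidence, 6))   # -> 0.995902: high operational confidence
print(confidence < 1.0)       # -> True: never absolute certainty
```

Each update here multiplies the odds by a fixed factor, so confidence climbs steeply yet asymptotically – "successive approximations" rather than arrival at Truth.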

So Keller’s piece above sort of has an inkling of something, but so oversimplifies the complexity actually present that it ends up being essentially wrong, even as it contains some elements that are very useful approximations.

And I do agree with the conclusion, that all world views are approximations to some degree, and there is a need to respect any that are not actually a significant threat to the life and liberty of anyone, and that all liberty has to have limits, and that liberty without responsibility is destructive; and that there is therefore room for discussion about what responsibility looks like in any particular context.

And for me, the concept of God(s) has outlived its usefulness, to about the same degree as the concept of the sort of value measured in markets being a useful proxy for human value more generally.

And I do accept that all complex systems need morality for survival; but that does not require Gods, it can be derived from the complex strategic mathematics of the survival of complex cooperative systems in fundamentally uncertain contexts (like this universe we seem to find ourselves in appears to be).

[followed by in response to some comments by Mark Maney]

How did I establish probabilities?

Initially I accepted the notions given to me by the genetics that gave me this body and brain, and the culture I happened to get born into, and the language I learned.

As I studied more and more, read more, tried out more things, it became obvious that many of the things I had accepted as true couldn’t possibly be true – as they contradicted other things. So I spent quite a few years digging into various ways of arranging and interpreting things trying to work out which one was right.

After a while it became clear that it was far more probable that all of them were essentially wrong in some key aspects than any of them were actually right.

It also became clear that we are computationally limited entities, that have to live in, and respond in, the “real world” (whatever it actually is) and that situations often arise where the time and energy available to make decisions are strictly limited – so simple ideas that normally work have a great deal of survival value, and will tend to be strongly selected over time, provided that they do in fact usually work.

So there are multiple levels of pressure on us to have and to use the simplest most reliable model we can in the context we find ourselves. What seems “simple and reliable” will vary a great deal with context.

I was a particularly geeky kid, with strong interests in how things worked, in biology, physics, math, chemistry. I started university at 17 and went straight into second year biochemistry. Started to get into computing. Read a lot. All the time reviewing and updating my synthesis and critiques of all that I had read. Developed an interest in how brains work, then in the field of artificial intelligence. A lot of really complex mathematics, and really abstract ideas about the nature of computation and the nature of experience.

Somewhere in that process (I can’t recall exactly when) it became clear to me that the classical notions of Truth and Causality are very probably just often-useful simplifications of whatever it is that reality actually is. The evidence sets I have seem to clearly indicate that reality is so complex and fundamentally uncertain that all any computational entity (human, AI or other) can possibly have is some useful approximation – some simplification.

It is, in a very real sense, the opposite of faith.

[followed by]

No – not basing understandings on testimony, but on the evidence presented.

I was an unusual student.

Not many of my teachers appreciated my questioning everything they presented, and asking for the evidence (some, but not many).

There is a sense in which everything has a degree of trust in it, and the best trust is backed by evidence and experience. One does need to trust enough to try something, but not so much as you ignore evidence.

I did agree with Keller’s final point, that we should cut others a bit of slack; but not with the way in which he derived it 😉

[Followed by]

But that is not what I did.

I used the books as guides for where to find evidence, then I did the tests myself.

I used all of my senses to gather information.

I didn’t necessarily do all the tests that others reported. If they were important, then I would test enough myself to give me confidence in the rest of what they said – and much of the critical stuff I tested myself.

I use testimony as an indicator of where to test – not as evidence necessarily.

I fully admit I was unusual in that.

I really annoyed more than a few of my tutors.

[Followed by]

I don’t have probabilities of 1.

Doesn’t seem to matter how often I write that.

Actually, we don’t need to believe in anything.

It is perfectly possible to live with uncertainties in all things.

I get that that notion is unfamiliar to many, and unimaginable to some; and that doesn’t actually make it any less real.

Posted in Brain Science, Nature, understanding

Milton Friedman quote on markets

[ 5/November/21 Dirk posted a Friedman quote “The most important single fact about a free market is that no exchange takes place unless both parties benefit”]

Kind of true, to a limited degree.

There are real issues around systemic asymmetries of benefits from exchanges; and there are also foundational issues when it is possible to generate universal abundance, as there can be no profit based incentive to do that currently.

And both of those factors are hugely important now and going forward.

And do not let my criticisms in any way give the impression that I favour any sort of central control – I don’t.

I like the decentralised aspect of markets, but the scarcity based nature of the value measure delivered by markets needs to be updated to give a positive value to abundance (instead of the negative value currently returned). Aspects of the money generation systems need to be updated also – but that is another very complex area. So fundamental reform is required, and it is not simple.

Posted in economics

Daniel S asked about solutions to existential questions

Daniel S asked: “Any good solutions for how to avoid rampant catastrophes with the decentralized destructive capacities that continued exponential technology empowers… that don’t involve ubiquitous surveillance?”

[ 3/November/21 ]

Not quite sure what you mean by “ubiquitous surveillance”.

The ancient adage – “the price of liberty is eternal vigilance” – is a form of looking.

If we want liberty, then we all have a responsibility to keep looking, both within and without, all levels.

Exponential technology also empowers our ability to look, and to notice what needs to be noticed, and we need that.

I am certainly no fan of any form of centralized universal surveillance, and we do need decentralized distributed vigilance. There really is no substitute for accurate timely information when it comes to making useful assessments of risk – you, above all others I know, must be conscious of that fundamental aspect of any sense-making infrastructure.

[followed by]

Hi Daniel,

Thanks for the clarification.

It seems clear to me that there can be no absolute guarantees in any of this, and the best probabilities come from multiple levels of independent trust networks, and operate with multiple levels of “safe to fail” experiments.

Every level of cooperation requires cheat detection and mitigation systems, and they all require eternal exploration of new strategies; so become evolving ecosystems within themselves – every level.

Infinities have that unsettling characteristic of having no end. However much one has explored, it is as nothing compared to what remains unexplored.

There is no possible process that works in all instances.

We must all shoulder the burden of responsibility, eternally (or at least for as much of eternity as we manage to survive in). There just does not seem to be any other logical possibility (not in any of the classes of logics I have explored).

And we can certainly get way ahead of the common sets of issues; and the uncommon will always be there to bite us from time to time.

A highly dimensional structure of interlinked chains of trust and communication seems to be the most secure model possible. And the greatest threat to sapient life seems to me to be an over-reliance on markets as valuation tools – and we have spoken at length about just how deeply complex that particular issue is.

Education, genuine respect for diversity, for life, for liberty, a high basic standard of living guaranteed to all, and an acknowledgement that freedom always requires responsibility if it is to survive very long at all; all seem to be necessary aspects of a survivable future. The old notion that markets can solve all problems has been falsified. And in the absence of modern automation they were a reasonable tool; but no longer.

It is a deeply dimensional problems space.

[followed by]

Hi Daniel,

I have some fundamental arguments with Nick.

I can agree to a commonsense idea of truth, where it ought more technically to be called something like “usually reliable useful approximation to whatever the objective reality actually is” – so I won’t push that one any further.

I have a real problem with his definition of “Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm”. Any useful piece of information or technology meets that criterion. For example, if you know when someone you want to kill regularly walks close to some road, you can time your passage down the road and use a car as a weapon. Another example: any competent chemist can create a decent explosion from the contents of most kitchens (but most don’t).

Defining anything that broadly condemns us all to eternal ignorance.

Nick states “It may be worth stressing, however, that even if one has an extremely strong intellectual commitment to truth-seeking and public education, one can still legitimately and in good conscience explore the question of how some knowledge might be harmful. In fact, this very commitment demands that one does not shy away from such an exploration or from reporting openly on the findings.”
Any and all knowledge can be harmful or useful.
Knowledge and technology in and of themselves are morally neutral; it is what we choose to do with them that matters.

I can certainly go along with the idea that people need to demonstrate levels of responsibility before being given levels of freedom. And that is a deeply dimensional questions space for which there is no simple set of systems or procedures that deliver reliable results. An element of eternal vigilance is eternally required of all of us. Trust is required, and so is a certain amount of testing. Trust without verification will be exploited – we are back into the ever evolving ecosystem of cheat detection and mitigation strategies and systems required for the existence of cooperation and complexity.

Almost all of Nick’s examples are of a competitive nature.

What he fails to appreciate is that it is the very notion of competition that is the threat, not any particular set of knowledge.

He has failed to appreciate that all systems based in competition necessarily drive both freedom and complexity to some set of minima on the available “landscapes”. Freedom and complexity can only survive and prosper in fundamentally cooperative spaces.

It is almost as if he is deliberately trying to hide this piece of information, as if it was in some way itself a threat, rather than a counter to threats.

It is the very idea of fundamental competition, the very idea of enemy, that is the fundamental hazard.

There is only one survivable game in town, and that is fundamental cooperation; and that has risks, and it requires evolving ecosystems of cheat detection and mitigation systems; and it is many orders of magnitude safer than any competitive approach. The logic of that is undeniable.
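The claim that cooperation plus cheat detection outperforms mutual competition can be sketched with the standard Axelrod-style iterated prisoner’s dilemma (a minimal illustration of the logic, not anything from Bostrom’s paper): “tit for tat” cooperates by default but retaliates against a detected cheat, while “always defect” models pure competition.

```python
# Minimal iterated prisoner's dilemma (illustrative sketch).
# "Tit for tat" = cooperation with cheat detection and mitigation:
# cooperate first, then mirror the opponent's last move.
# "Always defect" = pure competition.

PAYOFF = {  # (my move, their move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # Cooperate by default; punish the last detected defection.
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

# Two cooperators with cheat detection prosper together ...
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
# ... while mutual defection locks both into the minimum payoff.
print(play(always_defect, always_defect))  # (100, 100)
```

A lone defector can still slightly exploit a cooperator in a single pairing; the security comes at the population level, which is exactly why the cheat detection and mitigation ecosystems must keep evolving.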

I am intimately familiar with being ostracised and bullied as a nerdy kid who knew too much, but I just hardened up and kept on learning. Didn’t take long for the bullying to stop. I was much more useful as a friend than as an opponent (and they all knew there were things I would not do).

His entire health insurance hazard is an example of the failure of the concept of markets, not a failure of information.

I just got fed up with reading it at page 14. Every example was based in competitive games – and there is no high probability survival for anyone in that space.

The only chance any of us have for long term survival is cooperation. And as a necessary first step that means meeting the reasonable needs of everyone for the necessities of life (food, shelter, water, safety, medical care, information, communication and transport). That isn’t actually that difficult.

So No.

Nick seems to have “gone over to the dark side” with that paper.

I acknowledge that all freedom demands responsibility, and that at each level there is a reasonable need to demonstrate responsibility.

I acknowledge that some things are just not safe to have around (like nuclear weapons, and various classes of viruses and other organisms).

But beyond that – once we have demonstrated reasonableness and responsibility, then we all need access to accurate information – all levels. The current competitive, lying, thieving game-space is not survivable; and I for one want to survive if at all reasonably possible – and indefinite life extension seems to me like not only reasonably possible but also relatively easy to deliver within the next 15 years.

[separate thread – reply to Bruce Dayman]

Hi Bruce,

You are not exactly wrong; it is just a far more deeply dimensional problem space than that.

We have multiple levels of awareness present, and it seems probable that the number of levels present will only increase.

We may be able to increase the number of people at higher levels, and that will require a lot of effort to give people generally sufficient time and interest to make that journey. And part of that will be a complete rework of our educational systems.

We certainly need to make some substantial alterations to most of the social systems present, and in doing that we must have diversity. The very idea that there is only one true or right way is the greatest danger. We need diversity, all levels – and that will be hard for the more conservative (all levels).

And I am clear that our existing systems are deeply more complex than most people’s conception of them, so it is not all bad; and automation does change many of the fundamental assumptions underlying many levels of our existing systems.

Posted in Our Future, understanding