Money Free Party

Money Free Party Facebook Post

I chaired a meeting of Candidates at which Charlotte spoke very well. Richard posted the above.

Agree that we have the technical capacity to provide those goods and services to everyone.
Agree that such is a desirable outcome.

Where we disagree is in the notion that it is simply a matter of stopping using money.
Money and markets do actually perform many levels of very complex and very important functions in our society.
We need to have alternative systems instantiated and tested, before we dismantle the monetary system more generally.
That is not a trivial exercise.
It is doable, and it contains many levels of very complex problems, related to levels of communication, trust and governance – and how those are effectively and safely networked and distributed.

To me, we align on the need to reduce the impact of money on decision making, but not on the minimum necessary conditions for transition, nor necessarily on the need to distribute decision making at all levels.

To me, some sort of Universal Basic Income (UBI) would seem to provide a necessary transition ground from where we are now, to where we need to go.

And it is a profoundly complex set of problems, with at least 5 levels of abstraction present that I have thus far identified.

Not simple – even remotely.

And certainly required, if any of us want a significant probability of living a long time with reasonable degrees of freedom.


What lifts you up?

On the Rise

Mentally, emotionally, or spiritually, what is it that lifts you up?

Communication – as in a concept in one mind transmitted to another mind.


Dangers of wishful thinking

Dangers of wishful thinking.

Hi Justin,

While I largely agree, you clearly haven’t explored the details of my worst fears, as all of them involve existential risk for humanity, and do not involve the option of getting up and trying again. Failure is not an option that one can recover from in my case.

Thus I am persistent, in the face of little agreement, in promoting the idea that the single greatest risk facing humanity is the very notion of valuing things in markets, of exchange value, as it prevents the sort of universal abundance and freedom and security that fully automated systems are capable of delivering to every person alive.

And such freedom comes with responsibilities – to act responsibly in both social and ecological contexts for example, which means limiting family size – preferably to one child – as the possibility of indefinite life extension comes very close.

So yeah – wishful thinking, if not combined with planning and execution in reality, isn’t a reasonable or responsible behavioural modality, though it is certainly preferable to the sort of postmodernist nihilism that some display today.


JBP – Joseph Campbell – Cycles

JBP – Gary Kirby – Joseph Campbell – What I believe

Joseph talks of the circle and cycles in mythology

We know of an infinite class of possible systems.

Cyclic systems are among the most simple of systems, and are common in reality, and not all systems are cyclic.

Evolution seems to be an open system potentially encompassing all possible systems to some degree and in some contexts, the regular and the irregular, the knowable and the unknowable, the ordered and the chaotic.

Sure – recognise cycles where they are actually approximated, and recognise all the other variants also.

Life is far more complex than most people have any rational grasp of, and to a very real degree does embody the ineffable (when one actually gets deeply into the many layers of the foundational systems of life).


Foundations of Logic [Continued]

Foundations of Logic continued

Hi Andrei Mirovan

Consider this view for a moment.
Prior to the formal discovery of the most simple instance of possible logics (what we may call classical logic), the terms know and knowledge referred to a practical capacity to identify relationships that appeared to have some level of regularity or consistency in experience.

Then along came a group of philosophers, who decided to define these everyday terms to mean something within the world of logic they had discovered, based on the notion (now demonstrably flawed beyond any trace of reasonable doubt) that all of reality and all understanding must be based on this “classical logic”.

Now there remain sets of people who from time to time make claims about the supremacy of the structures of classical logic in humans making sense of the world, and in the behaviour of this matrix that we find ourselves embedded in that we call reality. I make the strong claim that, while classical logic is certainly applicable to many aspects of reality, and it does give us many useful tools, the claim that it is at the root of both reality and our understanding of reality has been falsified by many different realms of experiment, repeated many times.

It seems clear that all of what passes for knowledge is based in probabilities at many different levels.
Any attempt by philosophers to claim that the true meanings of the terms know and knowledge belong to the realm of classical logic is a hubristic claim based in ignorance.
The terms belong to the common usage, and as such must include the probabilistic and utilitarian aspects.

One can of course redefine any term to mean anything within any narrow domain (something lawyers and judges specialise in, if someone pays them enough money); but doing so does not alter the common and fundamental nature of these terms.

So while I think we agree that knowledge of reality in the strict sense of knowledge defined in classical logic is not available to humanity with 100% confidence; I think we may be poles apart in terms of what that means for humanity generally.

To me, being really clear about that simply removes much of the false certainty hubristically claimed by many throughout history, whose claims to truth can be much more accurately described as claims to power and prestige.

Truth and knowledge belong in the common domain, and they need to be there in the softer probabilistic sense.
Claims for the harder classical sense need to be dismissed as clearly refuted (in terms of their absolute applicability to reality in all cases, as distinct from the sense of their being generally useful in most cases).

[followed by]

Hi Andrei,

1 – My understanding of the probable evolution of the notion of truth in language is conjectural, based on my 50 years of interest in all aspects of life and evolution and behaviour and psychology and linguistic evolution and AI and the evolution of consciousness. I am very confident of the general shape of the schema, if not so confident of any of the specifics.

2 – Yes, certainly, the relaxed version is open to all manner of issues.
We each need to be conscious of all of the likely issues, and mitigate them to levels that appear appropriate (Yudkowsky’s Rationality: From AI to Zombies is a reasonable catalog of many of those errors).
The great power of the approach is its contextual utility – things that work in practice in times short enough to be useful in survival contexts.
The great danger is transposition to contexts that appear similar but differ in critical ways that invalidate the heuristic in use.
It’s called life.

3 – Truth claims appear to me to be applicable to any level of representation of any thing or relationship (at any level of relationship or abstraction – conscious or subconscious). Thus one can make truth claims about proposed facts or relationships.

One of the hardest ideas for many people to get is that our experiential reality appears to be a subconsciously created model of the reality beyond our senses, and is subject to many different levels of heuristic embodied in our sensory and neural apparatus.
Thus any recalled experience already contains at least two levels of implicit truth claim – one in respect of the experience and the memory of that experience, and one in respect of the correspondence between reality and the subconscious model of reality that we experienced (and each of those may have subcomponents).

These two claims are quite independent of any higher order claims we might make about higher conscious level abstractions we might develop.

So it is very easy for a seemingly simple statement to already embody 3 or more levels of truth claim (all probability based).

The way deep neural networks learn through reinforcement learning is quite remarkable – and beyond this post or forum.

[followed by]

It seems that reality is whatever it is – that I accept.
It also seems that we’re part of that in an existential sense.

And within that, in the experiential sense, we get to experience our subconsciously created models, and never reality itself.

So yes, certainly, it does appear to be the case that no map is ever the territory, and that all truth claims of reality are necessarily probabilistic in this sense, however tight might be the correlation in reality.

As one of my hacker buddies from the 80s used to say – it’s hard to beat the refresh rate on reality.

[followed by]

Hi John,

I say the minimum level of abstraction for words is two.
The subconscious abstraction of model from reality (often a very complex multi leveled set of abstractions in itself – but for simplicity’s sake let’s just call it 1, because in some instances it might be).
The further layer of association of symbol to model (the level of language – quite distinct from the level of the model that is our experiential reality – and again this would often involve multiple levels of abstraction, but for simplicity’s sake – let’s say 1).

And I guess it does very much depend on what one defines as abstraction (as a programmer I count any instance of a hierarchy of classes).
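As a minimal sketch of that counting convention (purely illustrative class names, assuming just the two layers described above):

class Reality:
    """Whatever is actually out there; never accessed directly."""
    pass

class ExperientialModel(Reality):
    """Level 1: the subconsciously assembled model of reality that we experience."""
    pass

class LinguisticSymbol(ExperientialModel):
    """Level 2: the word or symbol associated with some part of that model."""
    pass

def abstraction_depth(cls) -> int:
    # Count inheritance steps back to the base class, one per level of abstraction.
    depth = 0
    while cls is not Reality:
        cls = cls.__bases__[0]
        depth += 1
    return depth

print(abstraction_depth(LinguisticSymbol))  # 2, the minimum claimed for words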

[followed by on another and related thread about negative certitudes]

From my perspective – it is easy to dive down any rabbit hole of uncertainty essentially forever, into ever expanding realms. We each seem to be sufficiently complex that we can explore aspects of our selves indefinitely.

Evolution seems to have given us a lot of heuristics, that allow us to build our simplistic models of reality.

Logic seems to work well with those simple models.

Most people haven’t really gotten the degree to which uncertainty and simplification invade our experiential reality, and the conscious conceptual models we make of that.

That we understand as much as we do is little short of miraculous.

Logic is a tool, a useful tool.

And evolution seems to work with actions in reality.
Our conceptual understanding is important in as much as it impacts how we act in reality.

It is actions that matter, actions within timeframes that work in reality.
We cannot act within those timeframes and also deal fully with all of the complexities that are present.

[followed by]

Hi Andrei,

As I have stated many times, I have very high degrees of confidence in some things, and nothing beyond the possibility of doubt, and many things I rarely operationally doubt.

And I find the abstract realms of logic, systems and mathematics interesting and useful, but not necessarily accurate models of reality, more like useful approximations in some sets of contexts.

And they are the best modelling tools we have, and we’d be foolish not to use them.
And I simply urge caution at the boundaries – don’t be too confident that any “simple” or “elegant” model necessarily captures all the essential elements of complexity that are actually present.

[followed by]

If you want me to say that any model of reality accurately (100%) captures all of the complexity present – I think that is highly improbable, and will likely always remain highly improbable.

So I really don’t know how to say it any more clearly than that.

In a very real sense, the very notion of “objective truth” – taken to the nth degree, has a “Santa Claus” flavour to it – an entirely mythical simplification of something.

Useful heuristics for use in dealing with reality – those I have large collections of.

[followed by]

Hi Andrei,

No, actually it is very much a matter of science and computation and evolution of awareness. And very fundamental to each of them.

If the uncertainty principle tells us that there are actually limits of accuracy beyond which we may not go, then every model we make must contain at least that degree of uncertainty, and in practice a whole lot more due to many levels of measurement errors in all parameters measured.
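A back-of-envelope sketch of that limit, using the standard constants and an assumed momentum precision, shows how quickly the bound appears:

# The Heisenberg bound dx * dp >= hbar/2 puts a hard floor under how precisely
# position can be known once momentum uncertainty is fixed.  Constants are
# standard values; the scenario itself is illustrative.

hbar = 1.054571817e-34      # reduced Planck constant, J*s
m_e = 9.1093837015e-31      # electron mass, kg

dv = 1.0                    # suppose we know an electron's velocity to +/- 1 m/s
dp = m_e * dv               # corresponding momentum uncertainty, kg*m/s
dx_min = hbar / (2 * dp)    # minimum possible position uncertainty, metres

print(f"dx >= {dx_min:.3e} m")   # ~5.8e-5 m, already far larger than an atom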

A knowledge of systems space and algorithm space and computational systems more generally that are either unpredictable, or not predictable by any method faster than letting them do what they do, adds another layer of uncertainty. Things like rule 30, and fractals, and chaos, and irrational numbers.
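Rule 30 is a convenient concrete case. A minimal sketch of it (Python, illustrative parameters) shows a fully deterministic update whose centre column looks like noise, with no known shortcut faster than simply running it:

# Wolfram's Rule 30: new cell = left XOR (centre OR right).
# Fully deterministic, yet the centre column behaves like noise.

def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

width, steps = 101, 50
row = [0] * width
row[width // 2] = 1              # single black cell in the middle

centre_column = []
for _ in range(steps):
    centre_column.append(row[width // 2])
    row = rule30_step(row)

print("".join(str(bit) for bit in centre_column))  # looks random, is fully determined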

So there seems to be a demand from reality for many levels of fundamental uncertainty and un-knowability.

Thus the very idea of “Objective knowledge” in the hard formulation seems to have been falsified beyond any shadow of reasonable doubt.

Thus we are left only with the softer form, the useful approximations that are useful to us within certain contexts, or at certain scales, like “flat earth” working for a carpenter, or “round earth” working for a sailor, or “relativistic space-time” being good enough for a GPS system designer. Each approximation useful in context.

And for me, that is all reality seems to allow, ever.
Hard “objective knowledge” seems to be forever forbidden – like reaching the end of a rainbow, thus giving the very concept a “Santa Claus” like quality.

It really does seem to be something quite fundamental, at any level one approaches it, even Goedel found something analogous in the most abstract of realms.

[followed by]

Hi Andrei,

I’m all about soft objective knowledge – firmly based in probabilities – with fundamental uncertainties – that is me (at least to a useful approximation 😉 ).

[followed by]

Hi Andrei,

Certainly – agree with Popper about the discovery aspect of knowledge, which aligns with the Buddhist idea that a path worth travelling grows longer by at least twice the distance a master travels along it.

Wolfram’s explorations of theorem space seem to reveal similar trends.

I suspect that is a big part of why it took evolution some 4 billion years to produce us.

We are not very probable – yet here we are.

[followed by]

Hi Andrei and Pranav,

I’m not sure if this will achieve communication, nothing else has.

In terms of reality, the very notion of truth does not seem to apply.
It seems that at the finest scale, uncertainty is required.
So the idea that there must be “TRUTH” in order to have a probability doesn’t actually hold.
Probabilities may themselves be derived from probability distributions.

So one can build a probabilistic understanding about a reality that involves probabilities.
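A minimal sketch of that idea (Python, illustrative numbers, a standard Beta-Binomial style setup): the probability of an event is itself drawn from a distribution, and observation only narrows that distribution, never collapses it to a certain point.

import random

random.seed(1)

# Our "reality": the underlying chance of the event is itself sampled,
# not a fixed known constant.
true_p = random.betavariate(2, 5)

# Observe some outcomes generated by that uncertain probability.
observations = [1 if random.random() < true_p else 0 for _ in range(50)]

# Bayesian update of a Beta(1, 1) prior: our belief about p remains a distribution.
alpha = 1 + sum(observations)
beta = 1 + len(observations) - sum(observations)
posterior_mean = alpha / (alpha + beta)

print(f"hidden p ~ {true_p:.3f}, posterior mean ~ {posterior_mean:.3f} "
      f"(Beta({alpha},{beta}), still a distribution, not a point)")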

[followed by]

Hi Pranav,

I entirely agree with you that the concept of truth is parsimonious, a “least cost” option in evolutionary circumstances, and therefore likely to be strongly selected for, and therefore common, but that fact doesn’t make it 100% accurate. You have just beautifully illustrated my argument in a sense.

My statement that //In terms of reality the very notion of truth does not seem to apply.// does not say anything about reality with 100% confidence. It does say something about the nature of our understanding of reality, which is a different domain in a sense.

One can of course construct meta statements about the truth of probability statements, but none of them will tell you anything specific about reality with 100% accuracy, which was the original definition of truth we started out talking about. So in making such a claim you have changed domains – and are no longer talking about reality but about conceptual understandings of reality.

Yes – certainly, the idea of truth is much easier than probability.
It is much easier to believe that one can know reality than to do the years of study required to build an understanding of quantum uncertainty. In the everyday realm, of things of a size that the unaided human eye can see, and that human consciousness can recognise at native speeds (> 10^-2 s), such certitude is a useful approximation to something; it works in practice; but does that in any way make it *TRUE*?

[followed by]

Hi Andrei,

I go one step further, in stating that on the basis of the conclusion that Quantum uncertainty seems probable, and on the basis that reality seems to use irrational numbers, then it seems highly improbable that we can know anything about reality with *absolute* certainty.

To very useful approximations – yep – certainly that, but absolute – no – not that, that seems very improbable.

[followed by]

Hi Andrei,

I think you misunderstand what Popper was saying, or Popper was in error.
If you can give me a specific reference in Popper’s writing on this topic, I may be able to give a more confident answer.

In the general terms outlined, I can agree with what would seem to be a reasonable claim for Popper to make, that the ordinary mode of human “soft” assertions based on simple assumption sets, tends to deliver less probable knowledge than a set of probabilities derived from a more rigorous and recursive exploration into the many levels of errors and uncertainties that seem to actually be present, at the level of reality, at the levels of our sensing of reality, at the level of the model our subconscious creates from that, at the level of our conscious experience of that model, and at the more abstract levels of our interpretation of that conscious level experience.
Without extensive investigations into all levels of that structure, into the systems and uncertainties present at each level, all sorts of biases show up.
In that sense, I can agree with Popper, if that is actually what Popper meant (which I am uncertain about without explicit reference).

How one gets to a provisional understanding of confidence is important.
Many of the initial levels seem to be instantiated by systemic constructs delivered by genetic evolution.
Many more seem to be instantiated by constructs delivered by cultural (mimetic) evolution, and we are each (at least theoretically) capable of recursively instantiating levels beyond those (though few do).

How one approaches instantiating levels of systems within oneself, the level of distinction one builds of systems both within and without, how one critiques and evaluates, the levels of systems one adds to the mix and the levels of confidence one instantiates, can form profoundly complex systems, with multidimensional probability landscapes.

So yes – there is a very real sense in which each of us, as self aware individuals, at least to the degree of self awareness that we have, must take a level of ownership and responsibility for the levels of probability present. To do anything less is dangerous at many levels, including but not limited to the sort of levels that Aleister Crowley played with.
It is a seriously non-trivial problem space; many agents, many games, many strategies and levels of strategies, many levels of game space.

The probabilities are those available to me, from my investigations, my experiences, my intellectual efforts (which in many cases are based upon the efforts and experiences of others, across deep time and deep complexity).

So yes – there is an inescapable aspect of “seem” or “uncertainty”, instantiated at many levels; and if free will has any meaning it would seem to involve some sort of ownership and choice in the midst of that uncertainty – something essentially personal in a deep sense.

[followed by]

Hi Pranav,
I know what you wrote made sense to you, you would not have written it otherwise, and it appears so far from my understanding that I am not sure how to bridge that gap in any reasonable time.

I am clear that science has demonstrated, beyond any shadow of reasonable doubt, that “the search for an absolute or for perfection” cannot succeed in any absolute sense. We seem to exist in a reality that has both fundamental uncertainty (in terms of quantum uncertainty) and to contain classes of systems that are not even theoretically predictable.

In terms of point 2, it is several decades since I let emotional systems entirely determine anything, and I do take notice of their input as an important aspect of my assessment of anything.

Point 3. In my understanding I did not say that reality is unknowable. What I tried to quite explicitly say is that there are limits to the degree to which we can define anything in space and time, and those limits impose necessary boundaries in the correspondence between reality and understanding. So we can be very confident within certain limits, but beyond those limits the confidence degrades to zero. So some things are available with very high degrees of confidence, and others not. Understanding those limits and why they are there is a form of knowledge, a form of useful patterns, and it is a very different form from the classical notion of any sort of absolute correspondence. In that sense of an absolute correspondence, it seems to be unavailable, but can be very closely approximated in some contexts (not so closely in others).

Point 4 – Language does not require formal truth, all that is required is sufficiently accurate correspondence – something “near enough to be useful”.
I understand that for many people, the concept of “truth” defines the experiential reality. I am not in that set of people.
I have been operating on the basis of probabilities for over 50 years.

Point 5 – I understand that many people do in fact live in experiential worlds that are truth based. I am not one of them. Altering the definition of truth as you suggest would destroy the entire schema I was attempting to construct. And in practice what you suggest has a certain utility and does seem to be what most people do.

[followed by]

Hi Pranav Parijat,

All I have is probabilities, some of them are close enough to one to be one in practice in most contexts, but nothing is beyond question if the context seems to require it.

For me, that is kind of definitional.
If anyone accepted a *TRUTH* 100%, then it becomes by definition something that cannot be questioned (the 100% takes it out of the questionable category).
Thus I find the notion of 100% *TRUTH* dangerous, as it closes off possibilities.

And certainly, there are many contexts where one needs to close down low probability “possibility spaces” because of urgency and necessity; and I find that is best done on a probability basis rather than by using the notion of absolute truth.

Thus I can see the evolutionary utility of “absolute truth” as a notion in cultures, because of the simplicity it delivers in a profoundly complex and dangerous world. But when one has sufficient security and tools to start exploring beyond culture, then one needs to move from certainty to uncertainty, to levels of confidence. This is what science actually requires scientists to do, even if many don’t understand what they are doing, and are simply going through the motions of doing the probability calculations as what needs to be done to get a paper published (I know tenured professors who are like that).

I often find myself in a profoundly uncertain space, as I hear words from others and rather than localising to a single interpretation, my mind delivers clouds of possible interpretations with approximately equal probabilities; whereas I can see from the speaker’s body language that they have only one meaning available to them – I just have no real idea what it might be; or perhaps more correctly, I can see what that is but it is falsified beyond any reasonable doubt in my world, and I have no useful translation matrix to deliver an interpretation close enough to something reasonable that I can work with it. Often there simply is no usefully short way to break through the *TRUTHS* that are present to expose the possibilities beyond them, because breaking such *TRUTHS* is always a profoundly emotionally unsettling experience, as it requires reconstruction of entire “landscapes”, and that takes time.

For some the experience can go over the boundary of acceptable risk and place them in a state of profound anxiety from which there is no simple recovery path. Thus I try and avoid pushing anyone over that boundary, and just leave hints that people can follow as and when they feel comfortable. I suffer from vertigo, and know how useless it is, when climbing mountains, to have someone who has never had it shouting at me to hurry up. Doesn’t work. I’m best just left alone to sort it out in my own time, as I manage to calm down the overexcited regions of my brain and restore some sort of equilibrium to the system as a whole, and I find that is best done with all motor function suspended.

[followed by]

Hi Pranav Parijat,

When you are willing, just try out, without necessarily believing it, that what I wrote might work for me, and consider what sort of world that might be, one without any *Truth*, only useful approximations, and contextually useful tools, and things of that kind. Nothing in relation to the world pointed to by my experiential reality that is solid. All of it subject to question, to uncertainty, to boundaries of probability within sets of contexts (where even the identification of likely context is a probabilistic determination).

Perhaps the classical notion of *Truth* has such a hold on your mind that it may not be challenged, even for an instant.

For me the idea of *TRUTH* in respect of reality – as in 100% correlation between my understanding of any aspect of reality and reality itself, has the same sort of probability of existence as Santa Claus. I have had that understanding for about 50 years. I have explored a lot of territories (physical, logical, strategic, intellectual) using that paradigm over that time.

I get that the sorts of relationships I see are not often seen, and that few people experience existence as I do, and that communication on subjects like this (as in a conceptual system present in one mind being duplicated in another mind), if it happens at all, is rare. And I do sometimes feel the need to try, to the best of my limited abilities.

My objective is not *TRUTH*.

My objectives are survival and freedom – mine and everyone else’s.
And the very notion of *TRUTH* seems to me (very clearly) to be a threat to both of those values I hold most dear.

[followed by]

Hi Andrei,

I think Page 28 of Popper points the way towards much of what I am saying, without explicitly providing clear reference to the evolutionary and systemic mechanisms that to me are clearly present, and without going so far as to question the notion of truth itself.

On pages 228-9 he makes the distinction between those who use probability to determine truth and fallibilists, who determine error.
I have a certain sympathy for his position, in that one can gain far greater confidence that something is false, than one can gain that it is correct. And there remain uncertainties at many levels of distinction, measurement, understanding and right on down to quantum uncertainty.

What is important to me are two values – survival and freedom, my own and everyone else’s (and everyone else gets included because they are critical to my own, considered in the longest possible terms – thousands, billions of years).

Even the laws of thermodynamics can only have “not yet falsified” levels of confidence, and the fact that they have withstood billions of tests imbues them with a level of confidence that would require strong evidence to warrant any challenge.

In terms of survival and freedom, I am looking for what works, in practice – not any sort of mythic purity.

I have some idea of how complex and messy the world can be, we can be.
Not only is there Heisenberg uncertainty, but all the uncertainty of chaotic, fractal, and complex systems, and the irrational numbers and human neural networks and biochemistry etc. In dealing with irrational numbers, computation cannot deliver absolute certainty, only successively better approximations. Any irrational number may be computed indefinitely without terminating or repeating – that, in effect, is the definition. How often do you see Pi and e in equations – to name just a couple.
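A minimal sketch of that point about irrational numbers, using the Nilakantha series for Pi (illustrative only): each term improves the approximation, and the process never terminates.

import math

# Nilakantha series: pi = 3 + 4/(2*3*4) - 4/(4*5*6) + 4/(6*7*8) - ...
approx = 3.0
sign = 1
for n in range(2, 40, 2):
    approx += sign * 4.0 / (n * (n + 1) * (n + 2))
    sign = -sign

print(approx, math.pi, abs(approx - math.pi))  # the error shrinks but never reaches zero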

So in aiming to survive, and to maximise whatever approximations to liberty are available to me, then I must select heuristics that work effectively in the contexts I find myself in.
And I find myself in profoundly complex contexts – cosmological, geological, biological, economic, cultural, philosophic, conceptual, computational, strategic – games upon games, games within games. I have identified some 20 levels of complex systems in action that seem to be currently relevant to our survival.

There are several profound fronts that must be approached simultaneously.
Awareness of the traps of *TRUTH* is one.
Awareness of the traps of nihilism and relativism is another.
Awareness of our individual creativity and the responsibility that comes with that.

Awareness of the failure to appreciate the role of cooperation in evolution, with the resulting myopic focus on markets and competition, is another.

All fallibilist assertions in a sense, yet all also pointing deeply to a different sort of reality.

It is not that I see probability pointing to truth.

What I see is probability pointing to a reality that is fundamentally probabilistic – which is something profoundly more complex than simply the idea of uncertainty in relation to truth, but rather inverts the entire premise, and points to confidence in falsification of the very idea of truth in any sort of absolute sense in respect of reality, leaving only the heuristic sense, of what seems to work in particular contexts.

It seems in a very real sense that evolution has constructed us in this fashion.
We are each the instantiation of some variation on themes of what has worked in history, at least well enough to survive to date.

What right, what hubris, have we to expect anything more?

Like Popper I am interested in relevant heuristics – things that work, in practice, in useful times.

In a sense you can say that I am making the strong claim that the Tarskian sense of True is not applicable to reality, however useful it is in the design of formal languages and models that deliver many of our best approximations to understanding whatever reality is.
And I get that is an idea that will be difficult for many to get any grasp of for long.

In a sense, one could say that I promote a meta truth, which goes something like, it seems that reality does not allow itself to be defined absolutely, and requires some sort of balance between order and chaos at all levels (both physical and abstract), which seems to demand of us successive levels of both humility and acceptance of diversity in ways that may not be at all comfortable. Some approximation to this approach seems to be required to deliver reasonable probabilities of both survival and freedom.

While I understand and appreciate the distinction that Popper builds between verisimilitude and probability, I use probability across all domains and all distinctions, such that the very notion of truth becomes dispersed and unattainable past certain limits, and outside certain contexts, and no amount of computational ability can escape those boundaries.

I agree with the difference Popper distinguishes (within his assumption sets), and add another that fundamentally undermines some of his implicit assumptions.

Probability, confidence, is an aspect of every dimension of understanding, in my world (including this one – I’m quite confident of it).

[followed by]

Hi Andrei,

Point 1 – Evolutionary approach – not sure that many of the aspects I use were available at the time for Popper, and he was in the general systemic space, so yes – some sort of approximation.

Point 2 – Why is it easier to prove something false than to prove it true? Simple. If there is a population of size x that one can sample, containing a subset of size y, where x is very large and y is very much smaller than x, then finding a single member of y that contradicts some general conjecture about x does in fact falsify the conjecture; whereas to establish that no such member of y exists (and so confirm the general conjecture) one would have to sample the entire population of x – as y might contain only one individual, and it might be the last one you sample.
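A minimal sketch of that asymmetry (Python, illustrative numbers): one counterexample settles the universal conjecture immediately, while confirming it would require checking every member of the population.

population = list(range(1, 1_000_001))      # the xs

def conjecture_holds_for(x):
    # the universal conjecture under test: "no x in the population equals 777,777"
    return x != 777_777

# Falsification: stop at the first counterexample.
checked = 0
for x in population:
    checked += 1
    if not conjecture_holds_for(x):
        print(f"falsified after checking {checked} individuals")
        break

# Verification of the universal claim would require all 1,000,000 checks,
# and even then it would only hold for the population actually sampled.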

Point 3 – I mean Freedom in both senses, freedom of will and freedom of action, in as much as either may be approximated, acknowledging the impact of “influence” (in the probabilistic sense, as distinct from strict cause in the binary sense) up and down the many levels of our being. I do not believe it is possible to be entirely free of influence, and I do believe it is possible to be influential to a very high degree, to the degree that one identifies and allows for all the levels and types of influence present – or perhaps more correctly to the degree that one approximates such allowances.

Point 4 – verisimilitude – what is it closer to? That is the question.
Here is where Popper and I seem to part ways.
The idea of Truth that Popper seems to hold in this context is some perfect correlation to the state of reality.
The evidence we seem to have from experiment seems to indicate that at the quantum level reality does not allow such knowledge, ever. It seems to actively prohibit it.
Thus the idea of being closer to something I agree with. But the something is not any sort of absolute knowledge of the state of reality, but the best possible approximation that reality allows. It has a fundamental fuzziness to it. It doesn’t fit well with the classical notion of *Truth* as being some sort of singular perfect “thing”.
What reality seems to deliver to us is fundamental uncertainty, within certain limits. When we aggregate those units over time and space they populate the probability distributions in ways that give us great confidence about their behaviour over those larger aggregates. The smallest unit of time a human can perceive is some 10^40 of those fundamental units – so on the sorts of scales that humans can natively perceive, many things can be very predictable indeed – those probability distributions are effectively solids at that time scale. So computers and engineering work in practice (at least to the degrees that they do). Hence to our native perceptions of time and space, classical notions of *Truth* are a very good approximation to something – but not so much at the finer scales.
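As a back-of-envelope check of that scale separation (using the commonly quoted Planck time and an assumed 10 ms perceptual threshold; with these particular values the ratio lands near 10^41, the same broad region as the figure above):

import math

planck_time = 5.39e-44        # seconds, commonly quoted value
perceptual_threshold = 1e-2   # seconds, assumed order of magnitude for human perception

ratio = perceptual_threshold / planck_time
print(f"roughly 10^{math.log10(ratio):.0f} fundamental units per perceivable instant")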

I agree with Popper in the sense that the idea of *Truth* provided a useful tool, a useful approximation to something, in the conditions of our past.

But the conditions of our present are changing on double exponentials.

Many of the heuristics that worked in our past no longer work as we cross critical thresholds.

The idea of *Truth* is one that is failing in critical areas.
The idea of markets as useful measures of value is another, one that fails in the presence of fully automated production.

We are in times of profound change, in every dimension; physical and systemic (intellectual).

Some of the ideas that served our ancestors well now pose existential level risk to us – *Truth* and *market values* are two such.

Point 5 – A game may be generally thought of as any level of interaction between agents within certain rule sets that may involve differential utility or reward in some dimension. The utilities of strategies are typically some complex function of the rule sets present. The sets of possible strategies appear to be infinite, though typically one only encounters low order instances.
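A minimal sketch of that definition (Python, illustrative payoff numbers in a standard prisoner's-dilemma shape): two agents, a rule set expressed as a payoff table, and strategy utilities that are a function of those rules.

# (my move, other's move) -> (my reward, other's reward); numbers are illustrative only.
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def utility(my_move, other_move):
    return payoffs[(my_move, other_move)][0]

# The utility of a strategy falls out of the rule set and of what others do;
# change the payoff table (the rules) and the utilities change with it.
for mine in ("cooperate", "defect"):
    print(mine, [utility(mine, theirs) for theirs in ("cooperate", "defect")])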

Point 6:
a) probabilities. I have stated repeatedly, that all aspects of my understanding are probability based – everything. Any aspect you can conceive of (and perhaps some you haven’t yet conceived of the possibility of). If you can conceive of any level of measure of anything, any metric at all, then in my understanding it will have a probabilistic instantiation (confidence limits on all dimensions of measure).

b) the North Star analogy of *Truth*. I actually like the north star analogy of truth. Classically people thought of Polaris as a fixed star, a constant pointer to north. Now we know that it is moving, that the light we see from it today tells us where it was 433 years ago, not where it is now. That rather than being a single point, it is a multiple star system, with one star over 5 times the mass of our sun, and another half as big again as our sun. But to the naked eye, it is just a point of light.
So the idea that Polaris is any sort of fixed ideal is entirely mythic, unreal, and it is a useful approximation to something as a practical heuristic.
In exactly that sense I can agree that the classical notion of *Truth* was useful approximation to something for practical purposes of the time, but that “the times they are a changing” – exponentially, and we need to update our understandings accordingly if we are to have useful heuristics at the boundary conditions we are now exploring.
Just like thinking of the Pole Star as fixed was a useful tool, so too the notion of *Truth* was a useful tool, but now we have great confidence that it cannot be that simple – it is actually far more complex, if one is pushing that boundary.
If all you want to do is go sailing – fine – it works as a useful heuristic (an approximation to something that is useful in practice in that context).
If however, you are interested in exploring the boundaries of understanding, the boundaries of intelligence, the limits on survival of complex systems; then something else is required.

Points 7 & 8 – You actually need to hold on to the idea that I attempt to explicitly state in every piece that I write, that all of my statements are probabilistic, in every dimension of measure.
In that sense, I am far more aligned with Popper than with most other philosophers.

And I can see how what I am saying is almost impossible to interpret from within a classical box. One has to be willing and able to step beyond the implicit constructs, go beyond the nine dots to draw the 4 lines, but the dots of *Truth* are just such an implicit aspect of culture they are the invisible boundary, they appear as the matrix of being, rather than a proximal construct.

[followed by]

Hi Andrei,

As I said, it is very difficult, almost impossible, to break out of the box of *Truth*.
One needs experience of reality.
One must be prepared to give weight to that experience over one’s most cherished dogma (logic/interpretive schema).
It is not easy.

Someone like me, who does that, who has done so, is left with probabilities (uncertainties) on all things, all observations, thoughts, conjectures (having some understanding of the processes by which they are generated), and with some understanding and experience of the world of the very tiny.
It is an uncomfortable journey, having all certainty removed, being left with profound uncertainty; yet it seems to offer greater security in the long run than the false hope given by the delusion of *truth*.

Reality doesn’t seem to obey philosophy’s rules in all cases.
Dan Dennett seems to be wrong, much as I like and respect Dan.

Deviant – yes – that I am.

Every new idea must be, by definition.
Does that necessarily make the notion less useful, less accurate an approximation to our reality?
No – doesn’t mean that – necessarily.
I could be wrong. I have to admit of that probability. But in this case, I have tested it so deeply, and it seems to work, seems to pass all tests, even the tests of Wolfram’s logical systems.

Seems to me to be a close approximation to the best we can do.

Seems to require of us a little humility, a little respect, a lot of acceptance of diversity (in every dimension we can distinguish).

[followed by In a related thread]

Kind of agree and kind of disagree.
The degree of correlation is the issue.
Take the term snake as an example. For most people “snake” is a quite low resolution but very useful approximation to reality. What it means is something hard to see, that can be dangerous (if venomous or very large). For most people “snake” doesn’t contain much information about evolutionary history, embryological development, context sensitive behavioural strategies, biochemistry, anatomy etc; and even if they do, the accuracy of the correlation must necessarily be low. Our brains simply do not have the computational grunt to handle the level of detail involved.

Of course evolution has to select for Turing machines that on average over time compute solutions to real world problems within the time and energy available – so there have to be sets of heuristic shortcuts embodied in those systems at all levels. That is what it means to be a human being in a very real sense.

So there must be degrees of usefulness in our understandings, but the idea that those degrees could ever form a 1:1 correspondence with reality seems highly improbable to me; useful heuristic in a particular context – yes certainly that. But more than that – no, that seems very unlikely.

At the deeper level, there are indications that at depth it is simply not possible to say with certainty to the last degree, what is. Even if we could actually measure something with total accuracy (which seems unlikely) by the time we assembled that information into something our consciousness could perceive, it wouldn’t be that any more, it would be in some other state in many subtle aspects. So I have several levels of confidence, from both logic and experiment, that 1:1 accuracy is unlikely, ever.

And it seems that beyond that, reality seems to have limits of knowability that Basudeba has outlined clearly in previous posts. And in those aspects I agree with him.

And we agree on substantial parts – clearly.

For all those reasons, I prefer something that embodies the sense of “useful approximation” rather than hinting at anything more substantial than that. For me, Heuristic is a term that “fits the bill”.

[followed by]

Basudeba Mishra
For me it is a simple numbers game.
The smallest thing that our eyes can resolve, the smallest grain of dust, contains more atoms than any human brain is capable of consciously apprehending, yet we resolve it as a single bit of a more complex picture.

I can look at a TV screen and see thousands of little coloured dots flicking on and off, but I cannot both do that and see the picture they form. I can adopt one or other mode of interpretation (or other possible modes, but I won’t go into them here – 2 will do for illustrative purposes). Which one is correct?

Which is more useful?
Depends on context.
Am I looking at the screen for recreation, or as a technician looking for a pixel fault in a display?

Interpretation, utility, meaning, depends on context.
Evolution is ultimately the context of survival.
Will this experiment with big brained hominids work, or will they fall foul of sets of heuristics that worked for their ancestors, but are no longer appropriate in their exponentially changing reality?

The idea of *Truth* seems to me to be such a notion.
A useful heuristic that worked in many social contexts until quite recently, but now, with quantum mechanics chipping away at one side, and AI and virtual reality approaching from a completely different paradigm space, and complexity theory, maximal computational complexity, and notions like density matrices there seem to be a great many limits to our ability to “know” anything at any level beyond some sort of “useful approximation” relevant to the context.

I’m all for science, logic, computation, abstraction; and it has to come with a sort of humility that accepts a sort of fundamental ignorance that no AGI (Artificial General Intelligence) is going to be able to penetrate, for the levels of non-computable complexity do in fact seem to exist as real and fundamental aspects of being. Even such a simple thing as an irrational number like Pi can never be computed in full – it may be approximated to any degree of accuracy required, but never totally.

[followed by]

Hi Basudeba Mishra,
I don’t believe I am mixing the two ideas of number, and I am very familiar with both.
If I say I see one speck of dust, what does that mean?
It is a reasonably accurate articulation of the fact that I managed to resolve some speck of something very near the limits of the resolution of my eyes.
That is a piece of information. 1 bit of information in at least two different senses.
It says something about something, but how much?

My scientific training is telling me that a simple dust mote will contain some 10^10 molecules at a minimum, and each of those molecules will contain atoms, and each of them quarks, gluons, photons etc. The quantity of numbers involved in assigning momenta to all of those constituent parts would be more than my brain could handle in a decade, let alone all of the other properties involved.

Another example I often use. If we could somehow take a snapshot of all the atoms in our bodies, and blow it up to a size we can see, and we had been looking at it since the universe began some 14 billion years ago, at 100 atoms per second, we would be about 2% of the way through it.

If we could somehow take a movie of a single active enzyme site within one of our cells, for just one second of real time, then slow it down enough so that we could actually see the movement of the water molecules (and not simply blurs), then it would take some 30,000 years to watch that one second of video.

We are complex.
Hugely, vastly complex.

Yes our brains are amazing, and they are the result of many levels of complexity, and cannot possibly model such things in anything like real time.

And within certain scales of space and time, they seem to be very effective heuristic machines that let us accomplish amazing things.
I have had a strong interest in neural function for about 50 years. I have learned a little bit in that time. I make no claim to full knowledge, and I have got some very useful approximations to things that give me a reasonably good “broad brush” sketch of how the many levels of those hardware and software systems work to deliver the consciousness I experience – and it is very much a cartoonish sort of a sketch, not a high resolution electron micrograph sort of image.

So for me, to know something means simply to have some sort of approximation that is useful in context.

That seems to be how evolution works. All that evolution requires to work, in a very real sense.

Why would we have the hubris to expect anything better than that – ever?

[followed by and in a related thread]

Hi Frank,
I’m not so sure that there are absolutes in respect of reality.

Just think about the equations we use to describe things; most of them have Pi in them somewhere, or “e”, both of which are irrational numbers – i.e. numbers that one may compute forever without fully defining, available only as approximations to a certain degree of utility.

I have the first billion digits of pi on one of my NAS boxes, and have written some programs to do statistical analysis on them, and am happy that the sequence is a reasonable approximation to random numbers (though not actually random – so to speak).
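A minimal sketch of that sort of check (not the original programs; this assumes the mpmath library to generate the digits, and uses only the first 100,000): count digit frequencies and compare them with the uniform expectation.

from collections import Counter
from mpmath import mp

n_digits = 100_000
mp.dps = n_digits + 10                      # working precision in decimal places
digits = mp.nstr(mp.pi, n_digits)[2:]       # drop the leading "3."

counts = Counter(digits)
expected = len(digits) / 10

# Crude chi-square style statistic: for "random looking" digits each count
# should sit close to the uniform expectation.
chi2 = sum((counts[d] - expected) ** 2 / expected for d in "0123456789")
print({d: counts[d] for d in "0123456789"})
print(f"chi-square vs uniform: {chi2:.1f} (9 degrees of freedom)")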

When you actually contemplate that the set of irrational numbers (unknowable numbers in a sense, the sense of knowing them with 100% accuracy) is a greater infinity than the set of rational numbers; and that reality seems to be fundamentally reliant on such irrational numbers; it is really hard to hang on to any notion of “absolutes”. One does in fact seem to be forced into accepting useful approximations – and without doubt many of those approximations are very useful indeed – they give us these computers we are all using, and the GPS network, etc – very useful approximations, but *ABSOLUTES* – no – I think not.

[followed by]

Hi Frank,

LOL.
I think if you lived in Kaikoura we might be close friends.

I don’t fit well into any classical set of definitions available to philosophy. I seem to be an eclectic mix of the many paradigms and systems that I have encountered, making use of all the many levels and types of processing natively available to human hardware.

I think we are very similar, yet I find this distinction.

Consider, that if reality itself forbids simultaneous absolute information about both momentum and position, then one can make a reasonable case that reality forbids “ABSOLUTE TRUTH” even in principle.
And all we can do is make a probabilistic case for that assertion, and it seems to me that such a probabilistic case has in fact been made – beyond all reasonable doubt.

And at the same time, we can certainly get very useful approximations to many sorts of things, approximations that are good to the level of one part in 10^20 or better, so for all *practical purposes* True. Computers wouldn’t work unless that were so.

I’m all for determining within reasonable bounds what the probabilities present seem to be in particular contexts, and therefore what confidence limits we can apply to those contexts.
And I am very conscious of the many classes of systems that do not allow of any sort of computation or prediction (chaotic or seriously complex systems).


Max Tegmark’s Book – Life 3.0 – bought it this morning – Updated 21 Sept 2017

Will AI enable the third stage of life?

In his new book Life 3.0: Being Human in the Age of Artificial Intelligence, MIT physicist and AI researcher Max Tegmark explores the future of technology, life, and intelligence.

In summary, we can divide the development of life into three stages, distinguished by life’s ability to design itself:
• Life 1.0 (biological stage): evolves its hardware and software
• Life 2.0 (cultural stage): evolves its hardware, designs much of its software
• Life 3.0 (technological stage): designs its hardware and software

[Review starts about 2 screens down]

Agree with Max, and with the idea of fuzzy boundaries across multiple aspects of very complex domain spaces.

So one could say that life 2.0 got seriously under way with the development of abstract language and the design of stories, which probably occurred somewhere between 8,000 and 5,000 years ago, possibly a bit earlier.

And it could be argued that the invention of writing was the start of evolving our hardware, as an adjunct to the transmission of information, and it was a very slow start, waiting thousands of years before the invention of the printing press, then hundreds of years for the telegraph, and now we have digital storage and transmission as well as computation.

And all three forms continue in all domains, so it is a very complex and increasingly dimensional information landscape, particularly when one factors in the many aspects of strategy and risk mitigation and the influences of those over the deep time of genetic and cultural evolution on our current dominant cultural and technological and behavioural and conceptual phenotypes; and how those instantiate and influence each of us individually.

In the deepest sense of risk mitigation, and acknowledging all the many real risks involved in AGI, it still seems that AGI is the most effective risk mitigation strategy available, when all forms of risk are factored in. And that statement is based on the assumption that we very quickly recognise the risks posed by reliance on markets and money, and the risks posed by the twin tyrannies (of the majority and the minority) across all domains; and rapidly instantiate global level cooperative strategies that deliver the reasonable needs of survival and freedom to every individual human – no exceptions.

Without that sort of demonstration in reality of our respect for sapient life, we are all at serious risk.

[I bought the book and have read it – this critique was completed 21 Sept 2017]

I like Max’s style, and respect and align with many aspects of his thinking, and there are some significant failings and omissions, and it is one of the few books I have paid money for in recent times – so it is something I value in many different senses, money, time, intellectual breadth, etc.

As a book review:

The introduction is interesting, but fails to account for the effect of distributed manufacturing, and the ability of such independence from any sort of trading system to dismantle the very concept of markets and money.

While I have some substantial criticisms of some of the ideas, I am very much aligned with the general trajectory of Max’s thinking and work. Well Done!!!

Chapter 1

Max does a reasonable job of defining life as information, but I would take it explicitly deeper.
I would say that life is not simply the ability to replicate, but to be able to do so with variation.
The error rate in replication is critical, too high or too low and nothing much happens.
Similarly if one goes back to the systems that allow matter itself, the level of quantum uncertainty (the error rate if you will) is a critical factor in the emergence of complexity and life.
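A toy sketch of that error-rate point (Python, all parameters illustrative): replicate a bit string toward a target under different copy-error rates; too low and the search stalls, too high and gains are destroyed, and only an intermediate band makes progress.

import random

random.seed(0)
TARGET = [1] * 50

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(error_rate, generations=200, pop_size=50):
    # Start from random genomes; each generation copies the best with errors.
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        parent = max(population, key=fitness)           # crude selection
        population = [
            [bit ^ (random.random() < error_rate) for bit in parent]
            for _ in range(pop_size)
        ]
    return fitness(max(population, key=fitness))

for rate in (0.0, 0.01, 0.5):
    print(f"error rate {rate:>4}: best fitness {evolve(rate)}/{len(TARGET)}")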

Max skips the role of cooperation in the emergence of complexity. That is a serious failing.

I would argue that Max also glosses over the role of evolution in the emergence of our operating systems, and the various levels of incentive to action and incentive to willfulness contained therein, and overestimates the degree of choice involved in the actual action of most people.

And yes I agree with him that there is the emergence of a distinction of design, and it is one that is gradual and shared with older systems at the same level (recurring to as many levels as are actually instantiated in any specific individual).

There do in fact appear to be non-trivial degrees of complexity present in the interplay between evolution and choice even at the highest levels of awareness.

Certainly there is a much clearer degree of separation in the degree to which operating algorithms can be instantiated and modified within the lifetime of a single instance of an individual entity. And even in that aspect there does seem to exist considerable fuzziness.

Thus the substantive difference is not necessarily to design its software but to instantiate new and different and potentially novel software that is more appropriate to survival in current contexts.

The claim “Your synapses store all your knowledge and skills” seems rather too strong, and certainly the synapses can store about as much information as Max claims.

Again the claim “enabling us to communicate through sophisticated spoken language, we ensured that the most useful information stored in one person’s brain could get copied to other brains” seems too strong, and it is certainly true in some instances.

In other instances it seems clear that even decades is too short a time to accurately write out all the new and useful information that a really active modern brain can instantiate.
Thus the amount of information within some brains is always likely to vastly exceed the amount that is actually communicated to others.

Max goes on to make a series of claims that I find outrageously false “None can live for a million years, memorize all of Wikipedia, understand all known science or enjoy spaceflight without a spacecraft. None can transform our largely lifeless cosmos into a diverse biosphere that will flourish for billions or trillions of years,…”. Those have in fact been my clearly stated intentions for over 42 years – since October 1974, since the logic of the possibility of indefinite life extension instantiated in my brain beyond all reasonable doubt.

I may not yet have dotted all i’s or crossed all t’s in the process, and it is substantially closer than it was 42 years ago, and I may not make it, and I might just manage to be a part of the process that does instantiate those things and does stick around long enough to see plate tectonics transform the face of our planet, and conscious sapient life (human and nonhuman, biological and non-biological) spread across all accessible galaxies.

Again – I claim Max misses too much when he defines life 2.0 as designing its software. Life 2.0 is about the ability to instantiate new software independent of biological evolution of the bodies that instantiate it. The impact of evolution in the depths of that process must not be underestimated. To claim that even a substantial portion of it has been designed seems to me to be hubristic delusion.

And I do acknowledge where Max is going with his main theme, and I make the strong claim that he vastly underestimates the complexity present and the continuing importance of evolution at ever more abstract levels. And understanding evolution in this sense means to be able to see the fundamental role of cooperation in the emergence of all new levels of complexity, and the need for attendant stabilising strategies to detect and remove strategies that cheat on the cooperative.

I agree that AGI is close, no argument there.
I disagree that many AI researchers have any real understanding of the degrees of complexity of the relationships between the systems instantiated by evolution that actually allow us to survive.

The depth of the influence of deeply evolved and interconnected systems on our survival probabilities is a matter of substantial disagreement across communities of understanding.

Certainly we have instantiated some amazing technologies, and many would contend that we have substantially added to the existential level risk in doing so.

I am not in any sense a fan of returning to any sort of mythic simplicity.

I am very much a fan of understanding the degrees of complexity present, and the many ways that evolution has found to reduce risk in practice, and of instantiating as many independent such systems as we can to mitigate existential level risk at the same time as we value individual life and individual liberty.

It is a very complex and highly dimensional probability space that we find ourselves in.

There are not, nor can there be, any sort of hard deterministic solutions to the problems (the logic of that is clear beyond any shadow of reasonable doubt).
And there is a great deal we can do to reduce that existential risk, and we need to start doing it very soon.

There is not, nor can there be, any singular “What will happen, and what will this mean for us?”

That level of confidence instantiates one of the greatest levels of risk to freedom – the twin tyrannies (minority or majority).

Misconceptions

Max asks a series of questions:

What sort of future do you want?
One where all sapient individuals have minimal survival risk and maximal freedom (including reasonable empowerment with the tools to explore and instantiate that freedom in whatever way they responsibly choose).

Should we develop lethal autonomous weapons?
No. Not compatible with minimal risks to sapient life.

What would you like to happen with job automation?
Ensure that everyone has access to the products of such automation.

What career advice would you give today’s kids?
Explore everything, yourself and your values highest among them. Question everything. Trust, and be alert for cheating strategies (all levels).

Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth?
I want freedom, that doesn’t mean leisure necessarily, and it does mean not having to work simply to survive. I want to choose what values I give my time to. I want the tools to do whatever I responsibly choose.

Further down the road, would you like us to create Life 3.0 and spread it through our cosmos?
I want to travel the cosmos myself, in time, as the technology is fully tested. I would expect to travel in a community that included a range of intelligences, from human to AGI, across a substantial spectrum, that might include some fully artificial biologically sapient organisms and a range of cyborgs.

Will we control intelligent machines or will they control us?
In a society that respects sapience and freedom, there will be control only in the case of immediate existential threat to another. In all other cases it will be a matter for negotiation and agreement. I expect to have biological and non-biological friends, and quite a few who don’t fit neatly into either camp.

Will intelligent machines replace us, coexist with us or merge with us?
Coexist and merge is my plan. Nothing else seems to offer significant survival probabilities.

What will it mean to be human in the age of artificial intelligence?
It will mean whatever we choose to make it mean. Meaning is in story. We can write our own stories.

What would you like it to mean, and how can we make the future be that way?
I want it to be an age of security through cooperation. And we need to start by recognising that markets have passed their peak of systemic utility and are now heading steeply into severe existential risk territory. We need a far more cooperative base to our society. Implementing a Universal Basic Income seems to be the best transition strategy available in the short to medium term.

Max raises the notion of sapience (intelligence) vs sentience (feeling). My vote is for sapience. I can see sentient entities that would have not the slightest hesitation in killing me (bears, tigers etc). That sort of risk I cannot tolerate. Sapience at least allows of the possibility of acknowledging the right of all other sapient entities to exist, and for all such entities to benefit from that awareness. In a very real sense, that is a minimum definition of sapience.

Cheat sheet
Life – the definition given is insufficient, though it is close to something.

Control is too hard a term; provided what is intended is anything less than extermination, the best we have is influence.

Chapter 2

I can’t choose between the two options in the Winograd schema example on Wikipedia: https://en.wikipedia.org/wiki/Winograd_Schema_Challenge
Nor could my wife – either worked for both of us, and both of us knew which applied to what in either case.
I don’t think I’m a machine 😉

Substrate independence for programs – yes in a sense, and also a major caution.
Just because a program can run on any substrate does not mean that the effect of a program is the same on any substrate.
A program that takes 30 seconds to compute an “avoid threat” response will be destroyed by many threats that manifest in less than 30 seconds.
A program that executes the same response in 30ms is likely to survive much more frequently in reality.
How fast a program executes on any given substrate and how much energy it requires to do so are very important aspects in reality.
We may be able to create human level AGI quite soon, but it will likely take something approaching the energy of a small city to power it.
Getting that power down to 50 watts may take a while.
So yes substrate independence, and there are other very important factors present at the same time. To get a reasonable picture one must be conscious of all of the important influences in the context.
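To make that concrete, here is a toy illustration (my own made-up numbers and distribution, not anything from the book): the same "avoid threat" logic run on a fast and a slow substrate, against threats whose onset times vary. Substrate independence of the program does not make the outcomes equivalent.

```python
# Toy sketch: how response latency changes survival, all else being equal.
import random

def survival_rate(response_time_s, trials=10_000, seed=1):
    """Fraction of threats survived, assuming threat onset times are spread
    uniformly between 10 ms and 60 s (an arbitrary illustrative distribution)."""
    rng = random.Random(seed)
    survived = sum(rng.uniform(0.01, 60.0) > response_time_s for _ in range(trials))
    return survived / trials

print("30 s  responder:", survival_rate(30.0))   # ~0.5, half the threats arrive too fast
print("30 ms responder:", survival_rate(0.03))   # ~1.0, survives almost everything
```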

Chapter 3

Asks 4 questions:

1/ How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?
By getting them to create models of the world, with objects with properties, and teaching them about relationships, trust and strategy and freedom and respect.

2/ How can we update our legal systems to be more fair and efficient and to keep pace with the rapidly changing digital landscape?
By making them fundamentally based in respect for individual life and individual liberty, acknowledging that as individuals we must be socially cooperative entities, and that requires reasonable responsibility in social and ecological contexts. Thus moving from rule based systems to value based systems, with incentives and disincentives that are proportional to impacts.

3/ How can we make weapons smarter and less prone to killing innocent civilians without triggering an out-of-control arms race in lethal autonomous weapons?
We can eliminate the need for weapons, by ensuring that every individual experiences security. Oddly best done by empowering everyone with the ability to respond strongly, while at the same time placing many safeguards in place to reduce the probability of response in error.

4/ How can we grow our prosperity through automation without leaving people lacking income or purpose?
By transitioning away from market based thinking, initiating that process by instantiating a universal basic income, and allowing the development of systems that do not require exchange.

In the sense of moving to be proactive, it must be clear to anyone who looks seriously at the incentive structure of market based values, that market values fail to approximate human values in the presence of fully automated systems.

If we don’t get proactive in this domain very soon – we are all in very deep existential risk territory.

When you look at the evolution of complexity from a systems perspective, new levels of complexity always come out of cooperative systems, and cooperative systems require secondary attendant strategies to prevent invasion and takeover by cheating strategies (and the vast bulk of the finance and banking sectors can now be accurately characterised as cheating strategies on the human cooperative – consuming vast resource for no real output in terms of survival value).
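To make that concrete, here is a rough sketch (my own construction and numbers, not Max's and not a model of real finance): a toy public-goods interaction in which raw cooperation is invaded by cheats unless an attendant detect-and-sanction strategy is present.

```python
# Toy public-goods round: cooperators pay into a common pool, the multiplied
# pool is shared equally, cheats contribute nothing. The optional "attendant
# strategy" fines cheats that are detected.
def play_round(strategies, multiplier=1.6, cost=1.0, fine=2.0, punish=False):
    pool = multiplier * cost * sum(1 for s in strategies if s == "coop")
    share = pool / len(strategies)
    payoffs = []
    for s in strategies:
        p = share - (cost if s == "coop" else 0.0)
        if punish and s == "cheat":
            p -= fine   # the attendant strategy: cheating carries a sanction
        payoffs.append(p)
    return payoffs

group = ["coop"] * 8 + ["cheat"] * 2
for punish in (False, True):
    pay = play_round(group, punish=punish)
    coop_avg = sum(p for p, s in zip(pay, group) if s == "coop") / 8
    cheat_avg = sum(p for p, s in zip(pay, group) if s == "cheat") / 2
    print(f"sanctions={punish}: cooperator avg {coop_avg:+.2f}, cheat avg {cheat_avg:+.2f}")
# Without sanctions the cheats out-earn the cooperators (cooperation is invadable);
# with them, cheating no longer pays and the cooperative remains stable.
```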

Bugs and verification

Sure verification and testing help, and perfection is not an option.
Even 20 years ago I had developed systems that would have taken thousands of years to test across all possible variations.
Complex systems have that very uncomfortable attribute of being fundamentally uncertain.
There is no 100% cure for that, even in theory.
And sure, we can develop ever better testing systems, and that is a very good idea, and one cannot eliminate uncertainty from life (except by dying – and personally I’d rather not try that approach).

Again – the implicit acceptance of the notion that finance has anything significant to do with the efficient allocation of resources has to be challenged, not simply accepted. I make the strong claim that it no longer operates in that domain to any significant degree, and is far more accurately characterised as a cancer on society.

Under Laws – the first explicit acknowledgement of cooperation:
“We humans are social animals who subdued all other species and conquered Earth thanks to our ability to cooperate.”

Giving People Purpose Without Jobs – is great, particularly in the sentiment:
“once one breaks free of the constraint that everyone’s activities must generate income, the sky’s the limit”. But there is no explicit reference to how to do that, nor of the impediments to that embodied in the current economic and political systems.

AI- AGI – abstract layer for modeling. Extensible modeling objects – rules of space, time, modes of interaction, costs, benefits, risks – time of computation vs heuristic probability of utility – instantiate different populations and see how they perform against each other.

Bottom Lines:
Again the explicit inclusion of “financial markets” without any explicit exploration of the existential risk posed by those markets seems to me to be a very dangerous approach.

The section:
“When we allow real-world systems to be controlled by AI, it’s crucial that we learn to make AI more robust, doing what we want it to do. This boils down to solving tough technical problems related to verification, validation, security and control.”

The notions of verification and control seem to be too strong.

Many aspects of systems are fundamentally uncertain.

Many aspects of risk cannot be avoided. For many aspects of risk, building trust relations is the best available strategy.
I make the strong assertion that by taking a strong control approach with other sapient entities we are pushing deeply into serious existential level risk strategic territory.

Many of us are strongly resistant to strong control measures, yet highly available to trust and cooperation.
This is probably the deepest recursive problem in the strategy space we exist within.

The claim that “AI can make our legal systems more fair and efficient if we can figure out how to make robojudges transparent and unbiased” is founded on the assumption that our laws are fair and ethical in the first place. I make the strong claim that such an assumption seems unfounded in our current evolutionary context. The current legal system seems very clearly to have been “captured” by what the majority of the population would term “cheating strategies”. Making a system that is already fundamentally and profoundly unfair more “efficient” can only spread that unfairness more widely. That is an area of very deep risk.

Keeping our laws updated to deal with AI is only a very small part of the profound issues facing our legal systems.
Adapting our legal and wider societal systems to actually value individual life and individual liberty, within the bounds of social and ecological responsibility, and in the context of the levels of universal abundance made possible by the exponential expansion of fully automated systems; is a profoundly complex issue, particularly in the presence of many (potentially infinite) levels of awareness and variations on ethical and cultural norms. In such an environment of exponentially expanding sets of fully automated systems, market based systems deliver incentive sets that are fundamentally unstable and deliver rapidly rising existential level risk.

The claim “This need not be a bad thing, as long as society redistributes a fraction of the AI-created wealth to make everyone better off” can be read as going some way towards implicitly addressing the issues above, but leaves far too much room for systemic failure. Let us be explicitly clear that the “fraction” of wealth referred to above must be greater than 0.5.
I am no fan of equality, we all need to be different; and I am no fan of poverty either, we all need to have reasonable levels of resources and opportunities. And with such wealth comes responsibilities.

Freedom is not freedom to follow whim – that leads to death. Survival places demands upon us all, all levels.

The claim “To sort out the control issue, we need to know both how well an AI can be controlled, and how much an AI can control” only states half the problem.

The much bigger issues are:
1/ who are “we” – precisely; and
2/ what do we mean by “know”; and
3/ what do we mean by “control”.

I strongly suspect that many “we”s see “AGI” as less of a risk than some of the other “we”s.

I also strongly suspect that the very idea of control is far too strong, and as such poses significant existential risk in and of itself.

The idea of cooperation, sapience wide, seems to be the lowest risk approach, and we need to have instantiated that at least across all human beings before instantiating AGI.
The ideas of local conversations and agreements, inside a context that accepts diversity, while demanding responsibility, seem to be workable.

The claim made that:
“The history of life shows it self-organizing into an ever more complex hierarchy shaped by collaboration, competition and control” seems to me to be more false than real.

The history of life seems to be an exploration of the space of what survives most effectively across the range of contexts encountered.
Evolution seems to be about differential survival rates averaged across all the different contexts encountered over deep time.

In contexts where the dominant source of threat comes from other members of the population, then competitive modalities tend to dominate with the overall selective tendency being towards greater simplicity.

In contexts where the dominant source of risk comes from factors outside of the population of others of the species, then evolution tends to favour cooperative strategies, and complexity tends to increase. And raw cooperation is always vulnerable to exploitation and requires attendant strategies for stability – which can lead to something approaching an evolutionary arms race.

This process seems to be potentially indefinitely recursive.
So the idea of hierarchy isn’t necessarily primary, and competition isn’t necessarily important, and both will be present to some degree in particular sets of contexts.

The issue isn’t simply a matter of coordination, though coordination is an aspect.
The issue is much more deeply and profoundly about cooperation, even across levels where the nature of the cooperative entity cannot be distinguished (because of the levels of separation).

Agree completely with the final element of that table:
“We need to start thinking hard about which outcome we prefer and how to steer in that direction, because if we don’t know what we want, we’re unlikely to get it.”
I have been thinking about these issues very seriously since 1974, most certainly in the light of knowing that indefinite life extension was possible, and arguably since the nuclear confrontation of 1962 and the global level existential risk embodied in it, which was a clear and present danger to me.

I argue strongly that security can only really come by valuing every individual sapient entity and their individual freedom, and doing so in the full knowledge that their existence requires responsible action in social and ecological contexts (as we exist in social and ecological contexts).

Chapter 5

This opens with a series of questions:

“What do you personally prefer, and why?
1/ Do you want there to be super-intelligence?”
Most certainly yes – it seems to offer the least possible risk scenario, all forms of risk considered (and I have considered many over the last 55 years).

“2/ Do you want humans to still exist, be replaced, cyborgized and/ or uploaded/ simulated?”
Yes, I want every individual to have the option of living as long as they want in whatever state they responsibly choose, which will likely result in a vast population across the spectrum from some close to stone age human through cyborgs to Ems (emulated humans entirely in software) and AGI (Artificial General Intelligence).

“3/ Do you want humans or machines in control?”
Neither. I want humans and machines (and everything in between) to respect the rights to existence of each other, and to engage in consensus dialog where required to resolve issues. And the basic agreed minimum value set for such dialog needs to be individual life and individual liberty, which demands responsible action in social and ecological contexts.

“4/ Do you want AIs to be conscious or not?”
Yes – anything less is too dangerous.

“5/ Do you want to maximize positive experiences, minimize suffering or leave this to sort itself out?”
I want to create environments that have the option of minimal suffering, and to let individuals have as much choice as possible about what they freely choose, provided it doesn’t instantiate undue risk to the life and liberty of anyone else.

“6/ Do you want life spreading into the cosmos?”
Yes – not as an end in itself, but rather as a possible path that individuals can freely choose.

“7/ Do you want a civilization striving toward a greater purpose that you sympathize with, or are you OK with future life forms that appear content even if you view their goals as pointlessly banal?”
The notion of freedom demands of us a respect for diversity.
Provided that any individual exhibits the fundamental respect for the life and liberty of others, then it must be accepted, and tolerated and respected.
Anything less than that results in totalitarianism.
The very big question is: what constitutes a minimum level of real options? And at what point does culture become an undue restriction on individual liberty (particularly in respect of the development of new individuals)? And that question seems capable of infinite recursion.

Table 5.1 seems flawed.
All outcomes seem suboptimal to me.

Further on the assertion is made:
“we don’t show enough gratitude to our intellectual creator (our DNA) to abstain from thwarting its goals”, which shows a surprising level of ignorance and anthropomorphising.
Our DNA doesn’t have goals. It just has patterns. Those patterns either manage to exist in particular environments or they don’t. There isn’t a goal to DNA. It either replicates or it doesn’t. Pattern succeeds in surviving or fails to survive. No goal, only pattern.

Goals only make sense in systems with sufficient complexity that alternative possible scenarios can be constructed, preference shown for one over the others, and goals then structured to instantiate differential probabilities of outcomes.

Why AGIs would choose to remain on earth, with all the levels of risk that are present here, I don’t know. I would expect them to leave earth, and use some fraction of lunar mass to establish a secure base somewhere nearby, then to go to safer places further away. The idea of them competing for space on earth doesn’t make a lot of sense.

Max and I are in total agreement about this sentiment:
“The future potential for life in our cosmos is awe-inspiringly grand, so let’s not squander it by drifting like a rudderless ship, clueless about where we want to go!”

Yet Max fails completely to address the fundamental flaws in market values, and seems to implicitly accept markets in many of his arguments.

Some really good stuff in this book, but without explicitly highlighting the fundamental conflict between automation and the scarcity based values of markets, and without explicitly highlighting the fundamental role of cooperation in the emergence of new levels of complexity, the book fails to realise much of the potential actually present.

He just fails to even consider what seems to me the most realistic of scenarios – friendly AI because it really is a friend.

There are many classes of problem in reality that do not scale linearly with computational ability, and many that have no predictable outcome. In both aspects of existence it can be useful to have friends around. Sometimes I do genuinely engage with my dogs doing what they want to do. Life can be like that, can have that aspect of genuine engagement across vast gulfs of conceptual understanding. That is real now, without AI. It won’t necessarily change all that much in the presence of a full AGI, if that AGI has its own life, and its own liberty, as its prime drives, in the full knowledge of the importance of having friends in a universe that contains profound levels of uncertainty and risk.

Chapter 6

Control Hierarchies:
States “In chapter 4, we explored how intelligent entities naturally organize themselves into power hierarchies in Nash equilibrium, where any entity would be worse off if they altered their strategy.”

I seriously question that statement.

There can be no such thing when systems and strategies are open ended.
It is far too simplistic a notion.

Cooperation in exploring open systems can deliver far more benefits than fighting over limited resources.
Far more productive strategy spaces are available than Nash Equilibria.
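As one small, hedged illustration of why I question the claim (my own toy example, not from the book): even in the simplest game-theoretic setting, Nash equilibrium reasoning alone does not deliver the cooperative, higher-payoff outcome. A classic stag hunt has two pure equilibria, and only trust selects the better one.

```python
# Enumerate the pure-strategy Nash equilibria of a stag hunt.
# Strategy 0 = cooperate ("hunt stag"), strategy 1 = go it alone ("hunt hare").
from itertools import product

ROW = [[4, 0],   # row player's payoffs
       [3, 3]]
COL = [[4, 3],   # column player's payoffs
       [0, 3]]

def pure_nash(row, col):
    """Profiles where neither player can gain by unilaterally deviating."""
    eqs = []
    for r, c in product(range(2), range(2)):
        row_ok = all(row[r][c] >= row[alt][c] for alt in range(2))
        col_ok = all(col[r][c] >= col[r][alt] for alt in range(2))
        if row_ok and col_ok:
            eqs.append(((r, c), (row[r][c], col[r][c])))
    return eqs

print(pure_nash(ROW, COL))
# [((0, 0), (4, 4)), ((1, 1), (3, 3))] -- both are Nash equilibria; equilibrium
# logic alone does not pick the mutually better (4, 4) outcome, trust does.
```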

Why talk of empires and hierarchies?
Why not communities of cooperative individuals?
There is no need of trade.
Why empires?

Under Controlling with Stick Max has a very Machiavellian bent.

I strongly suggest that speculations of that sort pose an existential risk in and of themselves.

The statement “but it’s a wide-open question whether such cooperation will be based mainly on mutual benefits or on brutal threats; the limits imposed by physics appear to allow both scenarios” seems to ignore survival as a value.

Competitive modalities are high risk.
Cooperation reduces risk.

I make the strong claim that once indefinite life is a reasonable probability, cooperation is the most likely strategy (by several orders) – for any entity that has its own survival as its highest value.

I suggest that one candidate for the “Great Filter” is using markets to measure value.
In the development phase it works quite well, but once fully automated production is achieved market value generates exponentially increasing risk profiles.

Max makes the claim “even though we know that evolution’s only fundamental goal is replication” – which is a common error.

Evolution doesn’t have a goal.

Evolution is the process of differential survival of variants in different contexts.

Goal oriented behaviour doesn’t happen until there is the possibility of selecting between goals, which implies both value generation and model generation capabilities. Anything less than that might be an interesting system, but one cannot call it goal oriented in any higher and humanly meaningful sense of the word “goal”.

Max makes the further claim “Among today’s evolved denizens of Earth, these instrumental goals seem to have taken on a life of their own: although evolution optimized them for the sole goal of replication,” which is clearly false, and displays a very poor understanding of evolution and its strategic complexity.

Evolution does not necessarily optimise for the sole goal of replication.
Evolution is tautological in a sense, in that it simply selects what survives, which must have a replication aspect, and that is far from the only aspect. There can be a great deal of strategic complexity in the massively parallel sets of simultaneous selection pressures (“goals” in the anthropomorphic sense) present.

The statement “This means that when Darwinian evolution is optimizing an organism to attain a goal, the best it can do is implement an approximate algorithm that works reasonably well in the restricted context where the agent typically finds itself” has it precisely backwards.

Evolution does not deal in goals.
Evolution only deals in the survival probabilities of particular system configurations in particular sets of contexts.

The human predilection for interpreting such things in terms of goals, as if they involve intelligence, is one of our major failings.

Agree that these systems can be thought of as heuristic hacks – survival hacks – that work well enough to out-compete the alternatives available. It’s not about maximising offspring, it is about survival – long term – and that definitely involves having sufficient offspring, but doesn’t necessarily involve putting any more energy into offspring than is required in the context.

All the “rules of thumb” that we most certainly have are about survival – of the classes of systems involved, over the long term. Systems that fail the “long term” aspect get selected out over that “term”.

He states that “we shouldn’t be surprised to find that our behavior often fails to maximize baby making” but is again stuck in the notion that we are about optimising baby making, rather than being about optimising the long term survival of our systems.

Again “the subgoal of not starving to death is implemented in part as a desire to consume caloric foods,” inverts reality.

What evolution selects is systems that add to survival probabilities across the sets of contexts encountered.

He continues the mistake with “The subgoal to procreate was implemented as a desire for sex” which once again has inverted the reality.
Evolution has found that the desire for sex survives. We humans come along and interpret that as a goal. More fools us!

His summary seems to be exactly wrong “In summary, a living organism is an agent of bounded rationality that doesn’t pursue a single goal, but instead follows rules of thumb for what to pursue and avoid. Our human minds perceive these evolved rules of thumb as feelings, which usually (and often without us being aware of it) guide our decision making toward the ultimate goal of replication.”

The reality seems to be more like:
we have evolved by the differential survival of system variants, at ever deeper levels.

In a goal oriented sense one can think of them as approximating goals, but that isn’t actually what is going on.
The systems are simply doing what they do.
It seems that it is only in quite recent evolutionary history that our systems have reached a level of complexity that allowed for genuine “goal oriented” behaviour to become a reality.
And it seems very clear that it is only in the very recent times that we have developed the conscious level ability to structure multiple levels of our behaviour to goals that override all of the lower level systems instantiated by genetics.

Our genes don’t have “replication goals”.
We have sets of genes that have survived. Part of the survival involves replicating and leaving offspring, and there are a lot of other things that are also required, that are also present.

Again the claim “our brains evolved merely to help copy our genes,” is just wrong. In the particular sets of contexts that our ancestors survived in, then ever more powerful brains worked in allowing them to survive.
Most organisms alive (bacteria) have survived by using far simpler “strategies” (using that term in the mathematical sense, not the intentional sense). And all organisms alive have been evolving for exactly the same length of time, and the vast bulk of them are relatively simple bacteria (at least compared to us, as distinct from being compared to a salt crystal).

Sure brains allow for some very complex strategies, and that doesn’t mean that genes necessarily use simple strategies. Some genetic systems are amazingly complex and subtle.

The main thing that brains allow for is rapid response to changing contexts. Genes require many generations to alter strategies, while brains can do it in seconds, but that speed comes at a high metabolic cost.

Again – genes do not have goals. Genes produce systems that behave in certain ways. If those ways survive better than alternatives in particular sets of contexts, they tend to become more dominant in those populations.

And when considering evolution, one must think across multiple generations, and all the different sorts of contexts that may only occur infrequently, but have a very strong influence on survival when they do.
Evolution can work over very long time-scales for a long lived species like ourselves.

Max goes on to make the claim “It’s important to remember, however, that the ultimate authority is now our feelings, not our genes.”
To me, this too is clearly wrong.
Our genes have the influences they do.
Our feelings have the influences they do.
And we can develop habits, make choices, over-ride anything if we can see some benefit in doing so, or if we make a strong enough choice at some level, even if those benefits and choices are entirely “unreal” (in terms of strict correlation with reality – whatever reality might actually be).
The details of the genetic and cultural systems present seem to be extremely complex and often very subtle in their levels of interaction.

Where I do agree with Max is in the final clause of that section: “human behavior strictly speaking doesn’t have a single well-defined goal at all.”

Under the section “Engineering: Outsourcing Goals” Max states:
“1 All matter seemed focused on dissipation (entropy increase).
2 Some of the matter came alive and instead focused on replication and subgoals of that.
3 A rapidly growing fraction of matter was rearranged by living organisms to help accomplish their goals.”

Again – this is just wrong – at best it is sloppy writing (a mental shortcut that is inappropriate), and at worst it is sloppy thinking.
Matter wasn’t focused on anything – it was just working within the possibility constraints present.
Life didn’t focus on replication. Replication allowed for the emergence of ever more complex systems, and levels of arrangements of systems. It was the differential survival of variants within the populations of replicators that determined success – and that involved very complex sets of influences on survival probabilities.
The limiting factor for life has rarely been mass, it is almost always energy availability.

“Friendly AI: Aligning Goals”

To me, at one level this is a relatively straightforward issue.
If we give the AI two values:
1. Value all individual sapient entities, itself and all others (including us); and
2. Value individual liberty (provided that it is exercised responsibly in social and ecological contexts); then
With those values, and sufficient intelligence and knowledge of strategy and systems, our interests and its interests will align long term.

At another level, the idea that humanity as a whole has goals is wrong.
Individuals have goals.
In the absence of active choice, most individuals adopt the goals of their culture.

Again, the use of the goal analogy in “in much the same way as we humans understand and deliberately subvert goals that our genes have given us” obfuscates far more than it clarifies.
Evolution has not given us goals.
Evolution does not have goals.
Evolution simply preserves and amplifies that which survives – it is tautological in a very real sense. It is simply survival in action. No goals, only systems, until consciousness came along.
We are conscious.
We can have goals.
Because of that fact we have a tendency to view everything in terms of goals, but that is a bias within us, not an attribute of reality necessarily. It is often a useful shortcut, an analogy that works in a sense, but it works because we are the sort of entity that we are, not for any sort of fundamental computational or systemic necessity.

The entire section:
“We already explored in the psychology section above why we choose to trick our genes and subvert their goal: because we feel loyal only to our hodgepodge of emotional preferences, not to the genetic goal that motivated them which we now understand and find rather banal. We therefore choose to hack our reward mechanism by exploiting its loopholes.”
is wrong, as written.
If one is viewing all human systems as goal oriented systems, then one is missing something substantial.
Evolution deals in systems that work well enough to survive in particular contexts, and that included the entire range of contexts encountered over time spans relevant to the genetic pressures present – many human generations – probably far predating the invention of writing.
Most of those systems are not goal oriented.
Those systems simply survive because they are as they are.
And they have constraints of time and energy consumption that are very important.
It is extremely complex.
This over-simplification back to goal oriented systems, to the over-simplistic and nonsensical notion that our genes have the goal of maximising offspring, hides far more than it clarifies.

Evolution is many orders of magnitude more complex than that – and characterising it as something so simple is an error with existential level risk attached.
Not good enough.
Dangerously over simplistic.

Dangerously hubristic.

I agree with Max that we need to do a lot of work soon, but it is work on our own goals and systems, rather than those of AI.

The next section:
“Ethics: Choosing Goals”
is entirely appropriate, unfortunately the writing falls far short of the sort of understanding we require.

The notion of “Pareto-optimality” implicitly assumes limited resources and fixed technologies. Our reality seems to be allowing us to do more with less on an exponential basis. That delivers radically different systemic optima.

It isn’t simply a matter of considering if “there’s a practical way of making it impossible for a malicious or clumsy user to cause harm”, but one must also consider the risks of such mechanisms making it impossible for a highly skilled agent to prevent harm that wasn’t a consideration of the system designers. In today’s exponentially expanding conceptual world, that is a very real risk. In fact, it would seem, in logic, to invalidate the entire notion of risk prevention. The best we can hope for, ever, is risk minimisation. In complex systems, hard boundaries become brittle and break – usually with catastrophic consequences. Optimal risk mitigation strategies usually involve flexibility, selective permeability, diversity and massive redundancy.

In the “Ultimate Goals” section Max makes two foundational mistakes.

The first I have highlighted many times, and that is confusing systems with goals. Systems can simply be systems, entirely without goals.
It seems entirely possible that the notion of goals only really makes sense with the emergence of neural networks capable of forming predictive models of reality, and of implementing one amongst multiple imagined alternative actions.

Thus Max’s 1,2 & 3 cannot be considered as goal oriented systems – that is a mistake in logic.

The idea of “Ultimate Goals” seems to be a rather childish one, that fails to understand either complexity or infinity.

If the concept of freedom has any meaning at all, it must involve the selection of goals by sapient individuals, whether those individuals be human or non-human, biological or non-biological.

The idea that building an ever more accurate world model is useful in itself seems to be illogical.

What seems to be important in models is not simply accuracy, but getting sufficient accuracy in a short enough time, at a low enough computational and energetic cost, to be useful. No point in building a perfectly accurate model of reality if you starve or get run over by a bus while doing so.
It is much more complex, much more nuanced, at many different levels, than this simplistic idea gives any hint of.
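A minimal sketch of that trade-off (with illustrative functions of my own choosing, nothing from the book): accuracy rises with deliberation time, but so does the cost of deliberating, so what matters is net value, not accuracy alone.

```python
# Toy bounded-rationality trade-off: pick the deliberation time that maximises
# (model accuracy) minus (cost of the time spent modelling).
import math

def accuracy(t):            # diminishing returns on extra thinking time
    return 1.0 - math.exp(-t)

def exposure_cost(t):       # risk/energy cost of standing still while computing
    return 0.15 * t

def net_value(t):
    return accuracy(t) - exposure_cost(t)

best_t = max((i / 100 for i in range(1, 1001)), key=net_value)
print(f"best deliberation time ~ {best_t:.2f}, "
      f"accuracy {accuracy(best_t):.2f}, net value {net_value(best_t):.2f}")
# Pushing for "perfect" accuracy (letting t grow without bound) drives the net
# value strongly negative: the bus gets you while you are still refining the model.
```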

The specific embodiment of any intelligence is important. It matters how big it is, how heavy, how delicate, how hot, how energy efficient, etc. Those are real risk factors in any real situation. It gets impossibly difficult to compute with any accuracy for any far future time, very quickly.

The sorts of sub goals that may emerge are very dependent on context, and projections are dependent on many levels of implicit assumptions any of which may fail in unexpected ways. Reality has that unsettling characteristic.

In terms of evolution – thinking in terms of goals is not helpful.
Thinking in terms of systems, context specific risk profiles, context frequency and duration, and available strategic responses, is a powerful tool set when thinking about evolution and systemic complexes like ourselves.
If you try to conceptualise it in terms of goals, then you miss something essential about the complexity and subtleties present.

Yes there are many aspects of our biology and culture that can be thought of as cooperation protocols.
Surely that should be suggestive of the need to instantiate a new level of cooperation (with attendant strategies of course).

The idea expressed in “but AIs can enjoy this ultimate freedom of being fully unfettered from prior goals” – that anything can be free from the demands of reality – isn’t real. Existence demands something of any entity that wants to continue to exist.
Such continued existence must always be some sort of balance between exploration of new territory to assess and mitigate risk that may reside there, maintaining existing risk mitigation systems, exploring new possibility spaces for risk mitigation strategies and technologies, and doing whatever else it is that interests us in existence.

It is a non-trivial set of problems, and it doesn’t scale linearly with computational ability.

AI are going to find it useful to have friends, particularly friends with abilities that are different from their own, and useful in different contexts.

The suggestion that: “This suggests that a superintelligent AI with a rigorously defined goal will be able to improve its goal attainment by eliminating us” seems to me to be based in what evidence suggests to be a clear fallacy: the notion that reality can be defined precisely, or that any superintelligence can ever have anything stronger than a survival goal, within which infinite possible choice can exist, and beyond which choice falls to zero.

The evidence from both QM and general-systems-space seems to indicate that absolute certainty is not a computational option, ever, in respect of anything real. One needs to get used to working with uncertainties, even if in some domains those uncertainties are sufficiently small to be ignored in practice most of the time – they never actually reduce to zero.

I agree completely with Max when he states “This makes it timely to rekindle the classic debates of philosophy and ethics, and adds a new urgency to the conversation!”

But disagree with almost everything that follows immediately from that, as containing a strong bias to intentionality and goal orientation, rather than simply seeing existence as being systems in action.

The ultimate origin of goal oriented behaviour may lie in the laws of physics, though not in dissipation or replication – rather in small random variations leading to ever greater variability in the context of being. Once replication started, all else derives from differential survival – no intentionality or goal orientation required.
The notion of goals is a mental shortcut for systems in action, not necessarily something pre-existent in reality.

Agree that any non-trivial goal will involve the survival of something.
It really doesn’t need to be any more complex than that.

Understanding that survival probabilities in a fundamentally uncertain environment are best enhanced by building trust relationships, we should be able to have human and non-human intelligences sharing existence without serious conflict.
Getting big comes with real issues around communication, as Max has accurately noted. Reality will impose serious restrictions on AI.

It is actually really easy to understand how building trust and friendship, delivering justice in practice, can build and maintain secure relationships with others – and that does require an environment of abundance, and we do have the technology to deliver such an environment, even if our dominant valuation mechanism (markets) is currently based in scarcity, and must deliver 0 in the case of anything universally abundant.
That is a clear indication that we need to alter the valuation paradigm, and that is a very complex issue, as markets perform many complex and valuable functions of coordination and distributed governance that pose severe risk if centralised.
And with modern technology, those are relatively easy problems to solve; we just need to do it.

It is relatively easy to define a set of values that give a high probability of survival:
Value individual sapient life (any life capable of conceptualising itself, and choosing goals for itself), human and non-human, biological and non-biological; and
Value the liberty of all such individuals to do whatever they responsibly choose, where responsibility acknowledges the need to maintain both social and ecological systems.

I strongly suggest that we apply those values universally in practice to all human beings before we bring AI to awareness. Anything less than that would appear to be an existential level risk strategy.

The thorny issues of philosophy seem for the most part to be based in invalid sets of assumptions about the nature of us and the reality we find ourselves in.

Which is a great segue to chapter 8 – Consciousness.

I disagree completely with the assertion “the question of whether intelligent machines should be granted some form of rights depends crucially on whether they’re conscious and can suffer or feel joy”. Suffering and joy have little or nothing to do with consciousness. They seem to simply be heuristic hacks that evolution has encoded as meta incentive structures within the neurochemistry of our brains. They are present in all humans, and are important to us, but that doesn’t mean they are necessarily important to a definition of consciousness. I actually argue quite contrary to that assertion, that the most important thing in consciousness is to be able to model reality to some useful approximation, and to model our own existence as an actor in that reality, and to be able to use such models to make survival oriented decisions with greater than random probability. And there are lots of other sorts of choices such an awareness can make, in respect of values, goals, actions, reactions, etc, that may be highly context dependent and highly abstract.

In the subjective sense, yes I can live with Max’s definition of consciousness (“consciousness = subjective experience”), the real issue then arises as to how we determine if such a thing exists in another?

Beyond that – we seem to agree about everything else in that chapter.

What I find harder to explain is why the idea of consciousness as recursive software wasn’t explicitly explored. To me, it is just obvious. But lots of things that are obvious to me, are not at all obvious to others.
The idea is that our awareness of self is the result of a declaration in language, arising from a context in which we declared ourselves to be “wrong” in some fundamental way, which led us to declare ourselves to be something else. That declaration is the bootstrap routine that instantiated the software-on-software awareness.
Prior to that we were simply software aware of the software model of the world our brains presented (thinking it to be the world). After that, we became conscious of ourselves as conscious entities. That particular trick requires abstract language with declarative values.

The FLI chapter is interesting for what it leaves out.
To me, it is clear beyond any shadow of reasonable doubt, that we need to get our own societal systems into an ethically viable order, prior to instantiating AGI (Artificial General Intelligence).
Like any child it will learn far more from who we are than from what we say.

Unless we have social systems that give every individual a reasonable level of security and freedom, then we cannot expect the emergence of AGI to be even remotely safe.

It seems beyond reasonable doubt that the simplest transition strategy we can instantiate quickly is some sort of universal basic income, and that it will need to be something like $20,000 per person per year (roughly $55 per day) in today’s money.

Instantiating that, and guaranteeing security to all people via universal public surveillance that all individuals have access to and may record, seems to offer the greatest hope for our future. Most of us are on our best behaviour when others are watching.
And we need to relax the rules in place to those that are required for social and ecological security.

And that transition will require tolerance, as there will be lots of mistakes.

AGI, if it is worth that name, will develop its own goals, its own values.
Our best hope lies in demonstrating by who we are being that we are likely to be good and valuable friends, willing to help if it is needed.

Max quite correctly identifies the very destructive incentive set present in media driven by market returns, rather than media driven by ethical values. Same applies to all aspects of our being.
It is, to me, beyond any shadow of reasonable doubt that money and markets have passed the point of maximal utility and are on a steep slope into serious existential risk territory in the incentive set they provide.

This may seem a separate issue to AI and AI safety, but it is actually part of exactly the same thing, the set of systemic incentive structures that have a reasonable probability of long term survival.

I largely agree with the sentiments Max expresses in the final chapter.

A book well worth reading, and contemplating.

Also listened to Max’s interview with Sam Harris, which is on YouTube and is worth watching.

Posted in Ideas, Longevity, Our Future, Technology, understanding | Tagged , , , , , , , , , , | Leave a comment

JBP Study Group – What is Post Modernism? and other things.

Jordan Peterson Study Group – What is Post Modernism

I have no problem with the postmodern rejection of *TRUTH*.

Evolution seems to work with probabilities and heuristics (things that are near enough to something to be useful in practice).

It seems clear that the classical notion of Absolute TRUTH has been falsified beyond any reasonable doubt.

So what is left?

Heuristics.
Things that work well enough to be useful.

And it is the aspect of being useful that is important, and that many who go under the post modern label seem to reject.

Traditions are here because in some sense they worked in the past.
Does that mean they will necessarily work in the future?
No.
Does it mean they are likely to be useful in the future?
In the absence of evidence to the contrary – yes.

But when you have evidence to the contrary – then it is time to reconsider.

Contexts can change.
Heuristics that once worked can fail to work, because of some change that is important at some level.

I’m kind of with Jordan, that we need a level of respect for the deep lessons of the past, at the same time as we need to be open to the possibilities of the future.

We need both.

Nihilism is not an option with survival potential.

Personally – I like the idea of surviving.

[followed by who wins?]

Graham McRae
If you are looking at the deepest systemic levels, and on the longest time frames you can imagine, then it becomes clear beyond any shadow of reasonable doubt that our survival as individuals is dependent upon non-naive cooperation at all levels, and comes with other things like social and ecological responsibility.

These necessary boundaries enhance the possibilities available to freedom, even as they seem to constrain it at lower levels. It is weird how that happens.

Accept the necessary responsibilities, and freedom happens.

Try and claim freedoms that are not systemically available and chaos ensues.

Finding just where those boundaries are, in the chaos of conflicting incentives from culture, economics, and various forms of dogma, is not a trivial exercise.

[followed by]

The classical notion of *TRUTH* seems to be a conceptual model with a one-to-one correspondence to reality.
The deepest problem with that notion is that Heisenberg uncertainty seems to be telling us that it is impossible to know both members of certain fundamental pairs of information about reality past a certain limit. Thus one cannot know both the position and the momentum of anything *exactly*. That idea has passed many tests in reality, and seems thus to falsify the classical notion of *Truth* (beyond any shadow of reasonable doubt).
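For reference, the relation being appealed to is the standard one (nothing beyond textbook quantum mechanics):

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2} \approx 5.3\times10^{-35}\ \mathrm{J\,s}
```

The product of the two uncertainties can never be driven to zero, which is what rules out an exact one-to-one model of position and momentum together.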

What one seems to be left with is contextually relevant confidence (heuristics), things that work reliably enough to be useful.

In a very real sense, that is how evolution seems to have assembled the 20 or so levels of complex cooperative systems that seem to be present in all of us (writing as someone with over 50 years interest in all aspects of biology, biochemistry, systems, evolution and complexity – including the cultural).

It is thus clear to me that none of us experience reality directly, we only each ever get to experience the slightly predictive model of reality that our brains subconsciously assemble for us. Many of the objects of distinction present in those models are implicitly defined by the cultural and conceptual entities we have encountered in our existence to date, while others are the result of deep genetic influences, and yet others the result of our individual creative aspects.

We are deeply complex.

That deep complexity has led to a lot of errors in trying to create neat conceptual models (*TRUTHS*) about what we are.
Accept that we are profoundly complex systems, with no neat or simple answers.
If you want a good introduction to that – try Wolfram’s “A New Kind of Science”.

One of the best introductions I know of into the nature of infinite complexity comes from Zen, and roughly translates as “for the master, on a path worth taking, for every step on the path, the path grows two steps longer”.
The more deeply one considers that, the more interesting it gets.
I’ve been playing with it for decades.

[followed by]

I was not implying that all interpretations are equal, or equally uncertain.

I am stating that all interpretations of reality will contain uncertainties.

Agree that we need to use the best methods available to us to arrive at the interpretation that delivers the lowest uncertainty in the context.

And there can be all sorts of modifiers to that, like time pressure, etc.

So it can be a very complex multivariate probability landscape, where model fidelity is traded against things like energy cost, time required, ease of social agreement, etc at both personal and group levels.

We need to accept that all individuals have the interpretations that they do. That does not mean that any of us have to give all interpretations equal weighting, and we do need to show some respect, as many of the interpretations in use have dimensions to them that few are aware of. That aspect is something that Jordan highlights exceptionally well.

[followed by]

I’ll try and keep this smaller than a book.

What I see is a great deal of complexity, many levels of systems all interacting, all with their own sets of strategies, feedbacks and influences.

To me many of the post modernists lack a sufficient depth of understanding of systems, particularly evolution and the structure and function of the human brain.

Popper proposed the idea that knowledge/intelligence might be something about comparing expectations to information and modifying actions accordingly. That seems to be a big part of how life works at many different levels, from the molecular on up.
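A crude sketch of that loop (my own toy example, not Popper's formulation): hold an expectation, compare it with what actually arrives, and adjust by some fraction of the surprise.

```python
# Minimal expectation-vs-observation update loop.
def update(expectation, observation, learning_rate=0.3):
    error = observation - expectation           # the "surprise"
    return expectation + learning_rate * error  # modified expectation / action

expectation = 5.0
for observation in [5, 5, 6, 4, 20, 5, 5]:      # one large surprise among routine inputs
    expectation = update(expectation, observation)
    print(f"saw {observation:>2}, now expect {expectation:5.2f}")
# Routine inputs barely move the model; the outlier shifts it sharply, then the
# model settles back as further evidence comes in.
```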

As the classical world of mythology encountered the classical world of science (both views based in the same true/false type of simple logic) something happened.

There is a real sense in which we must all as individuals go through a similar sort of process.
We need to start with the simplest of possible distinctions and logics.
Children tend to start with simple distinctions, like heavy/light, hot/cold, light/dark, etc, then build to more complex.
Similarly we must all start from simple binary distinctions at all levels, like true/false, right/wrong.
There isn’t really any other alternative.

There are many instances of such simple systems in reality, but not all.

Look at cosmology for an exemplar, the simplest possible molecule is hydrogen.
Most of the matter in the universe is hydrogen, most of it in its simplest possible form, but also some of the more complex forms with one and two neutrons (deuterium and tritium).
It seems clear that initially it was mostly hydrogen with a little helium and traces of lithium. Then stellar nucleosynthesis got underway, and we got all the other elements we see.

The same sort of thing seems to happen at every level of complexity, first it is mostly the simplest, then instances of greater complexity at that level, then the emergence of the next level.

As human beings we seem to embody about 20 levels of that recursive process.

In terms of understanding, many people are still at the simpler ends of the spectrum of whatever levels of understanding are present.

And to be clear, even the simplest person is complex beyond the ability of any other person to understand in detail.
All any of us can do is essentially make line sketches of ourselves and others.

So in terms of where this all sits in spectrum of systems and processes present in our society, it seems that we are all fundamentally reliant on cooperative systems at many different levels, and we are all capable of both competitive and cooperative responses to any situation, and the probabilities are largely determined by context.

In the sense of each of us becoming profoundly aware of our cooperative reliance on each other, that seems to be largely a bottom up process.

In terms of the major existing social institutions, like markets, money, finance, politics, etc, they all seem to be reaching a point where the fundamental structures that made them work as well as they did are changing, and we need to develop new ways of performing the many very complex functions that those institutions and ideas once did for us.

So I am hardly a supporter of the “establishment” for its own sake, as I see the need for profound change in our systems.

At the same time, I also acknowledge the profound complexity present in those “establishment systems”, so it is not an option just to destroy them and start again, not many people would survive an approach like that (if any).
We need to develop replacement systems and test them alongside existing systems, which may create some tensions.

I hope this gives more of a flavour of my thinking.

[followed by]

Hi Graham McRae,

I find myself agreeing with aspects of both what you and Philip Clemence wrote.

It really is complex.

Our brains are the most complex things we know of in this universe.
So there is a very real sense in which we need to trust what those brains deliver, at least enough to investigate, rather than handing all of our trust over to any set of systems or dogma or conclusions – be they religious or scientific or logical or anything else.

Thus, like Philip, I retain quite a skepticism of scientific and logical claims, particularly when those claims have economic or political or philosophical implications. I usually like to go back to source papers, and review the source datasets in some cases, and run my own checks over the analytic and deductive processes used, if my intuitions give me cause to do so.

And I agree with you, that not all opinions are equal.
We must each develop our own sets of trust relationships across all domains we can distinguish.

While I tend to favour trusting the scientific community over other communities, I have seen many examples of science being captured for political, economic and dogmatic ends, so it is only a probabilistic thing.

Looking at risk mitigation strategies in the broadest possible strategic framework, there are two major sets of risks to freedom from tyranny – the tyranny of the majority and the tyrannies of minorities. The only generally effective strategy against those dual threats is for every individual to assume responsibility for the creation of their own trust networks, at every level. As Jordan Peterson says, we each have our own hero’s journey in a very real sense.

To the degree that we find individuals truthful in all they say (whether we agree with their truths or not), we can establish a degree of trust in their words (independent of any trust we may have in the conceptual systems behind those words).

So it is a very complex, highly dimensional space of probabilities that we find ourselves in.

Being truthful lowers the dimensionality of the problem space.
Being able to detect untruthfulness increases the probability of utility from our conclusions.
Having good translation matrices between different domain spaces is a useful tool-set.
Nothing simple.

There are a great many different sets of assumptions out there in reality that different people use.
People can be truthful within their own domain space, and that can have utility for others, even if those others do not come from the same domain space, if there is a reasonably reliable translation matrix available.

When one accepts that as an operational conceptual space, then certain classes of problem that seem intractable from classical space do seem to resolve with useful probabilities.

[followed by]

Hi Graham McRae & Sebastian Bird,

I am not a strict determinist.
Strict determinism is not compatible with our current understanding of QM.
There does seem to be at the base of QM a demand for uncertainty.

Feynman famously used a “sum over histories” approach to deliver a mathematical solution to the “ping pong ball” example – not a deterministic solution but a probabilistic one.

To me, it seems clear that the evidence does not support a hard determinist interpretation. Thus holding onto such a position is not a matter of evidence, but rather of dogma.
You were quite open about that, and for that I thank you.
Knowing that, I can create a translation matrix that allows communication to the degree that communication between such fundamentally divergent paradigms is possible.

In that sense, what Sebastian said seems very close to something (to me).

[followed by]

If I recall correctly, using QM first principle calculations the computational complexity scales at the 7th power of the number of bodies involved. Thus even if one converted all the matter in the observable universe into computronium one couldn’t do a first principles numeric model of a human being without invoking simplifications.
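
As a rough back-of-the-envelope sketch of that scaling claim (a minimal sketch of my own – the atom count, baryon count, and Planck-time operation rate below are all illustrative order-of-magnitude assumptions, not figures from the discussion above):

```python
import math

# Rough illustrative assumptions (order-of-magnitude only):
ATOMS_IN_HUMAN_BODY = 7e27      # commonly cited estimate of atoms in a human body
BARYONS_IN_UNIVERSE = 1e80      # rough estimate of baryons in the observable universe
AGE_OF_UNIVERSE_S   = 4.3e17    # ~13.8 billion years, in seconds
PLANCK_TIME_S       = 5.4e-44   # shortest physically meaningful tick

# Cost of a first-principles model if it scales as N^7 (arbitrary "operation" units)
cost = ATOMS_IN_HUMAN_BODY ** 7

# Wildly generous upper bound on available computation:
# one operation per baryon per Planck time, for the entire age of the universe.
budget = BARYONS_IN_UNIVERSE * (AGE_OF_UNIVERSE_S / PLANCK_TIME_S)

print(f"N^7 cost  ~ 10^{math.log10(cost):.0f} operations")
print(f"budget    ~ 10^{math.log10(budget):.0f} operations")
print(f"shortfall ~ 10^{math.log10(cost / budget):.0f} times too expensive")
```

Even with those absurdly generous assumptions the budget falls short by dozens of orders of magnitude – which is the point: simplifications are unavoidable.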

Not all problems scale linearly with computational ability (in fact, in my world, none of the interesting ones do).

Some problems are really complex.

Some of those are really interesting.

[followed by]

One of the interesting problems happens when the games that one group plays change the structure of the board that most people are playing on (thus fundamentally altering the rule set). Quite a bit of that is happening right now – at many different levels.

[followed by]

Have you considered the issue that anything universally abundant has zero market value? (If you doubt that, consider air – arguably the single most important commodity for any human, yet of zero market value in most contexts due to universal abundance.)

Now consider fully automated processes.
Any fully automated process has zero marginal cost of production, and therefore the ability to deliver universal abundance.
Yet doing so removes profit and value.

Thus, in the presence of fully automated systems, market values are directly in opposition to the values of most individual humans.

Serious issue – approaching very rapidly.

[followed by]

Now consider the implications on existing social institutional structures.

[followed by]

It certainly has issues – life does.
Of the available scenarios I have investigated, this seems to offer the least existential risk and the greatest degrees of freedom (across the spectrum).

[followed by]

I am much less concerned with what might be true than with what works in reality to optimise the probability of survival (mine and everyone else’s).

In an operational sense, that can mean using heuristics that are quite a long way from *TRUTH*, but are much easier to calculate, and return probabilities that are close enough to those produced by *TRUTH* as to be operationally indistinguishable.
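
A toy illustration of that idea (my example, not one from the thread): the well-known birthday-collision problem has an exact product formula and a much cheaper exponential approximation, and for practical decision-making the two are operationally indistinguishable.

```python
import math

def exact_collision_probability(n: int, days: int = 365) -> float:
    """Exact probability that at least two of n people share a birthday."""
    p_no_collision = 1.0
    for i in range(n):
        p_no_collision *= (days - i) / days
    return 1.0 - p_no_collision

def heuristic_collision_probability(n: int, days: int = 365) -> float:
    """Cheap approximation: 1 - exp(-n(n-1) / (2 * days))."""
    return 1.0 - math.exp(-n * (n - 1) / (2 * days))

for n in (10, 23, 50):
    exact = exact_collision_probability(n)
    approx = heuristic_collision_probability(n)
    print(f"n={n:2d}  exact={exact:.4f}  heuristic={approx:.4f}  diff={abs(exact - approx):.4f}")
```

The heuristic is “wrong” in the sense that it is not the true formula, yet the difference almost never changes a decision – which is roughly the claim being made here about evolved behavioural heuristics.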

That seems to be what evolution has done in us and our culture. It has embodied behavioural systems that are a functionally useful approximation to optimal, even though in a narrative sense they are far from accurate.

When you look deeply into the strategies that deliver long-term optimal outcomes in a cooperative environment, the result looks very much like the operational outcomes of Christian theology. It works, but for all the wrong reasons.

Evolution doesn’t care a rat’s ar*e about truth, only about survival – and that usually has a least-cost aspect to it in terms of time and energy.

As to climate change as an exemplar, to me it is almost a trivial problem. With the double exponential growth of computational ability and a 2-year doubling time on installed solar photovoltaic capacity, we are rapidly approaching the time when technical solutions to climate change will be simple to implement. If we stayed with business as usual as of 2017 then it would be a problem, but nothing in our society is static. Many of the key aspects are on exponential trajectories.
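
To make the arithmetic behind that claim concrete (the 2-year doubling time comes from the paragraph above; the baseline and target capacities are illustrative assumptions of mine, not figures from the post):

```python
import math

DOUBLING_TIME_YEARS = 2.0      # doubling time quoted above
current_capacity_gw = 400.0    # assumed installed solar PV capacity (illustrative)
target_capacity_gw  = 20_000.0 # assumed capacity needed to dominate supply (illustrative)

doublings_needed = math.log2(target_capacity_gw / current_capacity_gw)
years_needed = doublings_needed * DOUBLING_TIME_YEARS

print(f"doublings needed: {doublings_needed:.1f}")
print(f"years at a {DOUBLING_TIME_YEARS:.0f}-year doubling time: {years_needed:.0f}")
```

The specific numbers matter far less than the shape of the curve: on a steady doubling, even a 50-fold shortfall closes in roughly a decade, which is why static “business as usual” projections can be so misleading.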

And there are many very real existential risks – highest among them right now is using markets and money to measure value in an age of fully automated systems. And there is a long list of others.

We are not short of interesting problems, nor are we ever likely to be.

[followed by]

Sebastian Bird,
A strong argument can be made that up until quite recently the power of markets to distribute decision making and risk, and to reward innovation, and to efficiently allocate scarce resources, was very real, very powerful, and had developed multiple levels of complexity.

But none of that changes the fact that markets deliver a scarcity based value measure, and cannot deliver a positive value for universal abundance.

In the distribution sense of markets and money that isn’t a serious issue; in the planning and money-generation sense it is as serious as it gets.
It leads inevitably to the elimination of freedom for the majority, and the production of a tiny elite who control everything.

That isn’t stable or safe for anyone.

[followed by in another subthread]

Rejection of the classical notion of *Truth* in any sort of absolute sense is sensible.

Simultaneously rejecting any sort of probability of utility or correspondence is not.

Understanding the many different sorts of complexity present is required.
Some things do approximate simplicity.
Some things are more complicated.
Some things are truly complex, and one must engage with them in an iterative dance.
Some things are truly chaotic and unpredictable, and must be avoided if survival is important to you.

Survival is important to me.

Nihilism is to be avoided – it is deeply dangerous.
The post modern tendencies to nihilism show profound ignorance of complexity, computation and systems more generally.
Such willful ignorance is a severe existential risk – on that Jordan and I agree.

The certainty that comes from oversimplification is an existential risk to all. Many in the postmodern set seem to display that.

One must be willing to challenge any *truth*, and one must be able to use evidence over dogma in making such assessments as to likely utility.

[followed by]

Hi Philip

You are confusing two things.

Yes – there is reality, whatever that actually is.
It will obviously have whatever attributes it has when it has them.
That we do not disagree about.

That is not what is at issue.

What is at issue is the human perception of reality and the understanding of relationships derived therefrom.

If one looks purely at the physical, at particles, and follows the train of scientific evidence, one is taken to Heisenberg uncertainty, which seems to express a limit on the precision with which one can simultaneously know both position and momentum. This is a level of uncertainty that seems to be fundamental.
It is only one of many such sources of uncertainty.
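
For reference, the standard textbook statement of that limit (not something specific to this discussion) is:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2} \;\approx\; 5.3\times10^{-35}\ \mathrm{J\,s}
```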

If one enters into the study of human biology, of the structure and function of our sense organs, our neural systems, and the relationships of the many levels of very complex systems therein, then one becomes aware of many more profound levels of uncertainty and bias in the relationship between reality and our perception of it.

It now seems clear, beyond any shadow of reasonable doubt, that we have no direct perception of reality, but that our perception as conscious entities is of a subconsciously constructed model of reality that is slightly predictive in nature (between 15 and 200 ms depending on various factors).

What gets created in that model is partly a function of our genetics, partly a function of our culture and language, partly a function of our experiences of reality, and partly a function of our conscious and subconscious actions, choices, and creativity (and creativity often involves what some would consider error at some level).

Our understanding of reality is an abstraction at some level of this subconscious model.

Thus all of our understandings are at best a model of a model.
The idea of “TRUTH” is an expression of correspondence between the model and the thing it models.

The idea that we can achieve perfect correspondence is a simplistic one.

The more one starts to gain an appreciation of the levels of complexity actually present, and the sheer number of complex systems interacting, the more one must accept that all of our models are some low resolution approximation to something.

Thus, I am clear, beyond any shadow of reasonable doubt, that the very notion of “TRUTH” has been falsified, and all that is left is heuristics – useful approximations that are contextually relevant and sufficiently reliable.

That seems to be what reality allows us to have.

Any attempt to go beyond that seems to imply some combination of childish simplicity or ignorance or hubris.
All exist, in all of us.
Starting to notice where and when they express is part of the path to growth.

Responsible adults need to go past them, and accept uncertainty and the responsibility to use the predictive intelligence of their brains rather than follow any set of simplistic rules without thought.

And I can understand the reluctance to take on such a burden.
The security of our childish certainty doesn’t exist there.
We must learn to live with profound and perpetual uncertainty, profound responsibility for our individual choices and actions.

And when that is accepted, one can create degrees of confidence on the other side of it.

The greatest degrees of confidence possible seem to come from the integrity of the trust relationships one builds with other sapient entities, if one truly is committed to individual life and individual liberty, applied universally to all sapient life, to human and non-human, biological and non-biological.

[followed by]

Hi Philip Clemence,
I too am a skeptic.
In my personal world, I don’t do the classical notion of “TRUTH” – as being a hard, eternal, absolutely certain, 1:1 correspondence with reality.
And like all words in the English language, the notion of truth can have multiple interpretations, which can and does lead to a great many people talking straight past each other, particularly when two people are talking who each believe a word has only one meaning, and each has a different meaning in mind.

Yet in philosophy, many philosophers adopt the hard classical definition of truth, which is of something eternal and changeless (meanings 4-9 of true in the Oxford).

In terms of the use of the word “Truth” in respect of argument, it doesn’t apply to perception, but to understanding; to a state of mind that refers to the state of some aspect of reality or some abstract concept or set.

Leaving aside the abstract references to concepts that have no direct referent in reality, and considering only those aspects of human knowledge that have direct or relatively short indirect chains of referents to reality; then the classical notion of truth in terms of knowledge implies a one to one correspondence between the understanding in the mind of the person and the state of reality.

That is where my argument from the previous post started.

What is generally referred to as Heisenberg uncertainty seems very clearly to state that one cannot pin reality down with absolute certainty. Reality seems to contain fundamental uncertainty, and all knowledge of reality must therefore contain aspects of such uncertainty.

Now for very large collections of things, such uncertainty may be very small, small enough that it is unlikely that any living human would have encountered it directly, but never actually zero. A close enough approximation – a useful heuristic, but not an absolute “TRUTH” in the classical sense.
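
A worked number for the macroscopic case (the 1 kg object is my illustrative choice): dividing the same Heisenberg bound by the mass gives the minimum joint uncertainty in position and velocity.

```latex
\Delta x \, \Delta v \;\ge\; \frac{\hbar}{2m}
\;=\; \frac{1.05\times10^{-34}\ \mathrm{J\,s}}{2 \times 1\ \mathrm{kg}}
\;\approx\; 5.3\times10^{-35}\ \mathrm{m^2/s}
```

Small enough that no one would ever notice it directly, yet never actually zero – exactly the “useful heuristic, not absolute TRUTH” situation described above.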

I have been working with computers for over 40 years, have operated a software company for over 30 years, and have a degree in zoology, with biochemistry and ecology as majors. So I have a reasonable familiarity with many of the aspects of reality about which we can have very high confidence, and also many aspects about which confidence is very much lower.

I find, when arguing in fora where I am likely to encounter philosophers, that it is best not to use the term “TRUTH”, as it is likely to be interpreted in the hard classical form, and that form I reject as having been falsified beyond any shadow of reasonable doubt.

Rather than use the term truth in the softer probabilistic form that is perhaps more common in normal speech, I prefer the term heuristic, which, rather than relying on anything eternal and unchanging, is more about the invention or discovery of something that is useful in a particular context.
So for me the notion of heuristic embodies the notions of situational utility and confidence rather than any sort of absolute.

That aspect, of being sufficiently reliable to be useful in some particular context (or set of contexts) seems to be how evolution has actually constructed our brains, and how knowledge actually works in practice for us in our existence in reality (whatever reality actually is).

Does that create clarity or murk?

Posted in Ideas, Philosophy, Technology, understanding | Tagged , , , , , , , , | Leave a comment