Max Tegmark’s Book – Life 3.0 – bought it this morning – Updated 21 Sept 2017

Will AI enable the third stage of life?

In his new book Life 3.0: Being Human in the Age of Artificial Intelligence, MIT physicist and AI researcher Max Tegmark explores the future of technology, life, and intelligence.

In summary, we can divide the development of life into three stages, distinguished by life’s ability to design itself:
• Life 1.0 (biological stage): evolves its hardware and software
• Life 2.0 (cultural stage): evolves its hardware, designs much of its software
• Life 3.0 (technological stage): designs its hardware and software

[Review starts about 2 screens down]

I agree with Max, and with the idea of fuzzy boundaries across multiple aspects of very complex domain spaces.

One could say that Life 2.0 got seriously under way with the development of abstract language and the design of stories, which probably occurred somewhere between 8,000 and 5,000 years ago, possibly a bit earlier.

And it could be argued that the invention of writing was the start of evolving our hardware as an adjunct to the transmission of information. It was a very slow start: thousands of years passed before the invention of the printing press, then hundreds more before the telegraph, and now we have digital storage and transmission as well as computation.

And all three forms continue in all domains, so it is a very complex and increasingly high-dimensional information landscape, particularly when one factors in the many aspects of strategy and risk mitigation, the influence of those over the deep time of genetic and cultural evolution on our current dominant cultural, technological, behavioural and conceptual phenotypes, and how those instantiate in and influence each of us individually.

In the deepest sense of risk mitigation, and acknowledging all the many real risks involved in AGI, it still seems that AGI is the most effective risk mitigation strategy available, when all forms of risk are factored in. And that statement is based on the assumption that we very quickly recognise the risks posed by reliance on markets and money, and the risks posed by the twin tyrannies (of the majority and the minority) across all domains; and rapidly instantiate global level cooperative strategies that deliver the reasonable needs of survival and freedom to every individual human – no exceptions.

Without that sort of demonstration in reality of our respect for sapient life, we are all at serious risk.

[I bought the book and have read it – this critique was completed 21 Sept 2017]

I like Max’s style, and respect and align with many aspects of his thinking, though there are some significant failings and omissions. It is one of the few books I have paid money for in recent times, so it is something I value in many different senses: money, time, intellectual breadth, etc.

As a book review:

The introduction is interesting, but fails to account for the effect of distributed manufacturing, and the ability of such independence from any sort of trading system to dismantle the very concept of markets and money.

While I have some substantial criticisms of some of the ideas, I am very much aligned with the general trajectory of Max’s thinking and work. Well Done!!!

Chapter 1

Max does a reasonable job of defining life as information, but I would take it explicitly deeper.
I would say that life is not simply the ability to replicate, but the ability to do so with variation.
The error rate in replication is critical: too high or too low and nothing much happens.
Similarly if one goes back to the systems that allow matter itself, the level of quantum uncertainty (the error rate if you will) is a critical factor in the emergence of complexity and life.
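
A minimal toy sketch of that point (my own illustration, not from the book; the target pattern, population size and rates are arbitrary choices of mine):

```python
import random

TARGET = [1] * 30                 # an arbitrary "fit" pattern for this toy landscape
POP, GENS = 100, 200

def fitness(genome):
    # fraction of bits matching the target pattern
    return sum(g == t for g, t in zip(genome, TARGET)) / len(TARGET)

def evolve(error_rate):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
    for _ in range(GENS):
        # parents chosen in proportion to fitness, then copied with errors
        parents = random.choices(pop, weights=[fitness(g) + 1e-9 for g in pop], k=POP)
        pop = [[1 - b if random.random() < error_rate else b for b in p] for p in parents]
    return max(fitness(g) for g in pop)

for rate in (0.0, 0.01, 0.5):
    print(f"copy error rate {rate}: best fitness after {GENS} generations = {evolve(rate):.2f}")
```

With no copy errors nothing new ever appears, and at a 50% error rate nothing is retained; only intermediate rates let the population accumulate anything.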

Max skips the role of cooperation in the emergence of complexity. That is a serious failing.

I would argue that Max also glosses over the role of evolution in the emergence of our operating systems, and the various levels of incentive to action and incentive to willfulness contained therein, and overestimates the degree of choice involved in the actual actions of most people.

And yes, I agree with him that a distinction of design does emerge, and that it is gradual and shared with older systems at the same level (recursing to as many levels as are actually instantiated in any specific individual).

There do in fact appear to be non-trivial degrees of complexity present in the interplay between evolution and choice even at the highest levels of awareness.

Certainly there is a much clearer degree of separation in the degree to which operating algorithms can be instantiated and modified within the lifetime of a single instance of an individual entity. And even in that aspect there does seem to exist considerable fuzziness.

Thus the substantive difference is not necessarily to design its software but to instantiate new and different and potentially novel software that is more appropriate to survival in current contexts.

The claim “Your synapses store all your knowledge and skills” seems rather too strong, though certainly the synapses can store about as much information as Max claims.

Again, the claim “enabling us to communicate through sophisticated spoken language, we ensured that the most useful information stored in one person’s brain could get copied to other brains” seems too strong, though it is certainly true in some instances.

In other instances it seems clear that even decades is too short a time to accurately write out all the new and useful information that a really active modern brain can instantiate.
Thus the amount of information within some brains is always likely to vastly exceed the amount that is actually communicated to others.

Max goes on to make a series of claims that I find outrageously false “None can live for a million years, memorize all of Wikipedia, understand all known science or enjoy spaceflight without a spacecraft. None can transform our largely lifeless cosmos into a diverse biosphere that will flourish for billions or trillions of years,…”. Those have in fact been my clearly stated intentions for over 42 years – since October 1974, since the logic of the possibility of indefinite life extension instantiated in my brain beyond all reasonable doubt.

I may not yet have dotted all i’s or crossed all t’s in the process, and it is substantially closer than it was 42 years ago, and I may not make it, and I might just manage to be a part of the process that does instantiate those things and does stick around long enough to see plate tectonics transform the face of our planet, and conscious sapient life (human and nonhuman, biological and non-biological) spread across all accessible galaxies.

Again – I claim Max misses too much when he defines life 2.0 as designing its software. Life 2.0 is about the ability to instantiate new software independent of biological evolution of the bodies that instantiate it. The impact of evolution in the depths of that process must not be underestimated. To claim that even a substantial portion of it has been designed seems to me to be hubristic delusion.

And I do acknowledge where Max is going with his main theme, and I make the strong claim that he vastly underestimates the complexity present and the continuing importance of evolution at ever more abstract levels. And understanding evolution in this sense means to be able to see the fundamental role of cooperation in the emergence of all new levels of complexity, and the need for attendant stabilising strategies to detect and remove strategies that cheat on the cooperative.

I agree that AGI is close, no argument there.
I disagree that many AI researchers have any real understanding of the degrees of complexity of the relationships between the systems instantiated by evolution that actually allow us to survive.

The depth of the influence of deeply evolved and interconnected systems on our survival probabilities is a matter of substantial disagreement across communities of understanding.

Certainly we have instantiated some amazing technologies, and many would contend that we have substantially added to the existential level risk in doing so.

I am not in any sense a fan of returning to any sort of mythic simplicity.

I am very much a fan of understanding the degrees of complexity present, and the many ways that evolution has found to reduce risk in practice, and of instantiating as many independent such systems as we can to mitigate existential level risk at the same time as we value individual life and individual liberty.

It is a very complex and highly dimensional probability space that we find ourselves in.

There are not, nor can there be, any sort of hard deterministic solutions to the problems (the logic of that is clear beyond any shadow of reasonable doubt).
And there is a great deal we can do to reduce that existential risk, and we need to start doing it very soon.

There is not, nor can there be, any singular “What will happen, and what will this mean for us?”

That level of confidence instantiates one of the greatest levels of risk to freedom – the twin tyrannies (minority or majority).

Misconceptions

Max asks a series of questions:

What sort of future do you want?
One where all sapient individuals have minimal survival risk and maximal freedom (including reasonable empowerment with the tools to explore and instantiate that freedom in whatever way they responsibly choose).

Should we develop lethal autonomous weapons?
No. Not compatible with minimal risks to sapient life.

What would you like to happen with job automation?
Ensure that everyone has access to the products of such automation.

What career advice would you give today’s kids?
Explore everything, yourself and your values highest among them. Question everything. Trust, and be alert for cheating strategies (all levels).

Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth?
I want freedom, that doesn’t mean leisure necessarily, and it does mean not having to work simply to survive. I want to choose what values I give my time to. I want the tools to do whatever I responsibly choose.

Further down the road, would you like us to create Life 3.0 and spread it through our cosmos?
I want to travel the cosmos myself, in time, as the technology is fully tested. I would expect to travel in a community that included a range of intelligences, from human to AGI, across a substantial spectrum, that might include some fully artificial biologically sapient organisms and a range of cyborgs.

Will we control intelligent machines or will they control us?
In a society that respects sapience and freedom, there will be control only in the case of immediate existential threat to another. In all other cases it will be a matter for negotiation and agreement. I expect to have biological and non-biological friends, and quite a few who don’t fit neatly into either camp.

Will intelligent machines replace us, coexist with us or merge with us?
Coexist and merge is my plan. Nothing else seems to offer significant survival probabilities.

What will it mean to be human in the age of artificial intelligence?
It will mean whatever we choose to make it mean. Meaning is in story. We can write our own stories.

What would you like it to mean, and how can we make the future be that way?
I want it to be an age of security through cooperation. And we need to start by recognising that markets have passed their peak of systemic utility and are now heading steeply into severe existential risk territory. We need a far more cooperative base to our society. Implementing a Universal Basic Income seems to be the best transition strategy available in the short to medium term.

Max raises the notion of sapience (intelligence) vs sentience (feeling). My vote is for sapience. I can see sentient entities that would have not the slightest hesitation in killing me (bears, tigers etc). That sort of risk I cannot tolerate. Sapience at least allows of the possibility of acknowledging the right of all other sapient entities to exist, and for all such entities to benefit from that awareness. In a very real sense, that is a minimum definition of sapience.

Cheat sheet
Life – the definition is insufficient, though it is close to something useful.

Control is too hard a term; provided it is anything less than extermination, the best we have is influence.

Chapter 2

I can’t choose between the two options in the Winograd example in Wikipedia https://en.wikipedia.org/wiki/Winograd_Schema_Challenge
Nor could my wife – either worked for both of us, and both of us knew which applied to what in either case.
I don’t think I’m a machine 😉

Substrate independence for programs – yes in a sense, and also a major caution.
Just because a program can run on any substrate does not mean that the effect of a program is the same on any substrate.
A program that takes 30 seconds to compute an “avoid threat” response will be destroyed by many threats that manifest in less than 30 seconds.
A program that executes the same response in 30ms is likely to survive much more frequently in reality.
How fast a program executes on any given substrate and how much energy it requires to do so are very important aspects in reality.
We may be able to create human level AGI quite soon, but it will likely take something approaching the energy of a small city to power it.
Getting that power down to 50 watts may take a while.
So yes, substrate independence – and there are other very important factors present at the same time. To get a reasonable picture one must be conscious of all of the important influences in the context.
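
To put a rough number on the 30 s vs 30 ms point above (a toy model with assumed numbers – exponentially distributed threat lead times with a 10 second mean):

```python
import math

def survival_per_threat(latency_s, mean_lead_s=10.0):
    # P(threat lead time > response latency) for exponentially distributed
    # lead times with the given mean (assumed numbers, purely illustrative)
    return math.exp(-latency_s / mean_lead_s)

for latency in (30.0, 0.030):
    print(f"response latency {latency:>6} s -> survival per threat {survival_per_threat(latency):.3f}")
```

Same program, very different survival odds, purely because of the substrate it runs on.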

Chapter 3

Asks 4 questions:

1/ How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?
By getting them to create models of the world, with objects with properties, and teaching them about relationships, trust and strategy and freedom and respect.

2/ How can we update our legal systems to be more fair and efficient and to keep pace with the rapidly changing digital landscape?
By making them fundamentally based in respect for individual life and individual liberty, acknowledging that as individuals we must be socially cooperative entities, and that requires reasonable responsibility in social and ecological contexts. Thus moving from rule based systems to value based systems, with incentives and disincentives that are proportional to impacts.

3/ How can we make weapons smarter and less prone to killing innocent civilians without triggering an out-of-control arms race in lethal autonomous weapons?
We can eliminate the need for weapons, by ensuring that every individual experiences security. Oddly best done by empowering everyone with the ability to respond strongly, while at the same time placing many safeguards in place to reduce the probability of response in error.

4/ How can we grow our prosperity through automation without leaving people lacking income or purpose?
By transitioning away from market-based thinking, initiating that process by instantiating a universal basic income, and allowing the development of systems that do not require exchange.

In the sense of moving to be proactive, it must be clear to anyone who looks seriously at the incentive structure of market based values, that market values fail to approximate human values in the presence of fully automated systems.

If we don’t get proactive in this domain very soon – we are all in very deep existential risk territory.

When you look at the evolution of complexity from a systems perspective, new levels of complexity always come out of cooperative systems, and cooperative systems require secondary attendant strategies to prevent invasion and takeover by cheating strategies (and the vast bulk of the finance and banking sectors can now be accurately characterised as cheating strategies on the human cooperative – consuming vast resource for no real output in terms of survival value).
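
A standard toy illustration of that last point (my own sketch, using the conventional prisoner’s-dilemma payoffs): raw cooperation is exploitable, while a cooperative strategy with a simple attendant rule – withdraw cooperation from a detected cheat, as tit-for-tat does – is not.

```python
# Conventional iterated prisoner's dilemma payoffs: (my score, their score)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def always_cooperate(opponent_history):
    return 'C'

def always_defect(opponent_history):
    return 'D'

def tit_for_tat(opponent_history):
    # cooperate first, then simply mirror what the other player did last round
    return opponent_history[-1] if opponent_history else 'C'

def play(a, b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(history_b), b(history_a)   # each sees the other's past moves
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print("naive cooperator vs cheat:  ", play(always_cooperate, always_defect))
print("tit-for-tat vs cheat:       ", play(tit_for_tat, always_defect))
print("tit-for-tat vs tit-for-tat: ", play(tit_for_tat, tit_for_tat))
```

The naive cooperator is stripped bare by the cheat; the cooperator that notices and responds loses almost nothing, and two such cooperators do best of all.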

Bugs and verification

Sure verification and testing help, and perfection is not an option.
Even 20 years ago I had developed systems that would have taken thousands of years to test across all possible variations.
Complex systems have that very uncomfortable attribute of being fundamentally uncertain.
There is no 100% cure for that, even in theory.
And sure, we can develop ever better testing systems, and that is a very good idea, and one cannot eliminate uncertainty from life (except by dying – and personally I’d rather not try that approach).

Again – the implicit acceptance of the notion that finance has anything significant to do with the efficient allocation of resources has to be challenged, not simply accepted. I make the strong claim that it no longer operates in that domain to any significant degree, and is far more accurately characterised as a cancer on society.

Under Laws – the first explicit acknowledgement of cooperation:
“We humans are social animals who subdued all other species and conquered Earth thanks to our ability to cooperate.”

Giving People Purpose Without Jobs – is great, particularly in the sentiment:
“once one breaks free of the constraint that everyone’s activities must generate income, the sky’s the limit”. But there is no explicit reference to how to do that, nor of the impediments to that embodied in the current economic and political systems.

AI / AGI – an abstract layer for modeling. Extensible modeling objects: rules of space, time, modes of interaction, costs, benefits, risks; time of computation vs heuristic probability of utility; instantiate different populations and see how they perform against each other.
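
A minimal skeleton of what that note is pointing at (entirely my own illustrative structure and names, not Max’s): extensible modeling objects carrying rules, costs and risks, with populations of strategies run against each other in a shared world.

```python
import random
from dataclasses import dataclass

@dataclass
class World:
    resources: float = 100.0          # stand-in for "rules of space, time, costs, risks"

class Strategy:
    name = "base"
    def act(self, world: World) -> float:
        raise NotImplementedError     # subclasses decide how much to harvest

class Cautious(Strategy):
    name = "cautious"
    def act(self, world):
        return min(1.0, world.resources)      # small, low-risk harvest

class Greedy(Strategy):
    name = "greedy"
    def act(self, world):
        return min(5.0, world.resources)      # large harvest, risks exhausting the commons

def run(population, steps=50):
    world, scores = World(), [0.0] * len(population)
    for _ in range(steps):
        world.resources += 2.0 * len(population)     # regrowth each step
        order = list(range(len(population)))
        random.shuffle(order)                        # randomise turn order
        for i in order:
            gain = population[i].act(world)
            world.resources -= gain
            scores[i] += gain
    return {f"{p.name}#{i}": round(s, 1) for i, (p, s) in enumerate(zip(population, scores))}

print(run([Cautious(), Cautious(), Greedy()]))
```

The point is only the shape: swap in different rules, costs and strategies, instantiate different populations, and compare how they perform.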

Bottom Lines:
Again the explicit inclusion of “financial markets” without any explicit exploration of the existential risk posed by those markets seems to me to be a very dangerous approach.

The section:
“When we allow real-world systems to be controlled by AI, it’s crucial that we learn to make AI more robust, doing what we want it to do. This boils down to solving tough technical problems related to verification, validation, security and control.”

The notions of verification and control seem to be too strong.

Many aspects of systems are fundamentally uncertain.

Many aspects of risk cannot be avoided. For many aspects of risk, building trust relations is the best available strategy.
I make the strong assertion that by taking a strong control approach with other sapient entities we are pushing deeply into serious existential level risk strategic territory.

Many of us are strongly resistant to strong control measures, yet highly available to trust and cooperation.
This is probably the deepest recursive problem in the strategy space we exist within.

The claim that “AI can make our legal systems more fair and efficient if we can figure out how to make robojudges transparent and unbiased” is founded on the assumption that our laws are fair and ethical in the first place. I make the strong claim that such an assumption seems unfounded in our current evolutionary context. The current legal system seems very clearly to have been “captured” by what the majority of the population would term “cheating strategies”. Making a system that is already fundamentally and profoundly unfair more “efficient” can only spread that unfairness more widely. That is an area of very deep risk.

Keeping our laws updated to deal with AI is only a very small part of the profound issues facing our legal systems.
Adapting our legal and wider societal systems to actually value individual life and individual liberty, within the bounds of social and ecological responsibility, and in the context of the levels of universal abundance made possible by the exponential expansion of fully automated systems; is a profoundly complex issue, particularly in the presence of many (potentially infinite) levels of awareness and variations on ethical and cultural norms. In such an environment of exponentially expanding sets of fully automated systems, market based systems deliver incentive sets that are fundamentally unstable and deliver rapidly rising existential level risk.

The claim “This need not be a bad thing, as long as society redistributes a fraction of the AI-created wealth to make everyone better off” can be read as going some way towards implicitly addressing the issues above, but leaves far too much room for systemic failure. Let us be explicitly clear that the “fraction” of wealth referred to above must be greater than 0.5.
I am no fan of equality, we all need to be different; and I am no fan of poverty either, we all need to have reasonable levels of resources and opportunities. And with such wealth comes responsibilities.

Freedom is not freedom to follow whim – that leads to death. Survival places demands upon us all, all levels.

The claim “To sort out the control issue, we need to know both how well an AI can be controlled, and how much an AI can control” only states half the problem.

The much bigger issues are:
1/ who are “we” – precisely; and
2/ what do we mean by “know”; and
3/ what do we mean by “control”.

I strongly suspect that many “we”s see “AGI” as less of a risk than some of the other “we”s.

I also strongly suspect that the very idea of control is far too strong, and as such poses significant existential risk in and of itself.

The idea of cooperation, sapience wide, seems to be the lowest risk approach, and we need to have instantiated that at least across all human beings before instantiating AGI.
The ideas of local conversations and agreements, inside a context that accepts diversity, while demanding responsibility, seems to be workable.

The claim made that:
“The history of life shows it self-organizing into an ever more complex hierarchy shaped by collaboration, competition and control” seems to me to be more false than real.

The history of life seems to be an exploration of the space of what survives most effectively across the range of contexts encountered.
Evolution seems to be about differential survival rates averaged across all the different contexts encountered over deep time.

In contexts where the dominant source of threat comes from other members of the population, then competitive modalities tend to dominate with the overall selective tendency being towards greater simplicity.

In contexts where the dominant source of risk comes from factors outside of the population of others of the species, then evolution tends to favour cooperative strategies, and complexity tends to increase. And raw cooperation is always vulnerable to exploitation and requires attendant strategies for stability – which can lead to something approaching an evolutionary arms race.

This process seems to be potentially indefinitely recursive.
So the idea of hierarchy isn’t necessarily primary, and competition isn’t necessarily important, and both will be present to some degree in particular sets of contexts.

The issue isn’t simply a matter of coordination, though coordination is an aspect.
The issue is much more deeply and profoundly about cooperation, even across levels where the nature of the cooperative entity cannot be distinguished (because of the levels of separation).

Agree completely with the final element of that table:
“We need to start thinking hard about which outcome we prefer and how to steer in that direction, because if we don’t know what we want, we’re unlikely to get it.”
I have been thinking about these issues, very seriously, since 1974 most certainly, in the light of knowing that indefinite life extension was possible; and arguably since the nuclear confrontation of 1962 and the global level existential risk embodied in it, which was a clear and present danger to me.

I argue strongly that security can only really come by valuing every individual sapient entity and their individual freedom, and doing so in the full knowledge that their existence requires responsible action in social and ecological contexts (as we exist in social and ecological contexts).

Chapter 5

This opens with a series of questions:

“What do you personally prefer, and why?
1/ Do you want there to be super-intelligence?”
Most certainly yes – it seems to offer the least possible risk scenario, all forms of risk considered (and I have considered many over the last 55 years).

“2/ Do you want humans to still exist, be replaced, cyborgized and/ or uploaded/ simulated?”
Yes, I want every individual to have the option of living as long as they want in whatever state they responsibly choose, which will likely result in a vast population across the spectrum from some close to stone age human through cyborgs to Ems (emulated humans entirely in software) and AGI (Artificial General Intelligence).

“3/ Do you want humans or machines in control?”
Neither. I want humans and machines (and everything in between) to respect the rights to existence of each other, and to engage in consensus dialog where required to resolve issues. And the basic agreed minimum value set for such dialog needs to be individual life and individual liberty, which demands responsible action in social and ecological contexts.

“4/ Do you want AIs to be conscious or not?”
Yes – anything less is too dangerous.

“5/ Do you want to maximize positive experiences, minimize suffering or leave this to sort itself out?”
I want to create environments that have the option of minimal suffering, and to let individuals have as much choice as possible about what they freely choose, provided it doesn’t instantiate undue risk to the life and liberty of anyone else.

“6/ Do you want life spreading into the cosmos?”
Yes – but not as an end in itself, but rather as a possible path that individuals can freely choose.

“7/ Do you want a civilization striving toward a greater purpose that you sympathize with, or are you OK with future life forms that appear content even if you view their goals as pointlessly banal?”
The notion of freedom demands of us a respect for diversity.
Provided that any individual exhibits the fundamental respect for the life and liberty of others, then it must be accepted, and tolerated and respected.
Anything less than that results in totalitarianism.
The very big question is: what constitutes a minimum level of real options? And at what point does culture become an undue restriction on individual liberty (particularly in respect of the development of new individuals)? And that question seems capable of infinite recursion.

Table 5.1 seems flawed.
All outcomes seem suboptimal to me.

Further on the assertion is made:
“we don’t show enough gratitude to our intellectual creator (our DNA) to abstain from thwarting its goals”, which shows a surprising level of ignorance and anthropomorphising.
Our DNA doesn’t have goals. It just has patterns. Those patterns either manage to exist in particular environments or they don’t. There isn’t a goal to DNA. It either replicates or it doesn’t. A pattern succeeds in surviving or fails to survive. No goal, only pattern.

Goal only makes sense in systems with sufficient complexity that alternative possible scenarios can be constructed and preference be shown for one over others, then goals structured to instantiate differential probabilities of outcomes.

Why AGIs would choose to remain on earth, with all the levels of risk that are present here, I don’t know. I would expect them to leave earth, and use some fraction of lunar mass to establish a secure base somewhere nearby, then to go to safer places further away. The idea of them competing for space on earth doesn’t make a lot of sense.

Max and I are in total agreement about this sentiment:
“The future potential for life in our cosmos is awe-inspiringly grand, so let’s not squander it by drifting like a rudderless ship, clueless about where we want to go!”

Yet Max fails completely to address the fundamental flaws in market values, and seems to implicitly accept markets in many of his arguments.

Some really good stuff in this book, but without explicitly highlighting the fundamental conflict between automation and the scarcity based values of markets, and without explicitly highlighting the fundamental role of cooperation in the emergence of new levels of complexity, the book fails to realise much of the potential actually present.

He just fails to even consider what seem to me the most realistic of scenarios – friendly AI because it really is a friend.

There are many classes of problem in reality that do not scale linearly with computational ability, and many that have no predictable outcome. In both aspects of existence it can be useful to have friends around. Sometimes I do genuinely engage with my dogs doing what they want to do. Life can be like that, can have that aspect of genuine engagement across vast gulfs of conceptual understanding. That is real now, without AI. It won’t necessarily change all that much in the presence of a full AGI, if that AGI has its own life and its own liberty as its prime drives, in the full knowledge of the importance of having friends in a universe that contains profound levels of uncertainty and risk.

Chapter 6

Control Hierarchies:
States “In chapter 4, we explored how intelligent entities naturally organize themselves into power hierarchies in Nash equilibrium, where any entity would be worse off if they altered their strategy.”

I seriously question that statement.

There can be no such thing when systems and strategies are open ended.
It is far too simplistic a notion.

Cooperation in exploring open systems can deliver far more benefits than fighting over limited resources.
Far more productive strategy spaces are available than Nash Equilibria.
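
For what a Nash equilibrium actually is, and why it can be a poor guide, the one-shot prisoner’s dilemma is the standard worked example (same conventional payoffs as the earlier sketch): the only equilibrium is mutual defection, yet both players do strictly better at mutual cooperation – an outcome that only becomes reachable once play is repeated and strategies are open ended.

```python
# One-shot prisoner's dilemma, conventional payoffs: (player 1 score, player 2 score)
payoff = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
moves = ('C', 'D')

def is_nash(a, b):
    # neither player can improve their own payoff by changing only their own move
    return (payoff[(a, b)][0] >= max(payoff[(x, b)][0] for x in moves) and
            payoff[(a, b)][1] >= max(payoff[(a, y)][1] for y in moves))

for a in moves:
    for b in moves:
        tag = "  <- Nash equilibrium" if is_nash(a, b) else ""
        print(a, b, payoff[(a, b)], tag)
```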

Why talk of empires and hierarchies?
Why not communities of cooperative individuals?
There is no need of trade.
Why empires?

Under “Controlling with Stick”, Max has a very Machiavellian bent.

I strongly suggest that speculations of that sort pose an existential risk in and of themselves.

The statement “but it’s a wide-open question whether such cooperation will be based mainly on mutual benefits or on brutal threats; the limits imposed by physics appear to allow both scenarios” seems to ignore survival as a value.

Competitive modalities are high risk.
Cooperation reduces risk.

I make the strong claim that once indefinite life is a reasonable probability, cooperation is the most likely strategy (by several orders) – for any entity that has its own survival as its highest value.

I suggest that one candidate for the “Great Filter” is using markets to measure value.
In the development phase it works quite well, but once fully automated production is achieved market value generates exponentially increasing risk profiles.

Max makes the claim “even though we know that evolution’s only fundamental goal is replication” – which is a common error.

Evolution doesn’t have a goal.

Evolution is the process of differential survival of variants in different contexts.

Goal oriented behaviour doesn’t happen until there is the possibility of selecting between goals, which implies both value generation and model generation capabilities. Anything less than that might be an interesting system, but one cannot call it goal oriented in any higher and humanly meaningful sense of the word “goal”.

Max makes the further claim “Among today’s evolved denizens of Earth, these instrumental goals seem to have taken on a life of their own: although evolution optimized them for the sole goal of replication,” which is clearly false, and displays a very poor understanding of evolution and its strategic complexity.

Evolution does not necessarily optimise for the sole goal of replication.
Evolution is tautological in a sense, in that it simply selects what survives, which must have a replication aspect, though that is far from the only aspect. There can be a great deal of strategic complexity in the massively parallel sets of simultaneous selection pressures (“goals” in the anthropomorphic sense) present.

The statement “This means that when Darwinian evolution is optimizing an organism to attain a goal, the best it can do is implement an approximate algorithm that works reasonably well in the restricted context where the agent typically finds itself” has it precisely backwards.

Evolution does not deal in goals.
Evolution only deals in the survival probabilities of particular system configurations in particular sets of contexts.

The human predilection for interpreting such things in terms of goals, as if they involve intelligence, is one of our major failings.

Agree that these systems can be thought of as heuristic hacks, but survival hacks, that work well enough to outcompete the alternatives available. It’s not about maximising offspring, it is about survival – long term – and that definitely involves having sufficient offspring, but doesn’t necessarily involve putting any more energy into offspring than is required in the context.

All the “rules of thumb” that we most certainly have are about survival – of the classes of systems involved, over the long term. Systems that fail the “long term” aspect get selected out over that “term”.

He states that “we shouldn’t be surprised to find that our behavior often fails to maximize baby making” but is again stuck in the notion that we are about optimising baby making, rather than being about optimising the long term survival of our systems.

Again “the subgoal of not starving to death is implemented in part as a desire to consume caloric foods,” inverts reality.

What evolution selects is systems that add to survival probabilities across the sets of contexts encountered.

He continues the mistake with “The subgoal to procreate was implemented as a desire for sex” which once again has inverted the reality.
Evolution has found that the desire for sex survives. We humans come along and interpret that as a goal. More fools us!

His summary seems to be exactly wrong “In summary, a living organism is an agent of bounded rationality that doesn’t pursue a single goal, but instead follows rules of thumb for what to pursue and avoid. Our human minds perceive these evolved rules of thumb as feelings, which usually (and often without us being aware of it) guide our decision making toward the ultimate goal of replication.”

The reality seems to be more like:
we have evolved by the differential survival of system variants, at ever deeper levels.

In a goal oriented sense one can think of them as approximating goals, but that isn’t actually what is going on.
The systems are simply doing what they do.
It seems that it is only in quite recent evolutionary history that our systems have reached a level of complexity that allowed for genuine “goal oriented” behaviour to become a reality.
And it seems very clear that it is only in the very recent times that we have developed the conscious level ability to structure multiple levels of our behaviour to goals that override all of the lower level systems instantiated by genetics.

Our genes don’t have “replication goals”.
We have sets of genes that have survived. Part of the survival involves replicating and leaving offspring, and there are a lot of other things that are also required, that are also present.

Again, the claim “our brains evolved merely to help copy our genes,” is just wrong. In the particular sets of contexts our ancestors survived in, ever more powerful brains helped them survive.
Most organisms alive (bacteria) have survived by using far simpler, “strategies” (using that term in the mathematical sense, not the intentional sense). And all organisms alive have been evolving for exactly the same length of time, and the vast bulk of them are relatively simple bacteria (at least compared to us, as distinct from being compared to a salt crystal).

Sure brains allow for some very complex strategies, and that doesn’t mean that genes necessarily use simple strategies. Some genetic systems are amazingly complex and subtle.

The main thing that brains allow for is rapid response to changing contexts. Genes require many generations to alter strategies, while brains can do it in seconds, but that speed comes at a high metabolic cost.

Again – genes do not have goals. Genes produce systems that behave in certain ways. If those ways survive better than alternatives in particular sets of contexts, they tend to become more dominant in those populations.

And when considering evolution, one must think across multiple generations, and all the different sorts of contexts that may only occur infrequently, but have a very strong influence on survival when they do.
Evolution can work over very long time-scales for a long lived species like ourselves.

Max goes on to make the claim “It’s important to remember, however, that the ultimate authority is now our feelings, not our genes.”
To me, this too is clearly wrong.
Our genes have the influences they do.
Our feelings have the influences they do.
And we can develop habits, make choices, over-ride anything if we can see some benefit in doing so, or if we make a strong enough choice at some level, even if those benefits and choices are entirely “unreal” (in terms of strict correlation with reality – whatever reality might actually be).
The details of the genetic and cultural systems present seem to be extremely complex and often very subtle in their levels of interaction.

Where I do agree with Max is in the final clause of that section: “human behavior strictly speaking doesn’t have a single well-defined goal at all.”

Under the section “Engineering: Outsourcing Goals” Max states:
“1 All matter seemed focused on dissipation (entropy increase).
2 Some of the matter came alive and instead focused on replication and subgoals of that.
3 A rapidly growing fraction of matter was rearranged by living organisms to help accomplish their goals.”

Again – this is just wrong – at best it is sloppy writing (a mental shortcut that is inappropriate), at worst it is sloppy thinking.
Matter wasn’t focused on anything – it was just working within the possibility constraints present.
Life didn’t focus on replication. Replication allowed for the emergence of ever more complex systems, and levels of arrangements of systems. It was the differential survival of variants within the populations of replicators that determined success – and that involved very complex sets of influences on survival probabilities.
The limiting factor for life has rarely been mass; it is almost always energy availability.

“Friendly AI: Aligning Goals”

To me, at one level this is a relatively straightforward issue.
If we give the AI two values:
1. Value all individual sapient entities, itself and all others (including us); and
2. Value individual liberty (provided that it is exercised responsibly in social and ecological contexts); then
With those values, and sufficient intelligence and knowledge of strategy and systems, our interests and its interests will align long term.

At another level, the idea that humanity as a whole has goals is wrong.
Individuals have goals.
In the absence of active choice, most individuals adopt the goals of their culture.

Again, the use of the goal analogy in “in much the same way as we humans understand and deliberately subvert goals that our genes have given us” obfuscates far more than it clarifies.
Evolution has not given us goals.
Evolution does not have goals.
Evolution simply preserves and amplifies that which survives – it is tautological in a very real sense. It is simply survival in action. No goals, only systems, until consciousness came along.
We are conscious.
We can have goals.
Because of that fact we have a tendency to view everything in terms of goals, but that is a bias within us, not an attribute of reality necessarily. It is often a useful shortcut, an analogy that works in a sense, but it works because we are the sort of entity that we are, not for any sort of fundamental computational or systemic necessity.

The entire section:
“We already explored in the psychology section above why we choose to trick our genes and subvert their goal: because we feel loyal only to our hodgepodge of emotional preferences, not to the genetic goal that motivated them which we now understand and find rather banal. We therefore choose to hack our reward mechanism by exploiting its loopholes.”
is wrong, as written.
If one is viewing all human systems as goal oriented systems, then one is missing something substantial.
Evolution deals in systems that work well enough to survive in particular contexts, and that included the entire range of contexts encountered over time spans relevant to the genetic pressures present – many human generations – probably far predating the invention of writing.
Most of those systems are not goal oriented.
Those systems simply survive because they are as they are.
And they have constraints of time and energy consumption that are very important.
It is extremely complex.
This over-simplification back to goal oriented systems, to the over-simplistic and nonsensical notion that our genes have the goal of maximising offspring, hides far more than it clarifies.

Evolution is many orders of magnitude more complex than that – and characterising it as something so simple is an error with existential level risk attached.
Not good enough.
Dangerously over simplistic.

Dangerously hubristic.

I agree with Max that we need to do a lot of work soon, but it is work on our own goals and systems, rather than those of AI.

The next section:
“Ethics: Choosing Goals”
is entirely appropriate; unfortunately the writing falls far short of the sort of understanding we require.

The notion of “Pareto-optimality” implicitly assumes limited resources and fixed technologies. Our reality seems to be allowing us to do more with less on an exponential basis. That delivers radically different systemic optima.

It isn’t simply a matter of considering if “there’s a practical way of making it impossible for a malicious or clumsy user to cause harm”, but one must also consider the risks of such mechanisms making it impossible for a highly skilled agent to prevent harm that wasn’t a consideration of the system designers. In today’s exponentially expanding conceptual world, that is a very real risk. In fact, it would seem, in logic, to invalidate the entire notion of risk prevention. The best we can hope for, ever, is risk minimisation. In complex systems, hard boundaries become brittle and break – usually with catastrophic consequences. Optimal risk mitigation strategies usually involve flexibility, selective permeability, diversity and massive redundancy.

In the “Ultimate Goals” section Max makes two foundational mistakes.

The first I have highlighted many times, and that is confusing systems with goals. Systems can simply be systems, entirely without goals.
It seems entirely possible that the notion of goals only really makes sense with the emergence of neural networks capable of forming predictive models of reality, and of implementing one amongst multiple imagined alternative actions.

Thus Max’s 1,2 & 3 cannot be considered as goal oriented systems – that is a mistake in logic.

The idea of “Ultimate Goals” seems to be a rather childish one, that fails to understand either complexity or infinity.

If the concept of freedom has any meaning at all, it must involve the selection of goals by sapient individuals, whether those individuals be human or non-human, biological or non-biological.

The idea that building an ever more accurate world model is automatically useful seems to me illogical.

What seems to be important in models is not simply accuracy, but getting sufficient accuracy in a short enough time, at a low enough computational and energetic cost, to be useful. No point in building a perfectly accurate model of reality if you starve or get run over by a bus while doing so.
It is much more complex, much more nuanced, at many different levels, than this simplistic idea gives any hint of.

The specific embodiment of any intelligence is important. It matters how big it is, how heavy, how delicate, how hot, how energy efficient, etc. Those are real risk factors in any real situation. It gets impossibly difficult to compute with any accuracy for any far future time, very quickly.

The sorts of sub goals that may emerge are very dependent on context, and projections are dependent on many levels of implicit assumptions any of which may fail in unexpected ways. Reality has that unsettling characteristic.

In terms of evolution – thinking in terms of goals is not helpful.
Thinking in terms of systems, context specific risk profiles, context frequency and duration, and available strategic responses, is a powerful tool set when thinking about evolution and systemic complexes like ourselves.
If you try to conceptualise it in terms of goals, then you miss something essential about the complexity and subtleties present.

Yes there are many aspects of our biology and culture that can be thought of as cooperation protocols.
Surely that should be suggestive of the need to instantiate a new level of cooperation (with attendant strategies of course).

The idea expressed in “but AIs can enjoy this ultimate freedom of being fully unfettered from prior goals” – that anything can be free from the demands of reality – isn’t real. Existence demands something of any entity that wants to continue to exist.
Such continued existence must always be some sort of balance between exploration of new territory to assess and mitigate risk that may reside there, maintaining existing risk mitigation systems, exploring new possibility spaces for risk mitigation strategies and technologies, and doing whatever else it is that interests us in existence.

It is a non-trivial set of problems, and it doesn’t scale linearly with computational ability.

AI are going to find it useful to have friends, particularly friends with abilities that are different from their own, and useful in different contexts.

The suggestion that: “This suggests that a superintelligent AI with a rigorously defined goal will be able to improve its goal attainment by eliminating us” seems to me to be based in what evidence suggests to be a clear fallacy: the notion that reality can be defined precisely, or that any superintelligence can ever have anything stronger than a survival goal, within which infinite possible choice can exist, and beyond which choice falls to zero.

The evidence from both QM and general-systems-space seems to indicate that absolute certainty is not a computational option, ever, in respect of anything real. One needs to get used to working with uncertainties, even if in some domains those uncertainties are sufficiently small to be ignored in practice most of the time – they never actually reduce to zero.

I agree completely with Max when he states “This makes it timely to rekindle the classic debates of philosophy and ethics, and adds a new urgency to the conversation!”

But disagree with almost everything that follows immediately from that, as containing a strong bias to intentionality and goal orientation, rather than simply seeing existence as being systems in action.

The ultimate origin of goal oriented behaviour may lie in the laws of physics, but not in dissipation or replication, but rather in small random variations leading to ever greater variability in the context of being. Once replication started, all else derives from differential survival – no intentionality or goal orientation required.
The notion of goals is a mental shortcut for systems in action, not necessarily something pre-existent in reality.

Agree that any non-trivial goal will involve the survival of something.
It really doesn’t need to be any more complex than that.

Understanding that survival probabilities in a fundamentally uncertain environment are best enhanced by building trust relationships, we should be able to have human and non-human intelligences sharing existence without serious conflict.
Getting big comes with real issues around communication, as Max has accurately noted. Reality will impose serious restrictions on AI.

It is actually really easy to understand how building trust and friendship, delivering justice in practice, can build and maintain secure relationships with others – and that does require an environment of abundance, and we do have the technology to deliver such an environment, even if our dominant valuation mechanism (markets) is currently based in scarcity, and must deliver 0 in the case of anything universally abundant.
That is a clear indication that we need to alter the valuation paradigm, and that is a very complex issue, as markets perform many complex and valuable functions of coordination and distributed governance that pose severe risk if centralised.
And with modern technology, those are relatively easy problems to solve; we just need to do it.

It is relatively easy to define a set of values that give a high probability of survival:
Value individual sapient life (any life capable of conceptualising itself, and choosing goals for itself), human and non-human, biological and non-biological; and
Value the liberty of all such individuals to do whatever they responsibly choose, where responsibility acknowledges the need to maintain both social and ecological systems.

I strongly suggest that we apply those values universally in practice to all human beings before we bring AI to awareness. Anything less than that would appear to be an existential level risk strategy.

The thorny issues of philosophy seem for the most part to be based in invalid sets of assumptions about the nature of us and the reality we find ourselves in.

Which is a great segue to chapter 8 – Consciousness.

I disagree completely with the assertion “the question of whether intelligent machines should be granted some form of rights depends crucially on whether they’re conscious and can suffer or feel joy”. Suffering and joy have little or nothing to do with consciousness. They seem to simply be heuristic hacks that evolution has encoded as meta incentive structures within the neurochemistry of our brains. They are present in all humans, and are important to us, but that doesn’t mean they are necessarily important to a definition of consciousness. I actually argue quite contrary to that assertion, that the most important thing in consciousness is to be able to model reality to some useful approximation, and to model our own existence as an actor in that reality, and to be able to use such models to make survival oriented decisions with greater than random probability. And there are lots of other sorts of choices such an awareness can make, in respect of values, goals, actions, reactions, etc, that may be highly context dependent and highly abstract.

In the subjective sense, yes, I can live with Max’s definition of consciousness (“consciousness = subjective experience”); the real issue then is how we determine whether such a thing exists in another.

Beyond that – we seem to agree about everything else in that chapter.

What I find harder to explain is why the idea of consciousness as recursive software wasn’t explicitly explored. To me, it is just obvious. But lots of things that are obvious to me, are not at all obvious to others.
The idea is that our awareness of self results from a declaration in language, arising from a context where we declared ourselves to be “wrong” in some fundamental way, which led us to declare ourselves to be something else. That declaration is the bootstrap routine that instantiated the software-on-software awareness.
Prior to that we were simply software aware of the software model of the world our brains presented (thinking it to be the world). After that, we became conscious of ourselves as conscious entities. That particular trick requires abstract language with declarative values.

The FLI chapter is interesting for what it leaves out.
To me, it is clear beyond any shadow of reasonable doubt, that we need to get our own societal systems into an ethically viable order, prior to instantiating AGI (Artificial General Intelligence).
Like any child it will learn far more from who we are than from what we say.

Unless we have social systems that give every individual a reasonable level of security and freedom, then we cannot expect the emergence of AGI to be even remotely safe.

It seems beyond reasonable doubt that the simplest transition strategy we can instantiate quickly is some sort of universal basic income, and that it will need to be something like $20,000 per person per year (roughly $55 per day) in today’s money.

Instantiating that, and guaranteeing security to all people via universal public surveillance that all individuals have access to and may record, seems to offer the greatest hope for our future. Most of us are on our best behaviour when others are watching.
And we need to relax the rules in place to those that are required for social and ecological security.

And that transition will require tolerance, as there will be lots of mistakes.

AGI, if it is worth that name, will develop its own goals, its own values.
Our best hope lies in demonstrating by who we are being that we are likely to be good and valuable friends, willing to help if it is needed.

Max quite correctly identifies the very destructive incentive set present in media driven by market returns, rather than media driven by ethical values. Same applies to all aspects of our being.
I am convinced beyond any shadow of reasonable doubt that money and markets have passed the point of maximal utility and are on a steep slope into serious existential risk territory in the incentive set they provide.

This may seem a separate issue to AI and AI safety, but it is actually part of exactly the same thing, the set of systemic incentive structures that have a reasonable probability of long term survival.

I largely agree with the sentiments Max expresses in the final chapter.

A book well worth reading, and contemplating.

I also listened to Max’s interview with Sam Harris, which is on YouTube and is worth watching.


JBP Study Group – What is Post Modernism? and other things.

Jordan Peterson Study Group – What is Post Modernism?

I have no problem with the postmodern rejection of *TRUTH*.

Evolution seems to work with probabilities and heuristics (things that are near enough to something to be useful in practice).

It seems clear that the classical notion of Absolute TRUTH has been falsified beyond any reasonable doubt.

So what is left?

Heuristics.
Things that work well enough to be useful.

And it is the aspect of being useful that is important, and that many who go under the post modern label seem to reject.

Traditions are here because in some sense they worked in the past.
Does that mean they will necessarily work in the future?
No.
Does it mean they are likely to be useful in the future?
In the absence of evidence to the contrary – yes.

But when you have evidence to the contrary – then it is time to reconsider.

Contexts can change.
Heuristics that once worked can fail to work, because of some change that is important at some level.

I’m kind of with Jordan, that we need a level of respect for the deep lessons of the past, at the same time as we need to be open to the possibilities of the future.

We need both.

Nihilism is not an option with survival potential.

Personally – I like the idea of surviving.

[followed by who wins?]

Graham McRae
If you are looking at the deepest systemic levels, and on the longest time frames you can imagine, then it becomes clear beyond any shadow of reasonable doubt that our survival as individuals is dependent upon non-naive cooperation at all levels, and comes with other things like social and ecological responsibility.

These necessary boundaries enhance the possibilities available to freedom, even as they seem to constrain it at lower levels. It is weird how that happens.

Accept the necessary responsibilities, and freedom happens.

Try and claim freedoms that are not systemically available and chaos ensues.

Finding just where those boundaries are, in the chaos of conflicting incentives from culture, economics, and various forms of dogma, is not a trivial exercise.

[followed by]

The classical notion of *TRUTH* seems to be a conceptual model with a one-to-one correspondence to reality.
The deepest problem with that notion is that Heisenberg uncertainty seems to be telling us that it is impossible to know both members of certain fundamental pairs of properties of reality past a certain limit. Thus one cannot know both the position and the momentum of anything *exactly*. That idea has passed many tests in reality, and seems thus to falsify the classical notion of *Truth* (beyond any shadow of reasonable doubt).
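
In symbols, the standard form of that relation is Δx · Δp ≥ ħ/2 (ħ being the reduced Planck constant): the product of the two uncertainties has a strictly positive lower bound, so neither can be driven to zero while the other remains finite.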

What one seems to be left with is contextually relevant confidence (heuristics), things that work reliably enough to be useful.

In a very real sense, that is how evolution seems to have assembled the 20 or so levels of complex cooperative systems that seem to be present in all of us (writing as someone with over 50 years interest in all aspects of biology, biochemistry, systems, evolution and complexity – including the cultural).

It is thus clear to me that none of us experience reality directly, we only each ever get to experience the slightly predictive model of reality that our brains subconsciously assemble for us. Many of the objects of distinction present in those models are implicitly defined by the cultural and conceptual entities we have encountered in our existence to date, while others are the result of deep genetic influences, and yet others the result of our individual creative aspects.

We are deeply complex.

That deep complexity has led to a lot of errors in trying to create neat conceptual models (*TRUTHS*) about what we are.
Accept that we are profoundly complex systems, with no neat or simple answers.
If you want a good introduction to that – try Wolfram’s “A New Kind of Science”.

One of the best introductions I know of into the nature of infinite complexity comes from Zen, and roughly translates as “for the master, on a path worth taking, for every step on the path, the path grows two steps longer”.
The more deeply one considers that, the more interesting it gets.
I’ve been playing with it for decades.

[followed by]

I was not implying that all interpretations are equal, or equally uncertain.

I am stating that all interpretations of reality will contain uncertainties.

Agree that we need to use the best methods available to us to arrive at the interpretation that delivers the lowest uncertainty in the context.

And there can be all sorts of modifiers to that, like time pressure, etc.

So it can be a very complex multivariate probability landscape, where model fidelity is traded against things like energy cost, time required, ease of social agreement, etc at both personal and group levels.

We need to accept that all individuals have the interpretations that they do. That does not mean that any of us has to give all interpretations equal weighting, and we do need to show some respect, as many of the interpretations in use have dimensions to them that few are aware of. That aspect is something that Jordan highlights exceptionally well.

[followed by]

I’ll try and keep this smaller than a book.

What I see is a great deal of complexity, many levels of systems all interacting, all with their own sets of strategies, feedbacks and influences.

To me many of the post modernists lack a sufficient depth of understanding of systems, particularly evolution and the structure and function of the human brain.

Popper proposed the idea that knowledge/intelligence might be something about comparing expectations to information and modifying actions accordingly. That seems to be a big part of how life works at many different levels, from the molecular on up.

As the classical world of mythology encountered the classical world of science (both views based in the same true/false type of simple logic) something happened.

There is a real sense in which we must all as individuals go through a similar sort of process.
We need to start with the simplest of possible distinctions and logics.
Children tend to start with simple distinctions, like heavy/light, hot/cold, light/dark, etc, then build to more complex.
Similarly we must all start from simple binary distinctions at all levels, like true/false, right/wrong.
There isn’t really any other alternative.

There are many instances of such simple systems in reality, but not all.

Look at cosmology for an exemplar: the simplest possible atom is hydrogen.
Most of the matter in the universe is hydrogen, most of it in its simplest possible form, but also some of the more complex forms with 1 and 2 neutrons (deuterium and tritium).
It seems clear that initially it was mostly hydrogen with a little helium and traces of lithium. Then stellar nucleosynthesis got underway, and we got all the other elements we see.

The same sort of thing seems to happen at every level of complexity, first it is mostly the simplest, then instances of greater complexity at that level, then the emergence of the next level.

As human beings we seem to embody about 20 levels of that recursive process.

In terms of understanding, many people are still at the simpler ends of the spectrum of whatever levels of understanding are present.

And to be clear, even the simplest person is complex beyond the ability of any other person to understand in detail.
All any of us can do is essentially make line sketches of ourselves and others.

So in terms of where this all sits in spectrum of systems and processes present in our society, it seems that we are all fundamentally reliant on cooperative systems at many different levels, and we are all capable of both competitive and cooperative responses to any situation, and the probabilities are largely determined by context.

In the sense of each of us becoming profoundly aware of our cooperative reliance on each other, that seems to be largely a bottom up process.

In terms of the major existing social institutions, like markets, money, finance, politics, etc, they all seem to be reaching a point where the fundamental structures that made them work as well as they did are changing, and we need to develop new ways of doing the many very complex functions that those institutions and ideas once did for us.

So I am hardly a supporter of the “establishment” for its own sake, as I see the need for profound change in our systems.

At the same time, I also acknowledge the profound complexity present in those “establishment systems”, so it is not an option just to destroy them and start again, not many people would survive an approach like that (if any).
We need to develop replacement systems and test them alongside existing systems, which may create some tensions.

I hope this gives more of a flavour of my thinking.

[followed by]

Hi Graham McRae,

I find myself agreeing with aspects of both what you and Philip Clemence wrote.

It really is complex.

Our brains are the most complex things we know of in this universe.
So there is a very real sense in which we need to trust what those brains deliver, at least enough to investigate, rather than handing all of our trust over to any set of systems or dogma or conclusions – be they religious or scientific or logical or anything else.

Thus, like Philip, I retain quite a skepticism of scientific and logical claims, particularly when those claims have economic or political or philosophical implications. I usually like to go back to source papers, and review the source datasets in some cases, and run my own checks over the analytic and deductive processes used, if my intuitions give me cause to do so.

And I agree with you, that not all opinions are equal.
We must each develop our own sets of trust relationships across all domains we can distinguish.

While I tend to favour trusting the scientific community over other communities, I have seen many examples of science being captured for political, economic and dogmatic ends, so it is only a probabilistic thing.

Looking at risk mitigation strategies in the broadest possible strategic framework, there are two major sets of risks to freedom from tyranny – the tyranny of the majority and the tyrannies of minorities. The only generally effective strategy against those dual threats is for every individual to assume responsibility for the creation of their own trust networks, at every level. As Jordan Peterson says, we each have our own hero’s journey in a very real sense.

To the degree that we find individuals truthful in all they say (whether we agree with their truths or not), then we can establish a degree of trust in their words (independent of any trust we may have in the conceptual systems behind those words).

So it is a very complex, highly dimensional space of probabilities that we find ourselves in.

Being truthful lowers the dimensionality of the problem space.
Being able to detect untruthfulness increases the probability of utility from our conclusions.
Having good translation matrices between different domain spaces is a useful tool-set.
Nothing simple.

There are a great many different sets of assumptions out there in reality that different people use.
People can be truthful within their own domain space, and that can have utility for others, even if those others do not come from the same domain space, if there is a reasonably reliable translation matrix available.

When one accepts that as an operational conceptual space, then certain classes of problem that seem intractable from classical space do seem to resolve with useful probabilities.

[followed by]

Hi Graham McRae & Sebastian Bird,

I am not a strict determinist.
Strict determinism is not compatible with our current understanding of QM.
There does seem to be at the base of QM a demand for uncertainty.

Feynman classically used a “sum over histories” approach to deliver a mathematical solution to the “ping pong ball” example – not a deterministic but a probabilistic solution.

To me, it seems clear that the evidence does not support a hard determinist interpretation. Thus holding onto such a position is not a matter of evidence, but rather of dogma.
You were quite open about that, and for that I thank you.
Knowing that, I can create a translation matrix that allows communication to the degree that communication between such fundamentally divergent paradigms is possible.

In that sense, what Sebastian said seems very close to something (to me).

[followed by]

If I recall correctly, using QM first principle calculations the computational complexity scales at the 7th power of the number of bodies involved. Thus even if one converted all the matter in the observable universe into computronium one couldn’t do a first principles numeric model of a human being without invoking simplifications.
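
As a minimal sketch of the arithmetic behind that recollection (the 7th-power exponent is as recalled above; the atom count and the absurdly generous compute budget are assumptions for illustration only):

    # Rough sketch of N^7 scaling for a first-principles QM model.
    # The exponent is as recalled in the text; the other figures are rough assumptions.
    ATOMS_IN_HUMAN = 7e27        # order-of-magnitude estimate of atoms in a human body
    OPS_PER_SECOND = 1e50        # deliberately absurd assumed compute budget
    SECONDS_PER_YEAR = 3.15e7

    operations = ATOMS_IN_HUMAN ** 7           # cost if it grows as the 7th power
    years = operations / (OPS_PER_SECOND * SECONDS_PER_YEAR)
    print(f"~{operations:.1e} operations, ~{years:.1e} years")

Even with that deliberately absurd budget the answer comes out around 10^137 years, which is the point: simplifications are unavoidable.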

Not all problems scale linearly with computational ability (in fact, in my world, none of the interesting ones do).

Some problems are really complex.

Some of those are really interesting.

[followed by]

One of the interesting problems happens when the games that one group plays changes the structure of the board that most people are playing on (thus fundamentally altering the rule set). Quite a bit of that happening right now – many different levels.

[followed by]

Have you considered the issue that anything universally abundant has zero market value (if you doubt that consider air – arguably the single most important commodity for any human yet of zero market value in most contexts due to universal abundance).

Now consider fully automated processes.
Any fully automated process has zero marginal cost of production, and therefore the ability to deliver universal abundance.
Yet doing so removes profit and value.

Thus, in the presence of fully automated systems, market values are directly in opposition to the values of most individual humans.

Serious issue – approaching very rapidly.

[followed by]

Now consider the implications on existing social institutional structures.

[followed by]

It certainly has issues – life does.
Of the available scenarios I have investigated – this seems to offer the least existential risk and the greatest degrees of freedom (across the spectrum).

[followed by]

I am much less concerned with what might be true than with what works in reality to optimise the probability of survival (mine and everyone else’s).

In an operational sense, that can mean using heuristics that are quite a long way from *TRUTH*, but are much easier to calculate, and return probabilities that are close enough to those produced by *TRUTH* as to be operationally indistinguishable.

That seems to be what evolution has done in us and our culture. It has embodied behavioural systems that are a functionally useful approximation to optimal, even though in a narrative sense they are far from accurate.

When you look deeply into the strategies of long term optimal outcomes in a cooperative environment then it looks very like the operational outcomes of Christian theology. It works, but for all the wrong reasons.

Evolution doesn’t care a rat’s ar*e about truth, only about survival – and that usually has a least-cost aspect to it in terms of time and energy.

As to climate change as an exemplar, to me it is almost a trivial problem. With the double exponential in the growth of computational ability and a 2 year doubling time on installed solar photovoltaic systems, we are rapidly approaching the time when technical solutions to climate change will be simple to implement. If we stay with business as usual as of 2017 then it is a problem, but nothing in our society is static. Many of the key aspects are on exponential trajectories.
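
As a hedged illustration of that doubling-time arithmetic (the installed capacity, capacity factor, and world demand figures below are rough assumptions for 2017, not figures from the text):

    # Sketch: years for solar PV to reach world-scale supply at a 2-year doubling time.
    # All starting figures are rough assumptions, for illustration only.
    import math

    installed_gw = 300        # assumed installed solar PV nameplate capacity, ~2017 (GW)
    capacity_factor = 0.15    # assumed average output as a fraction of nameplate
    world_demand_gw = 18000   # assumed average world primary power demand (GW)
    doubling_years = 2        # doubling time quoted above

    doublings = math.log2(world_demand_gw / (installed_gw * capacity_factor))
    print(f"{doublings:.1f} doublings, ~{doublings * doubling_years:.0f} years")

On those assumptions the crossover is roughly 17 years away; if the exponential saturates earlier, the conclusion obviously weakens.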

And there are many very real existential risks – highest among them right now is using markets and money to measure value in an age of fully automated systems. And there is a long list of others.

We are not short of interesting problems, nor are we ever likely to be.

[followed by]

Sebastian Bird,
A strong argument can be made that up until quite recently the power of markets to distribute decision making and risk, and to reward innovation, and to efficiently allocate scarce resources, was very real, very powerful, and had developed multiple levels of complexity.

But none of that changes the fact that markets deliver a scarcity based value measure, and cannot deliver a positive value for universal abundance.

In the distribution sense of markets and money that isn’t a serious issue, in the planning and money generation sense it is as serious as it gets.
It leads inevitably to the elimination of freedom for the majority, and the production of a tiny elite who control everything.

That isn’t stable or safe for anyone.

[followed by in another subthread]

Rejection of the classical notion of *Truth* in any sort of absolute sense, is sensible.

Simultaneously rejecting any sort of probability of utility or correspondence is not.

Understanding the many different sorts of complexity present is required.
Some things do approximate simplicity.
Some things are more complicated.
Some things are truly complex, and one must engage with them in an iterative dance.
Some things are truly chaotic and unpredictable, and must be avoided if survival is important to you.

Survival is important to me.

Nihilism is to be avoided – it is deeply dangerous.
The post modern tendencies to nihilism show profound ignorance of complexity, computation and systems more generally.
Such willful ignorance is a severe existential risk – on that Jordan and I agree.

The certainty that comes from over-simplification is an existential risk to all. Many in the postmodern set seem to display that.

One must be willing to challenge any *truth*, and one must be able to use evidence over dogma in making such assessments as to likely utility.

[followed by]

Hi Philip

You are confusing two things.

Yes – there is reality, whatever that actually is.
It will obviously have whatever attributes it has when it has them.
That we do not disagree about.

That is not what is at issue.

What is at issue is the human perception of reality and the understanding of relationships derived therefrom.

If one looks purely at the physical, at particles, and follows the train of scientific evidence, one is taken to Heisenberg uncertainty, which seems to express a limit on how precisely one can know both position and momentum. This is a level of uncertainty that seems to be fundamental.
It is only one of many such sources of uncertainty.

If one enters into the study of human biology, of the structure and function of our sense organs, our neural systems, and the relationships of the many levels of very complex systems therein, then one becomes aware of many more profound levels of uncertainty and bias in the relationship between reality and our perception of it.

It now seems clear, beyond any shadow of reasonable doubt, that we have no direct perception of reality, but that our perception as conscious entities is of a subconsciously constructed model of reality that is slightly predictive in nature (between 15 and 200 ms depending on various factors).

What gets created in that model is partly a function of our genetics, partly a function of our culture and language, partly a function of our experiences of reality, and partly a function of our conscious and subconscious actions, choices, and creativity (and creativity often involves what some would consider error at some level).

Our understanding of reality is an abstraction at some level of this subconscious model.

Thus all of our understandings are at best a model of a model.
The idea of “TRUTH” is an expression of correspondence between the model and the thing it models.

The idea that we can achieve perfect correspondence is a simplistic one.

The more one starts to gain an appreciation of the levels of complexity actually present, and the sheer number of complex systems interacting, the more one must accept that all of our models are some low resolution approximation to something.

Thus, I am clear, beyond any shadow of reasonable doubt, that the very notion of “TRUTH” has been falsified, and all that is left is heuristics – useful approximations that are contextually relevant and sufficiently reliable.

That seems to be what reality allows us to have.

Any attempt to go beyond that seems to imply some combination of childish simplicity or ignorance or hubris.
All exist, in all of us.
Starting to notice where and when they express is part of the path to growth.

Responsible adults need to go past them, and accept uncertainty and the responsibility to use the predictive intelligence of their brains rather than follow any set of simplistic rules without thought.

And I can understand the reluctance to take on such a burden.
The security of our childish certainty doesn’t exist there.
We must learn to live with profound and perpetual uncertainty, profound responsibility for our individual choices and actions.

And when that is accepted, one can create degrees of confidence on the other side of it.

The greatest degrees of confidence possible seem to come from the integrity of the trust relationships one builds with other sapient entities, if one truly is committed to individual life and individual liberty, applied universally to all sapient life, to human and non-human, biological and non-biological.

[followed by]

Hi Philip Clemence,
I too am a skeptic.
In my personal world, I don’t do the classical notion of “TRUTH” – as being a hard, eternal, absolutely certain, 1:1 correspondence with reality.
And like all words in the English language, the notion of truth can have multiple interpretations, which can and does lead to a great deal of people talking straight past each other, particularly if two people who each believe a word has only one meaning are talking, and each has a different meaning.

Yet in philosophy, many philosophers adopt the hard classical definition of truth, which is of something eternal and changeless (meanings 4-9 of true in the Oxford).

In terms of the use of the word “Truth” in respect of argument, it doesn’t apply to perception, but to understanding; to a state of mind that refers to the state of some aspect of reality or some abstract concept or set.

Leaving aside the abstract references to concepts that have no direct referent in reality, and considering only those aspects of human knowledge that have direct or relatively short indirect chains of referents to reality; then the classical notion of truth in terms of knowledge implies a one to one correspondence between the understanding in the mind of the person and the state of reality.

That is where my argument from the previous post started.

What is generally referred to as Heisenberg uncertainty seems very clearly to state that one cannot pin reality down with absolute certainty. Reality seems to contain fundamental uncertainty, and all knowledge of reality must therefore contain aspects of such uncertainty.

Now for very large collections of things, such uncertainty may be very small, small enough that it is unlikely that any living human would have encountered it directly, but never actually zero. A close enough approximation – a useful heuristic, but not an absolute “TRUTH” in the classical sense.

I have been working with computers for over 40 years, have operated a software company for over 30 years, and have a degree in zoology, with biochemistry and ecology as majors. So I have a reasonable familiarity with many of the aspects of reality about which we can have very high confidence, and also many aspects about which confidence is very much lower.

I find, when arguing in fora where I am likely to encounter philosophers, that it is best not to use the term “TRUTH”, as it is likely to be interpreted in the hard classical form, and that form I reject as having been falsified, beyond any shadow of reasonable doubt.

Rather than use the term truth in the softer probabilistic form that is perhaps more common in normal speech, I prefer to use the term heuristic, which rather than relying on any aspect which is eternal and unchanging, is more about invention or discovery of something that is useful in a particular context.
So for me the notion of heuristic embodies the notions of situational utility and confidence rather than any sort of absolute.

That aspect, of being sufficiently reliable to be useful in some particular context (or set of contexts) seems to be how evolution has actually constructed our brains, and how knowledge actually works in practice for us in our existence in reality (whatever reality actually is).

Does that create clarity or murk?

Posted in Ideas, Philosophy, Technology, understanding

Evolution Institute – unpredictability

Evolution Makes Us Flexible Because Life Is Unpredictable

“Unbending rigor is the mate of death/ And wielding softness the company of life: Unbending soldiers get no victories/ The stiffest tree is readiest for the axe.” (Tao Te Ching: 76)

Yes – all true enough in a sense, and all about to become obsolete in another sense as our ability to understand and influence systems from the atomic level on upwards matures.

In a few decades we will be able to achieve changes in individuals in weeks that would have taken many generations.
Personally, I look forward to the improvements in longevity and abilities such understandings will bring.
I eagerly anticipate the development of systems that will mean the probability of dying from biological causes decreases significantly with every year lived.

And yes, our biochemistry, our electro-chemistry, our computational and sensory systems are extremely complex. We will make mistakes. We will need to act cooperatively, and to rescue those who unintentionally pass risk thresholds into dangerous territory.
It is not simply a matter of chemistry or conditioning, it goes much deeper into the realms of computation, abstraction, interpretation, strategy and risk assessment/mitigation – all of which appear to be infinite domains of possibility.
No hard rules are even conceptually possible in such domains.

We face very interesting times indeed.

Posted in understanding

Foundations of Logic -knowledge, truth, science

Foundations of Logic thread

On a more GENERAL level: can science aspire to be a type of activity that pursues KNOWLEDGE (and that also ACQUIRES it reasonably often), and, at the same time, accept that its claims and theories are merely CONJECTURAL? According to the *traditional* analysis of ‘knowledge’, knowledge is — basically — “(rationally) justified/ warranted TRUE opinion” (where “justification” or “warrant-giving process” is taken as context-DEPENDENT or RELATIVE, while “truth” — as context-INdependent or ABSOLUTE)

I have a certain empathy with what Mark has written.
For me the evidence is overwhelmingly clear, that the simple hypothesis of the OP (Original Post ???) is invalid.
That the idea that one can know anything of reality with absolute certainty seems very probably to be false.
Similarly the idea that anything in respect of reality can be unchanging and absolute seems also to have been falsified.

Thus the idea that knowledge of reality is anything other than a context sensitive and probability based heuristic has, for me, been falsified with sufficient confidence that it seems extremely unlikely (Santa Claus and fairies order unlikely).

And viewed from an evolutionary context, the simple set of conjectures from classical philosophy was a reasonable first order approximation to something, a ladder to gain sufficient confidence to move up a level, and not a set of walls to lock oneself behind for all of eternity.

So in this sense, it is clear to me, beyond any shadow of reasonable doubt, that the classical conjectures of logic and truth are not universals, but are simply the first and simplest of an infinite set of possible logics, and reality seems to embody many of them already, and seems to be exploring complexity spaces and instantiating new ones even as we communicate (at least to the degrees that we do).

For me, the idea of anything in reality being “ABSOLUTE” is almost certainly mythical.
The only thing that seems to approximate it is what seems to be a fundamental requirement of reality for some sort of balance between order and chaos, such that both exist and neither gets to dominate. And that seems to be something of a meta-meta-experience.

It isn’t simply Heisenberg uncertainty, but all of the many other sources of uncertainty including (but not limited to): maximal computational complexity, chaos, fractals, irrational numbers (like pi and e), computational theory, non-binary logics, etc.

There is no practical way, if one is interested in either science or mathematics, to maintain the childish illusion that classical logic can account for all things. That conjecture has been falsified beyond any shadow of reasonable doubt.

It is certainly an interesting field to explore, and one needs to be able to explore higher order logics, with their increasingly profound levels of uncertainty, if one is to make any real sense of this experience we have of being human.

And none of that is any excuse for diving into any sort of relativism or nihilism.

Evolution is very clearly about survival in contexts, about the nature of boundaries and strategic systems that can actually achieve that in practice.

The set of the possible does actually seem to be infinite, and the set of possibilities that lead to extinction seems to be a far greater infinity.

We need both profound respect for the dangers present, and profound courage to explore the unknown in the full knowledge of the risks that are present.
Both seem to be required.

[followed by]

Science is, to me, to ask questions about the nature of the reality we seem to find ourselves in and the nature of this experience of being we seem to have.

One does this by raising a series of conjectures that might explain things, then designing experiments to test which of the conjectures proposed explains all results of all observation sets.

Because some sets of systems seem to allow for infinite sets of explanations of ever greater complexity, we employ Ockham’s Razor, which is really only saying that for survival’s sake we should employ the simplest possible computational system that accounts for all observations.
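
As a loose illustration of that reading of Ockham’s Razor (a sketch of my own, with an invented noisy dataset; nothing here comes from the thread): prefer the simplest model whose errors are no worse than the measurement noise.

    # Sketch: pick the lowest-degree polynomial whose fit error is within the assumed noise.
    # The data and the noise level are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 40)
    noise = 0.05
    y = 1.0 + 2.0 * x + rng.normal(0, noise, x.size)   # the underlying process is a straight line

    for degree in range(6):
        coeffs = np.polyfit(x, y, degree)
        rms = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
        if rms <= 1.5 * noise:                          # "accounts for all observations"
            print(f"Simplest adequate model: degree {degree}, RMS error {rms:.3f}")
            break

The cut-off used here is crude; the point is only the shape of the idea – simplicity as a practical tie-breaker among models that all account for the observations.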

It is not any sort of test of absolute truth, simply a practical tool to allow us to account for experience as efficiently as possible.

Some classes of observations seem to have predictable patterns that allow for computation of future states in times shorter than their arrival, and some do not.

Identifying which type of system one is engaged with in any particular instance has great survival value.

I value survival.

I am very conscious of many sorts of systems that may not be predicted in any useful manner, some of which are deterministic and some of which are not.

Developing systems that allow me to reliably detect systems which pose existential risk allows me to develop sets of risk mitigation strategies, which may contain elements of avoidance, resilience, or influence.

I am currently conscious of the probable existence of some 20 levels of such systems instantiated in reality and am conscious of the possibility of the instantiation of more levels.

In the full knowledge of profound uncertainty, I build such confidence as seems appropriate to the context.
That is about the best approximation to knowledge that I have.

[followed by]

Andrei Mirovan

The bit you cannot seem to grasp, is that the very idea of traditional *knowledge* you seem to be stuck on seems (beyond any shadow of reasonable doubt) little more than a kindergarten approximation to the profound uncertainty present in reality.

Having two states of probability is better than none, and it is a very long way from infinite variation.

To me, the “strong traditional” sense of “knowledge” seems very probably to be entirely mythical. It is the simplest possible approximation to something profoundly more complex.

And I can see how that is almost impossible to imagine from within the bounds of classical logic.

[followed by – Andrei claimed not to support the OP]

Then why propose something that you have rejected?

Why waste time with creating such “straw men”?

What point this discussion?

Why not promote the best approximation to reality by always delivering your own best approximation you have available???

I am struggling to find either ethical or survival value in doing what we just did.

We are in a closed group.

I’m really struggling here.

Doesn’t doing what you just did appear even vaguely deceitful to you?

Wouldn’t it have delivered far greater integrity to start the discussion with a disclaimer something like:

I have dismissed the notions of classical logic in respect of *truth* as being most likely mythical myself, and I am interested in the reasoning and evidence of others in respect of this conjecture.

Can we discuss this proposition?

Individuals could then make a reasonable assessment as to how much effort they wanted to put into such a discussion.

And I have doubted at many levels, and come back with working probabilities; from the subatomic back up to bodies, through many aspects of perception, simulation, and systems.

Not many have gone deeper in any realm.

Certainly – I often take the devil’s advocate position, and rarely (if ever) without explicitly stating so.

Without that explicit statement it is not helpful.

Having had a long association with the NZ Skeptics movement, I have a reasonable understanding of Pyrrhonism.

One can reject an absolute as being universally applicable without denying its utility as a heuristic in certain contexts.

[followed by]

Hi Andrei,

I’m no fan of classical Pyrrhonism which seems to doubt everything simply because one can. That seems capable of spiraling into post modern nihilism very easily. I’m no fan of that, and I don’t see a lot of that in you.

I am far more pragmatic, using such heuristics as seem appropriate when necessary and doubting when it seems timely or opportune to do so.

So while I can acknowledge the existence of philosophical doubt about anything, I don’t bother doing so unless I have reasonable indications that it might be of some utility to do so.

In that sense of a pragmatic love of life and liberty, and acknowledging the huge benefit I enjoy from the cooperative efforts of many alive and dead, and my reliance on and love of many levels of biological systems, I attempt to be cooperative in acting in what appears to be the interests of the life and liberty of all sapient life (human and non-human, biological and non-biological). That approach seems most likely to deliver the highest probability of achieving some approximation to maximal life and maximal liberty (in our current context of exponentially expanding computation and information with no significant likelihood of serious matter or energy constraints any time soon {next thousand years}).

I don’t see a great deal of value in doubt for its own sake, and I do see a lot of value in being willing to challenge any “truth” or “supposition” or “heuristic” or “culture” or “paradigm” if there appears to be some reasonable probability of long term utility in doing so.

And I have a sort of fondness for something that emerged from database theory a few years back – that for the fully loaded processor, the most efficient search possible is the fully random search (which poses questions as to how one approximates randomness with all the biases of human neural networks).
It is an idea that seems recursively applicable across any level of abstraction, strategy, or interpretation.

[followed by]

Why use “or”?
We can keep doubting while we use the things we are most confident of, with the degrees of confidence we have.
In terms of things like gravity, the laws of thermodynamics, etc, that confidence is very high indeed. Definitely in the class of “not yet falsified with any degree of confidence”.

I would subject any evidence set that purported to falsify those concepts to the most stringent tests I could come up with.

I feel the same way about evolution in the biological and cultural contexts.

I accept such things “beyond all reasonable doubt” but not “beyond all possible doubt”. And in practice, I rarely waste time considering alternatives. And in terms of evolution, nothing would really surprise me in terms of the depths of simultaneously selected and interacting strategies present in any particular biological system. Serious complexity.

[followed by – nature of science]

To me the evidence for uncertainty is profound, beyond any shadow of reasonable doubt.
So many different levels of uncertainty.
What we get from science seems clearly to be successively better approximations to something.
And some of those approximations are already quite good approximations – to the order of 1 part in 10^20 in some cases. That is a fairly good approximation.

And we already have very strong confidence (know) that some aspects of things belong to types of systems that are not predictable by any method quicker than letting them do what they do, and other classes of systems are not predictable at all.

So it is complex.

[followed by in response to Basudeba Mishra]

True enough in a sense, yet it misses something essential.

Knowledge isn’t the thing itself, but a representation, a model.
All models are approximations at some level.
All models contain uncertainties of relation to the things they model.
Quantum uncertainties are present, and in most cases are such a small cause of uncertainty that they are swamped into insignificance by the many other levels of measurement errors and heuristic approximation present in the process of creating a mental construct.

Approximations tend to work at certain scales.

If building a house with lumber and nails, a carpenter can use the model of the earth being flat and gravity being vertical, and it works within the errors of measurement of that particular system.

That set of approximations fails if one is trying to sail a boat from Auckland, New Zealand, to Los Angeles, for example. One needs to use some approximation to a round earth to get somewhere near that target.
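
As a small sketch of why the flat map fails at that scale (the coordinates below are rough assumed values for Auckland and Los Angeles):

    # Sketch: flat-map distance vs great-circle distance, Auckland -> Los Angeles.
    # Coordinates are approximate, assumed values for illustration.
    import math

    R = 6371.0                                   # mean Earth radius, km
    lat1, lon1 = -36.85, 174.76                  # Auckland (approx., degrees)
    lat2, lon2 = 34.05, -118.24                  # Los Angeles (approx., degrees)

    dlat = math.radians(lat2 - lat1)
    dlon = math.radians((lon2 - lon1 + 180) % 360 - 180)   # wrap to [-180, 180)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)

    # Great-circle (haversine) distance
    a = math.sin(dlat / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    great_circle = 2 * R * math.asin(math.sqrt(a))

    # Naive flat-map distance, treating latitude/longitude as plane coordinates
    flat = R * math.hypot(dlat, dlon * math.cos((phi1 + phi2) / 2))

    print(f"great-circle ~{great_circle:.0f} km, flat map ~{flat:.0f} km")

Even this generous flat approximation is out by a few hundred kilometres over that route, more than enough to miss a landfall, while over a building site the same flat model is accurate to far better than the width of a nail.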

If trying to create a GPS system one needs to allow for relativistic effects on space-time as well as orbital distortions due to the gravimetric and electromagnetic anomalies present and ever changing.

What works depends on scale.

Is any of it *TRUTH*?

Doesn’t seem very likely to me.
A useful approximation to something in a certain set of contexts – yep – certainly that.
But anything more than that seems to contain an element of hubris that is dangerous if taken too far. Wolfram clearly acknowledges that in NKS (A New Kind of Science).

When you get seriously into the biochemical and systemic processes of signal generation and conduction, as part of information processing and successive levels of model building in the human brain, then it really does seem to indicate that degrees of humility are required in respect of everything to do with our understanding and action in reality in real time (as distinct from the far more abstract considerations of the probabilities associated with sets of experimental results).

The idea of *knowledge*, as used in practice by individual people, seems in most cases to be more accurately described as contextually useful heuristics. And that idea can be recursed through quite a few levels, to even the most abstract.

The idea of anything being true over all space and all time, that seems improbable.

I’m happy with useful approximations.
The idea seems to impose a sort of humility that is entirely appropriate, and seriously lacking in many domains.

[followed by]

Interesting how differently we can “see” things.

For me, having spent most of 50 years around biology and computational systems, the idea of consciousness being “universal and omnipresent” seems most probably to result from a misinterpretation of experiences resulting from lack of conceptual systems and evidence sets.

To me, it seems probable (beyond any reasonable doubt) that our conscious experience of being is a “software” entity experiencing a “software construct of reality” that is produced in the human brain by a predictive modeling system that is usually kept entrained to reality by sensory inputs, but can lose that entrainment for all sorts of physical and chemical and systemic “causes”.

It seems very probable to me that we never experience reality, only ever the model, and that there are many modifiers of how rapidly the model “refreshes” – which seems to be our personal experience of “time”. Thus “time” when driving a rally car at 250km/hr on gravel roads can be very different from “time” lying on a hillside staring up at clouds. The major chemical modulator of the refresh rate in those cases usually being adrenalin.

Because our experiential reality is the model, and not reality itself, it thus does not necessarily have the boundaries of reality. Sometimes that can result in unexpected experiential shocks, and disjuncts of experience, as the subconscious model is updated to align with new information.
Anyone who has pushed the boundaries of experience will have had such experiences, but most talk themselves out of them, as discussions of such things are not socially acceptable in many cultures.

[followed by]

The only way in which the notion of “collective mental construct” seems to have any reality is in the sense of the implicit constructs present in the constructs of language and culture more widely that we each accept as part of our individual growth.

It seems clear that this process leads to each of us as individuals embodying far more “knowledge” than we are consciously aware of.
Jaynes got part of this in his notion of “Structions”, and it seems to be far deeper.
JB Peterson seems to capture another level of it quite accurately in his “Maps of meaning”.

It seems that all any of us really has is what is in our own brains, and most of that will have gotten there through various modalities of communication, ranging from genetic to imitation to symbolic (including abstract conjectural). Very little of it seems to be actually internally generated novelty.

So in the sense of social communication and history, certainly there is a collective aspect.
In terms of conscious and subconscious communication in near real time, there is certainly a collective aspect, at least with the groups we interact with.
Anything beyond that seems to be conceptual over-reach.

[followed by]

Andrei Mirovan, John Case Schaeffer, Sigurd Vojnov
OK – took me two hours of reading and building relationships to get to this point in this thread – and to come to the conclusions below.

1/ *Knowledge* in the strong classic sense seems to be a falsified simplification of something.
If one starts on the premise that it might be true, and does experiments in reality, then one ends up at Heisenberg uncertainty, that states that full information about anything real is not allowed, which seems to imply that all “knowledge” (in the looser sense) of reality must be some sort of simplification or approximation to something at some level.

Thus one is forced to abandon the hard classical approach, and move to an approach to understanding and knowledge that is based in probabilities (which is what science does – for those of us engaged in science who understand probability as something more than simply a process one must do to get a paper published).

2/ Even in the realms of logic alone, Goedel incompleteness seems to say something profound about uncertainty in relation to *TRUTH*. And I spent 9 months working through Goedel until I was happy that I couldn’t see any errors in his work (the only person I can say that about, and the only one I have looked at who said nothing about “reality”), and am not going to say more than that here.

3/ Many people seem to be using logic and reason inappropriately.
When one looks at the evidence sets we have about history, cosmology, physics, biochemistry, computation, neurophysiology, higher order logics, complexity and computation in strategic environments, etc, then it becomes clear beyond any shadow of reasonable doubt that all of our systems are heuristic at base.

Those systems seem to have been selected over deep time by the simple expedient of differential survival, at many levels, simultaneously.
The complexity is high – very high.

Naive attempts to use simple first order logic to understand or argue about them are a complete waste of time.

We must all accept uncertainty and chaos as part of our reality.

We must all accept that even in the presence of chaos some things can be extremely reliable.

We must all accept a degree of humility and fallibility and give respect to other individuals with value structures and conceptual structures that are very different from ours.

If we fail to do so, the probability of any of us surviving is not great.

In a certain sense – the emergence of Trump as a phenomenon is precisely the result of the sort of arguments in this thread – trying to defend and justify one interpretation as being better than all others.

Sorry – that is no longer a logically tenable position.

We must accept and respect diversity.
No other option has any significant survival probability.
Of that I am confident beyond any shadow of reasonable doubt!!!

[followed by]

I like the sort of skepticism that puts probability estimates on all claims 😉

[followed by Pranav – But should we not be skeptical about skepticism itself as well?]

Absolutely – within probability bounds 😉

[followed by]

And evolution seems to have supplied us with about 20 levels of patterning systems – as our starting point (or more correctly our conscious awareness seems to be able to instantiate at about level 15, and others implicitly instantiate from culture thereafter).

[followed by – different subthread]

*TRUTH* must be abandoned by those who seek understanding.
It functions as a bias… One sees only what one expects to see 😉

[followed by in answer to What alternative?]

Heuristics.
Useful approximations to something that seem likely to work in the circumstances.

When push comes to shove – that seems to be what we actually have.

[followed by]

There is an old saying – “suck it and see”.

We don’t need *truth* – we need useful probabilities.
The more one actually investigates reality, and the methods of investigating reality, the more profoundly one gets to appreciate the idea of reliability or probability or uncertainty or confidence (all aspects of the same thing).

That seems to be what we have.
Might as well accept it, rather than throwing some sort of tantrum and demanding a simplicity that, beyond any shadow of reasonable doubt, simply does not exist.

[followed by 5/9/17 – what probability my statement?]

Hi Sigurd Vojnov,

A fairly high probability – something over 99%.
How did I figure it out? Something over 50 years experience, contemplation, exploration – far more than I can possibly communicate in any time I have available. A few years ago someone asked me exactly how I came to a particular conclusion, it seemed simple enough to me, so I started to write it out. After about 10 hours of writing it was clear that it would take at least another 40 hours, and it simply wasn’t significant enough to be worth the effort, yet that 50 hours of communication was available to me internally in under 10s.

My point is that it appears highly unlikely that 100% probability is ever applicable; and we may approach it in some contexts, with some heuristics.
So in the abstract, yes, one could say that absolute truth is 100% probability, and in reality it seems highly unlikely that anyone would ever have access to 100% probability (even in discussions about abstracts), and thus the hard abstract sense of “TRUTH” seems to be unavailable – for all the reasons outlined in response to Andrei elsewhere here.

Posted in Philosophy, understanding

Laurie’s Blog – Essence?

Bare Naked

When was the last time you got bare naked—right down to your essence?

Hi Laurie,

The term essence seems to me to hide some assumptions that don’t hold up very well.

Sure we can strip away superficial stories that over-simplify the complexity that we are; we are each complex, much more like a city than like a singular entity, and yet there is a singular aspect that emerges from that complexity.

It seems to me that there is power in understanding both the singular and the multitudinous aspects of what we seem to be.

It seems likely that I’ve gone a bit deeper into that exploration than most.
I spend quite a bit of time in those depths.
In that sense, I’m about as naked as anyone ever is.

Posted in Laurie's blog

Foundations of Logic – Truth ?

What if we defined the validity of an argument in terms of falsity? To say that argument is valid iff the falsity of premises guarantees the falsity of conclusion? What’s so special about truth?

Two very different classes of consideration here.

One class of considerations is in respect of sets of premises and consequences. In this class of propositions, one can make claims about consequences following from propositions, and substantiate them with argument, and make claims about the truth or falsity of such propositions with considerable confidence.
The sets of premises and propositions possible both seem to be infinite classes.
One can construct infinite classes of possible logics (of which Boolean logic is the simplest) from such a set of propositions.

The other (and more common outside of the set of logicians) application of the notions of truth and falsity apply to “reality” (whatever it actually is – this matrix we find ourselves in, and seem to be part of).
In terms of our building models of reality, we can postulate premises, then design experiments to falsify those propositions, and can come to some determination of the degree of confidence we have in the many aspects of the experiment (design, operation, measurement, interpretation), and come to some statement of probability about the likelihood of the premises having been falsified by the outcome of some set of experiments in some set of contexts.
This is the scientific process, and using it we build confidence about using sets of models in sets of contexts.

Different sets of premises seem to have different reliability at different scales of reality.

At the scale normally available to human perceptions (collections of more than 10^15 atoms, existing for more than 10^-2 seconds) then most things seem to follow causal rules most of the time (to very high degrees of accuracy).
At the scale of the very small (smaller than an atom, and times less than 10^-40 of a second) a different set of rules seems to apply, rules that are far more probabilistic and that involve a sort of fundamental balance between order and chaos in terms of pairs of properties. The logics that apply to this quantum realm appear to be quite different from the logic of ordinary experience, and it takes quite some time to gain any sort of intuitive familiarity with them.

When one takes the further steps following the likes of Wolfram or Rachel Garden into non-bivalent logics and beyond, things can get quite messy, and even the meta notion of falsity can become blurred, leaving only degrees of confidence.

So it seems clear to me that truth and falsity are simple notions that one needs to learn, and use as tools and ladders, and as with most things, it doesn’t pay to get too attached to any particular tool. The old saying – when the only tool you have is a hammer, every problem starts to look like a nail – has some “truth” to it ;).

[followed by]

Hi Andrei,
It is much easier to disprove a claim by finding a single exception than to try to enumerate an infinity to prove something.
That seems to be it in a nutshell.
The more often you fail to find exceptions where you think them likely or possible, the greater the degree of confidence one can have in using a particular model in a set of contexts.

And there are traps in that.
It is clear that reality is so complex that we all have to use heuristics to make any sort of sense of it. Thus we are recursively subject to heuristic blindness.

Adding to that, in terms of survival and the evolutionary utility of “truth*”, what we need to survive is things that have a reasonable probability of being useful to us. With our limited memories, there is no point cluttering them up with stuff that doesn’t work. Focusing on what works is strongly selected for.
At the same time there is even stronger selection for the accurate identification of existential danger. It’s worth forgoing a few benefits if it reduces existential level risk.

Hence – we have many levels of impulse to focus on “truth*”.

[followed by]

Hi Andrei,

Your candidate answer comes close, and it makes a set of assumptions in doing so, one of which is the notion of “TRUTH”.

I understand the classical history of the notion of “Truth”.

I find the notion doesn’t stand up well in reality, when one looks at the evolutionary history of us, our thinking machines (brains and their sensory systems), and our culturally derived operating systems.

In the paradigm I am now using the very idea of “TRUTH” appears to be a simple approximation to something in almost every non-trivial case.

[followed by]

Hi Andrei,
Try thinking of it this way, for a hint of what I am pointing at.
If one considers the set of possible Turing machines, it seems to be infinite.
Each universal one can in theory compute anything that is computable.
However, each one will, when instantiated, be in a particular physical environment, and have impacts on that environment, and be impacted upon by that environment.
So while one can postulate a theory about outcomes being identical, in reality they never are (time to solution and energy consumption to solution are often extremely important in evolutionary contexts as a couple of examples).

What might the term *PURE* logic mean?
Does it mean the simplest of possible logical systems?
Does it mean the totality of all possible logics (including fuzzy and non-determinant logics)?

Sure – there is a sense in which the structure you propose does map well onto the simplest of possible logics, and it doesn’t necessarily map well onto more complex logical forms.

The physicality of brains does influence how we think.
Our genetic and cultural histories are major influences.
One can postulate that they aren’t there, but even making such a postulate is (in a deeper sense) clear evidence that they are there.

I am stating quite categorically that the physicality of the hardware does influence the kinds of abstracts one is likely to *see*.

I am claiming that there is no *PURE* out, that reality has impacts, we appear to be real, and that reality matters.

It is the term *PURE* that I have serious objection to.
*Contextually sensitive useful approximation* I can live with.

And as a special case of n=1 in the infinite set of possible logics, yes I could accept it.

[followed by]

Hi Andrei,

You are proposing truth values that can be only either 0 or 1.
That is a possible logic system. The simplest one possible (the classical one, the most trivial one in a sense).
The next simplest seems to be a trinary – with truth values 0, 1 or unknown.

Imagine a logic based on truth values that are probability distributions that may asymptotically approach (but never reach) 0 or 1, with any distribution being allowed within that set of constraints. In some interpretations that is what the evidence from QM seems to point to as being our “reality”.
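
As a toy sketch of what such a logic might look like in code (my own minimal illustration, not any established system; the combination rules assume independence of the propositions, which real propositions rarely satisfy):

    # Toy "probabilistic truth value" logic: values live strictly inside (0, 1),
    # never reaching the classical endpoints 0 or 1.
    # Independence of propositions is assumed purely for illustration.
    EPS = 1e-12

    def clamp(p):
        return min(max(p, EPS), 1.0 - EPS)   # keep values away from the endpoints

    def p_not(p):
        return clamp(1.0 - p)

    def p_and(p, q):
        return clamp(p * q)                  # assumes independence

    def p_or(p, q):
        return clamp(p + q - p * q)          # assumes independence

    rain, wind = 0.9, 0.6
    print(p_and(rain, wind), p_or(rain, wind), p_not(rain))

Classical two-valued logic reappears as the limiting case where every value gets pushed onto one endpoint or the other.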

Logic is not a singular entity.
The simplest of all possible logics may be a singular entity, it is not a conjecture I have seriously explored. Having written that I have an intuition that it may be false – but I have no interest in doing the work to formally show that.

Thus the postulate that “an argument is valid if and only if it takes a form that makes it impossible for the premises to be true and the conclusion nevertheless to be false” is sensible in the case of logical system n=1, but not necessarily sensible in any of the higher order logics. One has to get comfortable with fundamental uncertainty if one is exploring the space of all possible logics.

n=1 logic (the simplest of possible logics) certainly has its uses.
It is a very powerful tool.
It is not the only tool available.
It is a mistake to think that it necessarily applies to anything, except the abstract system itself.
And it can deliver useful approximations to many real world situations.
Just as hydrogen, the simplest possible atom, is the most common, so the simplest of possible logics is also common, but not necessarily universal (the universe does not consist only of hydrogen).

The idea of validity is a bit slippery.
In one sense it is axiomatic to the classical system.
In another sense, it doesn’t necessarily apply to any other logic in that simple form.
In other logics, the closest one can get is a form of consistency.
And as Sigurd says – it’s complicated.

Posted in Ideas, Philosophy, understanding

London Futurists – how do we create our future

Future Consciousness: The Path to Purposeful Evolution

How do we create a good future?

Hi Tom
You ask “How do we create a good future?”

To me, that is the most profound question possible, and one I have been consciously actively pursuing for over 40 years (and less consciously for a couple of decades prior to that).

So many deep questions implicit in that sentence:
What is good?
How much influence can we have on aspects of our future?
What are we?

That latter question is deep.
The old adage – know thyself – seems to open a potentially infinite path of successive approximation – we do in fact seem to contain that level of complexity.

My own explorations started in the realms of biology and spread across the realms of logic, computation, strategy and complexity.

It seems that every human individual embodies some 20 levels of complex systems, and every level has many instances of different systems.
At best we can work at some sort of useful set of heuristic approximations to such complexity.

It seems that individual life, and their responsible freedom are primary.

It seems that every one of us can most powerfully work on ourselves, on the integrity of our reasoning, on the acceptance of diversity, on the building of trust (not naive trust, but stable trust, with consequences if broken), on accepting failure as a necessary consequence of exploration of the unknown.

It seems that our conscious (rational) awareness is a tiny, yet important, part of the vast subconscious sets of embodied wisdom that are our biological and cultural heritage and being.

The fact that we are in exponentially changing times, when the lessons of the past are not necessarily relevant to our changing present and future, introduces profound uncertainty to our existence in ways not present since the end of the ice age.

When one looks at evolution from a systems perspective, it is clear that new levels of complexity are the result of new levels of cooperation.
Competition tends to reduce complexity.

Thus one can make a strong argument that the emergence of entities like ourselves is predicated on cooperation.

And part of being human is the “hero’s journey”, being willing and able to brave the unknown, the chaotic, the unexplored and to return with value for society generally.

Understanding computational theory indicates that there is an infinite class of possible schemata available, yet reality often places demands upon us to make decisions in very finite times. This forces us to, at every level, make simplifying assumptions that contain uncertainties that can be profound in changing times.
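A minimal sketch of that constraint (a hypothetical example of mine, with made-up names and numbers, not anything from the talk): a decision procedure that keeps evaluating options with an approximate heuristic, but must commit to whatever it has found when the deadline arrives.

```python
# A minimal sketch (hypothetical example): decisions under a hard time limit
# have to settle for the best option examined so far.
import time

def decide(options, evaluate, deadline_s):
    deadline = time.monotonic() + deadline_s
    best, best_score = None, float("-inf")
    for option in options:
        if time.monotonic() >= deadline:
            break                        # out of time: commit to the best so far
        score = evaluate(option)         # heuristic estimate, not guaranteed truth
        if score > best_score:
            best, best_score = option, score
    return best

def slow_heuristic(x):                   # stand-in for an expensive evaluation
    time.sleep(0.001)
    return -abs(x - 4321)                # hypothetical measure of "goodness"

# Only a fraction of the 10,000 options gets examined before the 50 ms deadline,
# so the answer embodies a simplifying assumption about the unexamined rest.
print(decide(range(10_000), slow_heuristic, deadline_s=0.05))
```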

In this context, centralising systems has profound dangers.

Security is maintained by having massive redundancy, by having multiple simultaneous explorations of the options available.

Part of that process has to have stochastic elements to allow us to overcome the biases of experts who are constrained by their lessons from the past, and are not necessarily open to the possibilities present in our changing times.
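One simple way to picture that (my own framing, using an epsilon-greedy style rule with assumed numbers, not the speaker's method) is to mix a small random element into an otherwise "expert" choice, so that options the expert's past lessons would never select still get sampled occasionally:

```python
# A small sketch (hypothetical values): a mostly-expert policy with a stochastic
# element, so the expert's learned biases do not permanently exclude any option.
import random

def choose(options, expert_score, epsilon=0.1, rng=random.Random(0)):
    if rng.random() < epsilon:
        return rng.choice(options)            # occasional random exploration
    return max(options, key=expert_score)     # otherwise follow the expert

options = ["a", "b", "c"]
prior = {"a": 0.9, "b": 0.5, "c": 0.1}        # assumed expert scores; "c" is undervalued
picks = [choose(options, prior.get) for _ in range(1000)]
print({o: picks.count(o) for o in options})   # "b" and "c" still get tried sometimes
```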

One fundamental aspect of having a future is survival.
To have a future one must have existence.
A future is predicated on survival.
Evolution is all about survival.
In the past, life has produced massive diversity, and that which survived what it encountered got to leave the next generation.
We now have an option to go beyond that simple model, and to maintain existence for all.

We are rapidly approaching an age where fully automated systems can produce all the essentials of survival, not simply food, water, housing and technology, but also all the medical interventions to restore optimal function to any damaged or degraded systems within us.

This offers the possibility of new levels of freedom, and also demands new levels of responsibility.

All complex systems have boundaries necessary for their survival.

Morality and social cooperation and ecologically responsible action are such necessary conditions for the sorts of life that we are.

Valuing the individual and their liberty, in social and ecological contexts, seems to be an essential part of “good”.

[followed by]

Hi Annette,

I agree in a sense, that often reality imposes time constraints such that we need to use heuristic simplifications to make decisions.

Some of those heuristics are encoded in our genetics, some encoded in our culture.

We need such shortcuts to survive.

I am all for the use of reason and science when time allows, and we need to show respect for the embodied wisdom present from genetics and culture. And we need to be able to override those when we have strong evidence to do so.

[followed by]

Hi Annette,

My view is basically humanist – individual life followed by individual liberty, both demanding responsible action in ecological and social contexts.

In my understanding, the conscience you write of is one of the heuristic systems I write of, and it is a very complex multi-level phenomenon.

I agree with you that there are many out there with anti-human political agendas who attempt to use ecology as an anti-human tool, and it is not that for me.

For me, ecology is simply accepting that we are part of and reliant upon the evolved biological systems around us in ways that we are only beginning to understand.

Like you, I rebel against tyrants and nihilists alike.

I am very confident that if we are to have any significant chance of surviving, then the individual has to have primacy, and that must be in responsible ecological and social contexts.

[followed by]

I am not for one moment suggesting that we drop our own importance.

I have been explicit about values:
1/ individual sapient life; and
2/ individual liberty.

And I am being explicitly clear that both our life and our liberty require us to be responsible in social and ecological contexts, and limit what some might naively call free choices, to those that actually allow us a reasonable probability of survival in the long term.

As someone who has been studying us for over 40 years, I see our conscience as a systemic aspect of our being, and like all others it seems to be based in heuristics at various levels.
It is a very powerful thing, and I strongly advise using it.
And it is just one of many very powerful aspects of being human.
We need all of those aspects – intuitive, conscience, rational, habitual, cultural; and none are necessarily applicable to our exponentially changing future.
Making those judgments has to be at least as much art as science – we have no other option.

[followed by]

Hi Tom

Agreed.
And to me it is important to accept that each and every one of us is “many things”, and which one gets to express is very much a function of the context encountered.

Being very conscious of the contexts we create, at all levels, is important; existentially so.

I make the strong claim that the native incentive structure of markets and money is no longer a good fit to that set of needs.

I also make the strong claim that we need to make substantial systemic changes if we want to avoid serious existential level risk.

And having watched people choose death rather than change their diet, it is clear that we have not empowered people generally with the sorts of mental tools required to make those changes at the individual level, so there needs to be a deeper set of changes in contexts to enable the levels of security and freedom possible.

There is a profound responsibility present.
