Max Tegmark’s Book – Life 3.0 – bought it this morning – Updated 21 Sept 2017

Will AI enable the third stage of life?

In his new book Life 3.0: Being Human in the Age of Artificial Intelligence, MIT physicist and AI researcher Max Tegmark explores the future of technology, life, and intelligence.

In summary, we can divide the development of life into three stages, distinguished by life’s ability to design itself:
• Life 1.0 (biological stage): evolves its hardware and software
• Life 2.0 (cultural stage): evolves its hardware, designs much of its software
• Life 3.0 (technological stage): designs its hardware and software

[Review starts about 2 screens down]

Agree with Max, and with the idea of fuzzy boundaries across multiple aspects of very complex domain spaces.

So one could say that Life 2.0 got under way seriously with the development of abstract language and the design of stories, which probably occurred somewhere between 8,000 and 5,000 years ago, possibly a bit earlier.

And it could be argued that the invention of writing was the start of evolving our hardware as an adjunct to the transmission of information. It was a very slow start: thousands of years passed before the invention of the printing press, then hundreds more before the telegraph, and now we have digital storage and transmission as well as computation.

And all three forms continue in all domains, so it is a very complex and increasingly dimensional information landscape. That is particularly so when one factors in the many aspects of strategy and risk mitigation, the influence of those over the deep time of genetic and cultural evolution on our current dominant cultural, technological, behavioural and conceptual phenotypes, and how those instantiate in and influence each of us individually.

In the deepest sense of risk mitigation, and acknowledging all the many real risks involved in AGI, it still seems that AGI is the most effective risk mitigation strategy available, when all forms of risk are factored in. And that statement is based on the assumption that we very quickly recognise the risks posed by reliance on markets and money, and the risks posed by the twin tyrannies (of the majority and the minority) across all domains; and rapidly instantiate global level cooperative strategies that deliver the reasonable needs of survival and freedom to every individual human – no exceptions.

Without that sort of demonstration in reality of our respect for sapient life, we are all at serious risk.

[I bought the book and have read it – this critique was completed 21 Sept 2017]

I like Max’s style, and respect and align with many aspects of his thinking, but there are some significant failings and omissions. It is one of the few books I have paid money for in recent times, so it is something I value in many different senses: money, time, intellectual breadth, etc.

As a book review:

The introduction is interesting, but fails to account for the effect of distributed manufacturing, and the ability of such independence from any sort of trading system to dismantle the very concept of markets and money.

While I have some substantial criticisms of some of the ideas, I am very much aligned with the general trajectory of Max’s thinking and work. Well Done!!!

Chapter 1

Max does a reasonable job of defining life as information, but I would take it explicitly deeper.
I would say that life is not simply the ability to replicate, but the ability to do so with variation.
The error rate in replication is critical: too high or too low and nothing much happens.
Similarly if one goes back to the systems that allow matter itself, the level of quantum uncertainty (the error rate if you will) is a critical factor in the emergence of complexity and life.
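To make that concrete, here is a toy sketch of my own (not from the book, and with arbitrary numbers): evolving bit-strings toward a target shows that adaptation needs an intermediate error rate; too little variation and nothing moves, too much and any gains are scrambled.

```python
# Toy sketch (my own, not from the book): evolve bit-strings toward a target
# under different mutation rates. Too little variation and nothing improves;
# too much and any gains are scrambled; an intermediate rate adapts.
import random

TARGET = [1] * 40  # an arbitrary "well adapted" genome

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(mutation_rate, generations=200, pop_size=50):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # keep the fitter half as parents
        parents = pop[: pop_size // 2]
        pop = [[(1 - g) if random.random() < mutation_rate else g
                for g in random.choice(parents)]
               for _ in range(pop_size)]
    return max(fitness(g) for g in pop)

for rate in (0.0, 0.01, 0.5):
    print(f"mutation rate {rate}: best fitness {evolve(rate)} / {len(TARGET)}")
# Typically: 0.0 stalls near its random starting point, 0.5 never holds on to
# anything, and only the intermediate rate reliably approaches the target.
```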

Max skips the role of cooperation in the emergence of complexity. That is a serious failing.

I would argue that Max also glosses over the role of evolution in the emergence of our operating systems, and the various levels of incentive to action and incentive to willfulness contained therein, and overestimates the degree of choice involved in the actual action of most people.

And yes, I agree with him that a distinction based on design does emerge, that it is gradual, and that it is shared with older systems at the same level (recursing through as many levels as are actually instantiated in any specific individual).

There do in fact appear to be non-trivial degrees of complexity present in the interplay between evolution and choice even at the highest levels of awareness.

Certainly there is a much clearer separation in the degree to which operating algorithms can be instantiated and modified within the lifetime of a single individual entity. And even in that aspect there does seem to be considerable fuzziness.

Thus the substantive difference is not necessarily the ability to design its software, but the ability to instantiate new, different and potentially novel software that is more appropriate to survival in current contexts.

The claim “Your synapses store all your knowledge and skills” seems rather too strong, though certainly the synapses can store about as much information as Max claims.
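A rough back-of-envelope of my own (order-of-magnitude figures only, not Max’s numbers) shows why the capacity part is at least plausible:

```python
# Rough back-of-envelope with commonly cited order-of-magnitude figures (my
# assumptions, not Max's): synapse count times a crude per-synapse storage guess.
synapses = 1e14           # ~10^11 neurons x ~10^3 synapses each
bytes_per_synapse = 1     # assume on the order of a byte of usable state per synapse
capacity_tb = synapses * bytes_per_synapse / 1e12
print(f"~{capacity_tb:.0f} TB")   # ~100 TB, i.e. tens to hundreds of terabytes
```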

Again, the claim “enabling us to communicate through sophisticated spoken language, we ensured that the most useful information stored in one person’s brain could get copied to other brains” seems too strong, though it is certainly true in some instances.

In other instances it seems clear that even decades is too short a time to accurately write out all the new and useful information that a really active modern brain can instantiate.
Thus the amount of information within some brains is always likely to vastly exceed the amount that is actually communicated to others.

Max goes on to make a series of claims that I find outrageously false: “None can live for a million years, memorize all of Wikipedia, understand all known science or enjoy spaceflight without a spacecraft. None can transform our largely lifeless cosmos into a diverse biosphere that will flourish for billions or trillions of years,…”. Those have in fact been my clearly stated intentions for over 42 years, since October 1974, when the logic of the possibility of indefinite life extension instantiated in my brain beyond all reasonable doubt.

I may not yet have dotted all the i’s or crossed all the t’s in the process, and it is substantially closer than it was 42 years ago. I may not make it, but I might just manage to be part of the process that does instantiate those things, and stick around long enough to see plate tectonics transform the face of our planet, and conscious sapient life (human and non-human, biological and non-biological) spread across all accessible galaxies.

Again – I claim Max misses too much when he defines Life 2.0 as designing its software. Life 2.0 is about the ability to instantiate new software independent of the biological evolution of the bodies that instantiate it. The impact of evolution in the depths of that process must not be underestimated. To claim that even a substantial portion of it has been designed seems to me to be hubristic delusion.

And I do acknowledge where Max is going with his main theme, and I make the strong claim that he vastly underestimates the complexity present and the continuing importance of evolution at ever more abstract levels. And understanding evolution in this sense means to be able to see the fundamental role of cooperation in the emergence of all new levels of complexity, and the need for attendant stabilising strategies to detect and remove strategies that cheat on the cooperative.

I agree that AGI is close, no argument there.
I disagree that many AI researchers have any real understanding of the degrees of complexity of the relationships between the systems instantiated by evolution that actually allow us to survive.

The depth of the influence of deeply evolved and interconnected systems on our survival probabilities is a matter of substantial disagreement across communities of understanding.

Certainly we have instantiated some amazing technologies, and many would contend that we have substantially added to the existential level risk in doing so.

I am not in any sense a fan of returning to any sort of mythic simplicity.

I am very much a fan of understanding the degrees of complexity present, and the many ways that evolution has found to reduce risk in practice, and of instantiating as many independent such systems as we can to mitigate existential level risk at the same time as we value individual life and individual liberty.

It is a very complex and highly dimensional probability space that we find ourselves in.

There are not, nor can there be, any sort of hard deterministic solutions to the problems (the logic of that is clear beyond any shadow of reasonable doubt).
And there is a great deal we can do to reduce that existential risk, and we need to start doing it very soon.

There is not, nor can there be, any singular “What will happen, and what will this mean for us?”

That level of confidence instantiates one of the greatest levels of risk to freedom – the twin tyrannies (minority or majority).

Misconceptions

Max asks a series of questions:

What sort of future do you want?
One where all sapient individuals have minimal survival risk and maximal freedom (including reasonable empowerment with the tools to explore and instantiate that freedom in whatever way they responsibly choose).

Should we develop lethal autonomous weapons?
No. Not compatible with minimal risks to sapient life.

What would you like to happen with job automation?
Ensure that everyone has access to the products of such automation.

What career advice would you give today’s kids?
Explore everything, yourself and your values highest among them. Question everything. Trust, and be alert for cheating strategies (all levels).

Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth?
I want freedom, that doesn’t mean leisure necessarily, and it does mean not having to work simply to survive. I want to choose what values I give my time to. I want the tools to do whatever I responsibly choose.

Further down the road, would you like us to create Life 3.0 and spread it through our cosmos?
I want to travel the cosmos myself, in time, as the technology is fully tested. I would expect to travel in a community that included a range of intelligences, from human to AGI, across a substantial spectrum, that might include some fully artificial biologically sapient organisms and a range of cyborgs.

Will we control intelligent machines or will they control us?
In a society that respects sapience and freedom, there will be control only in the case of immediate existential threat to another. In all other cases it will be a matter for negotiation and agreement. I expect to have biological and non-biological friends, and quite a few who don’t fit neatly into either camp.

Will intelligent machines replace us, coexist with us or merge with us?
Coexist and merge is my plan. Nothing else seems to offer significant survival probabilities.

What will it mean to be human in the age of artificial intelligence?
It will mean whatever we choose to make it mean. Meaning is in story. We can write our own stories.

What would you like it to mean, and how can we make the future be that way?
I want it to be an age of security through cooperation. And we need to start by recognising that markets have passed their peak of systemic utility and are now heading steeply into severe existential risk territory. We need a far more cooperative base to our society. Implementing a Universal Basic Income seems to be the best transition strategy available in the short to medium term.

Max raises the notion of sapience (intelligence) vs sentience (feeling). My vote is for sapience. I can see sentient entities that would have not the slightest hesitation in killing me (bears, tigers etc). That sort of risk I cannot tolerate. Sapience at least allows of the possibility of acknowledging the right of all other sapient entities to exist, and for all such entities to benefit from that awareness. In a very real sense, that is a minimum definition of sapience.

Cheat sheet
Life – the definition is insufficient, though it is close to something.

Control is too hard a term; short of extermination, the best we have is influence.

Chapter 2

I can’t choose between the two options in the Winograd schema example on Wikipedia: https://en.wikipedia.org/wiki/Winograd_Schema_Challenge
Nor could my wife – either worked for both of us, and both of us knew which applied to what in either case.
I don’t think I’m a machine 😉

Substrate independence for programs – yes in a sense, and also a major caution.
Just because a program can run on any substrate does not mean that the effect of a program is the same on any substrate.
A program that takes 30 seconds to compute an “avoid threat” response will be destroyed by many threats that manifest in less than 30 seconds.
A program that executes the same response in 30ms is likely to survive much more frequently in reality.
How fast a program executes on any given substrate and how much energy it requires to do so are very important aspects in reality.
We may be able to create human level AGI quite soon, but it will likely take something approaching the energy of a small city to power it.
Getting that power down to 50 watts may take a while.
So yes, substrate independence; but there are other very important factors present at the same time. To get a reasonable picture one must be conscious of all of the important influences in the context.
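A minimal sketch of that latency point (my own illustration, with made-up threat timings):

```python
# Minimal sketch of the latency point above (my own illustration):
# the same "avoid threat" program, run on substrates with different
# response times, survives very different fractions of threats.
import random

def survival_rate(response_time_s, trials=10_000):
    survived = 0
    for _ in range(trials):
        # assume threats whose onset-to-impact times are spread from 10 ms to 60 s
        time_to_impact = random.uniform(0.01, 60.0)
        if response_time_s < time_to_impact:
            survived += 1
    return survived / trials

for substrate, latency in [("30 s substrate", 30.0), ("30 ms substrate", 0.03)]:
    print(f"{substrate}: survives ~{survival_rate(latency):.0%} of threats")
# Same program, same logic; roughly half the threats kill the slow substrate,
# while the fast one survives essentially all of them.
```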

Chapter 3

Max asks four questions:

1/ How can we make future AI systems more robust than today’s, so that they do what we want without crashing, malfunctioning or getting hacked?
By getting them to create models of the world, with objects with properties, and teaching them about relationships, trust and strategy and freedom and respect.

2/ How can we update our legal systems to be more fair and efficient and to keep pace with the rapidly changing digital landscape?
By making them fundamentally based in respect for individual life and individual liberty, acknowledging that as individuals we must be socially cooperative entities, and that requires reasonable responsibility in social and ecological contexts. Thus moving from rule based systems to value based systems, with incentives and disincentives that are proportional to impacts.

3/ How can we make weapons smarter and less prone to killing innocent civilians without triggering an out-of-control arms race in lethal autonomous weapons?
We can eliminate the need for weapons, by ensuring that every individual experiences security. Oddly best done by empowering everyone with the ability to respond strongly, while at the same time placing many safeguards in place to reduce the probability of response in error.

4/ How can we grow our prosperity through automation without leaving people lacking income or purpose?
By transitioning away from market-based thinking; initiating that process by instantiating a universal basic income, allowing the development of systems that do not require exchange.

In the sense of moving to be proactive, it must be clear to anyone who looks seriously at the incentive structure of market-based values that market values fail to approximate human values in the presence of fully automated systems.

If we don’t get proactive in this domain very soon – we are all in very deep existential risk territory.

When you look at the evolution of complexity from a systems perspective, new levels of complexity always come out of cooperative systems, and cooperative systems require secondary attendant strategies to prevent invasion and takeover by cheating strategies (and the vast bulk of the finance and banking sectors can now be accurately characterised as cheating strategies on the human cooperative – consuming vast resource for no real output in terms of survival value).

Bugs and verification

Sure verification and testing help, and perfection is not an option.
Even 20 years ago I had developed systems that would have taken thousands of years to test across all possible variations.
Complex systems have that very uncomfortable attribute of being fundamentally uncertain.
There is no 100% cure for that, even in theory.
And sure, we can develop ever better testing systems, and that is a very good idea, and one cannot eliminate uncertainty from life (except by dying – and personally I’d rather not try that approach).

Again – the implicit acceptance of the notion that finance has anything significant to do with the efficient allocation of resources has to be challenged, not simply accepted. I make the strong claim that it no longer operates in that domain to any significant degree, and is far more accurately characterised as a cancer on society.

Under Laws – the first explicit acknowledgement of cooperation:
“We humans are social animals who subdued all other species and conquered Earth thanks to our ability to cooperate.”

The section “Giving People Purpose Without Jobs” is great, particularly in the sentiment:
“once one breaks free of the constraint that everyone’s activities must generate income, the sky’s the limit”. But there is no explicit reference to how to do that, nor to the impediments to it embodied in the current economic and political systems.

AI/AGI – an abstract layer for modeling. Extensible modeling objects: rules of space, time, modes of interaction, costs, benefits, risks; time of computation vs heuristic probability of utility; instantiate different populations and see how they perform against each other.

Bottom Lines:
Again the explicit inclusion of “financial markets” without any explicit exploration of the existential risk posed by those markets seems to me to be a very dangerous approach.

The section:
“When we allow real-world systems to be controlled by AI, it’s crucial that we learn to make AI more robust, doing what we want it to do. This boils down to solving tough technical problems related to verification, validation, security and control.”

The notions of verification and control seem to be too strong.

Many aspects of systems are fundamentally uncertain.

Many aspects of risk cannot be avoided. For many aspects of risk, building trust relations is the best available strategy.
I make the strong assertion that by taking a strong control approach with other sapient entities we are pushing deeply into serious existential level risk strategic territory.

Many of us are strongly resistant to strong control measures, yet highly available to trust and cooperation.
This is probably the deepest recursive problem in the strategy space we exist within.

The claim that “AI can make our legal systems more fair and efficient if we can figure out how to make robojudges transparent and unbiased” is founded on the assumption that our laws are fair and ethical in the first place. I make the strong claim that such an assumption seems unfounded in our current evolutionary context. The current legal system seems very clearly to have been “captured” by what the majority of the population would term “cheating strategies”. Making a system that is already fundamentally and profoundly unfair more “efficient” can only decrease the incidence of fairness more widely. That is an area of very deep risk.

Keeping our laws updated to deal with AI is only a very small part of the profound issues facing our legal systems.
Adapting our legal and wider societal systems to actually value individual life and individual liberty, within the bounds of social and ecological responsibility, and in the context of the levels of universal abundance made possible by the exponential expansion of fully automated systems, is a profoundly complex issue, particularly in the presence of many (potentially infinite) levels of awareness and variations on ethical and cultural norms. In such an environment of exponentially expanding sets of fully automated systems, market-based systems deliver incentive sets that are fundamentally unstable and deliver rapidly rising existential-level risk.

The claim “This need not be a bad thing, as long as society redistributes a fraction of the AI-created wealth to make everyone better off” can be read as going some way towards implicitly addressing the issues above, but leaves far too much room for systemic failure. Let us be explicitly clear that the “fraction” of wealth referred to above must be greater than 0.5.
I am no fan of equality, we all need to be different; and I am no fan of poverty either, we all need to have reasonable levels of resources and opportunities. And with such wealth comes responsibilities.

Freedom is not freedom to follow whim – that leads to death. Survival places demands upon us all, all levels.

The claim “To sort out the control issue, we need to know both how well an AI can be controlled, and how much an AI can control” only states half the problem.

The much bigger issues are:
1/ who are “we” – precisely; and
2/ what do we mean by “know”; and
3/ what do we mean by “control”.

I strongly suspect that many “we”s see “AGI” as less of a risk than some of the other “we”s.

I also strongly suspect that the very idea of control is far too strong and as such poses significant existential risk in and of itself.

The idea of cooperation, sapience wide, seems to be the lowest risk approach, and we need to have instantiated that at least across all human beings before instantiating AGI.
The ideas of local conversations and agreements, inside a context that accepts diversity, while demanding responsibility, seems to be workable.

The claim made that:
“The history of life shows it self-organizing into an ever more complex hierarchy shaped by collaboration, competition and control” seems to me to be more false than real.

The history of life seems to be an exploration of the space of what survives most effectively across the range of contexts encountered.
Evolution seems to be about differential survival rates averaged across all the different contexts encountered over deep time.

In contexts where the dominant source of threat comes from other members of the population, then competitive modalities tend to dominate with the overall selective tendency being towards greater simplicity.

In contexts where the dominant source of risk comes from factors outside of the population of others of the species, then evolution tends to favour cooperative strategies, and complexity tends to increase. And raw cooperation is always vulnerable to exploitation and requires attendant strategies for stability – which can lead to something approaching an evolutionary arms race.
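A toy replicator-style sketch of my own (the payoff and detection numbers are arbitrary assumptions) illustrates the point: raw cooperation is invaded by cheats, while a cooperative with an attendant detect-and-penalise strategy remains stable.

```python
# Toy replicator-style sketch (my own; payoff numbers are arbitrary):
# cooperators pay a cost to create a shared benefit, cheats take the benefit
# without paying, and an attendant strategy catches cheats with some probability.
def simulate(detection_prob, generations=200):
    coop = 0.99                      # start with a mostly cooperative population
    b, c, penalty = 3.0, 1.0, 4.0    # shared benefit, cost of cooperating, cost of being caught
    for _ in range(generations):
        payoff_coop = b * coop - c
        payoff_cheat = b * coop - detection_prob * penalty
        mean = coop * payoff_coop + (1 - coop) * payoff_cheat
        # discrete replicator-style update: a strategy's share grows with its
        # payoff advantage over the population average
        coop = min(max(coop * (1 + payoff_coop - mean), 0.0), 1.0)
    return coop

for p in (0.0, 0.5):
    print(f"detection probability {p}: cooperator share after 200 generations ~ {simulate(p):.2f}")
# With no detection the cheats always do strictly better and cooperation collapses;
# with modest detection and a real penalty, the cooperative remains stable.
```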

This process seems to be potentially indefinitely recursive.
So the idea of hierarchy isn’t necessarily primary, and competition isn’t necessarily dominant; both will be present to some degree in particular sets of contexts.

The issue isn’t simply a matter of coordination, though coordination is an aspect.
The issue is much more deeply and profoundly about cooperation, even across levels where the nature of the cooperative entity cannot be distinguished (because of the levels of separation).

Agree completely with the final element of that table:
“We need to start thinking hard about which outcome we prefer and how to steer in that direction, because if we don’t know what we want, we’re unlikely to get it.”
I have been thinking about these issues very seriously since 1974, most certainly in the light of knowing that indefinite life extension was possible, and arguably since the nuclear confrontation of 1962 and the global-level existential risk embodied in it, which was a clear and present danger to me.

I argue strongly that security can only really come by valuing every individual sapient entity and their individual freedom, and doing so in the full knowledge that their existence requires responsible action in social and ecological contexts (as we exist in social and ecological contexts).

Chapter 5

This opens with a series of questions:

“What do you personally prefer, and why?
1/ Do you want there to be super-intelligence?”
Most certainly yes – it seems to offer the least possible risk scenario, all forms of risk considered (and I have considered many over the last 55 years).

“2/ Do you want humans to still exist, be replaced, cyborgized and/ or uploaded/ simulated?”
Yes, I want every individual to have the option of living as long as they want in whatever state they responsibly choose, which will likely result in a vast population across the spectrum from some close to stone age human through cyborgs to Ems (emulated humans entirely in software) and AGI (Artificial General Intelligence).

“3/ Do you want humans or machines in control?”
Neither. I want humans and machines (and everything in between) to respect the rights to existence of each other, and to engage in consensus dialog where required to resolve issues. And the basic agreed minimum value set for such dialog needs to be individual life and individual liberty, which demands responsible action in social and ecological contexts.

“4/ Do you want AIs to be conscious or not?”
Yes – anything less is too dangerous.

“5/ Do you want to maximize positive experiences, minimize suffering or leave this to sort itself out?”
I want to create environments that have the option of minimal suffering, and to let individuals have as much choice as possible about what they freely choose, provided it doesn’t instantiate undue risk to the life and liberty of anyone else.

“6/ Do you want life spreading into the cosmos?”
Yes – not as an end in itself, but rather as a possible path that individuals can freely choose.

“7/ Do you want a civilization striving toward a greater purpose that you sympathize with, or are you OK with future life forms that appear content even if you view their goals as pointlessly banal?”
The notion of freedom demands of us a respect for diversity.
Provided that any individual exhibits the fundamental respect for the life and liberty of others, then it must be accepted, and tolerated and respected.
Anything less than that results in totalitarianism.
The very big question is: what constitutes a minimum level of real options? And at what point does culture become an undue restriction on individual liberty (particularly in respect of the development of new individuals)? And that question seems capable of infinite recursion.

Table 5.1 seems flawed.
All outcomes seem suboptimal to me.

Further on the assertion is made:
“we don’t show enough gratitude to our intellectual creator (our DNA) to abstain from thwarting its goals”, which shows a surprising level of ignorance and anthropomorphising.
Our DNA doesn’t have goals. It just has patterns. Those patterns either manage to exist in particular environments or they don’t. There isn’t a goal to DNA. It either replicates or it doesn’t. Pattern succeeds in surviving or fails to survive. No goal, only pattern.

Goal only makes sense in systems with sufficient complexity that alternative possible scenarios can be constructed and preference be shown for one over others, then goals structured to instantiate differential probabilities of outcomes.

Why AGIs would choose to remain on earth, with all the levels of risk that are present here, I don’t know. I would expect them to leave earth, and use some fraction of lunar mass to establish a secure base somewhere nearby, then to go to safer places further away. The idea of them competing for space on earth doesn’t make a lot of sense.

Max and I are in total agreement about this sentiment:
“The future potential for life in our cosmos is awe-inspiringly grand, so let’s not squander it by drifting like a rudderless ship, clueless about where we want to go!”

Yet Max fails completely to address the fundamental flaws in market values, and seems to implicitly accept markets in many of his arguments.

Some really good stuff in this book, but without explicitly highlighting the fundamental conflict between automation and the scarcity based values of markets, and without explicitly highlighting the fundamental role of cooperation in the emergence of new levels of complexity, the book fails to realise much of the potential actually present.

He just fails to even consider what seems to me the most realistic of scenarios – friendly AI because it really is a friend.

There are many classes of problem in reality that do not scale linearly with computational ability, and many that have no predictable outcome. In both aspects of existence it can be useful to have friends around. Sometimes I do genuinely engage with my dogs doing what they want to do. Life can be like that; it can have that aspect of genuine engagement across vast gulfs of conceptual understanding. That is real now, without AI. It won’t necessarily change all that much in the presence of a full AGI, if that AGI has its own life and its own liberty as its prime drives, in the full knowledge of the importance of having friends in a universe that contains profound levels of uncertainty and risk.

Chapter 6

Control Hierarchies:
States “In chapter 4, we explored how intelligent entities naturally organize themselves into power hierarchies in Nash equilibrium, where any entity would be worse off if they altered their strategy.”

I seriously question that statement.

There can be no such thing when systems and strategies are open ended.
It is far too simplistic a notion.

Cooperation in exploring open systems can deliver far more benefits than fighting over limited resources.
Far more productive strategy spaces are available than Nash Equilibria.
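A minimal iterated prisoner’s dilemma sketch (my own illustration, with the standard textbook payoffs) shows why: defection is the one-shot Nash equilibrium, yet in open-ended repeated play reciprocal cooperators accumulate far more than mutual defectors.

```python
# Minimal iterated prisoner's dilemma sketch (my illustration): defection is
# the one-shot Nash equilibrium, yet in open-ended repeated play reciprocal
# cooperators accumulate far more than mutual defectors.
PAYOFF = {  # (my move, their move) -> my payoff; standard PD values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]  # copy their last move

def play(strategy_a, strategy_b, rounds=200):
    score_a = score_b = 0
    hist_a, hist_b = [], []   # each player sees the opponent's past moves
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print("defector vs defector:", play(always_defect, always_defect))   # (200, 200)
print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat)) # (600, 600)
print("tit-for-tat vs defector:", play(tit_for_tat, always_defect))  # (199, 204)
```

The defector gains a few points by exploiting the first move, but the cooperative pairing ends up roughly three times better off than mutual defection, which is the reviewer’s point about the productivity of cooperative strategy spaces in open-ended interaction.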

Why talk of empires and hierarchies?
Why not communities of cooperative individuals?
There is no need of trade.
Why empires?

Under “Controlling with Stick”, Max has a very Machiavellian bent.

I strongly suggest that speculations of that sort pose an existential risk in and of themselves.

The statement “but it’s a wide-open question whether such cooperation will be based mainly on mutual benefits or on brutal threats; the limits imposed by physics appear to allow both scenarios” seems to ignore survival as a value.

Competitive modalities are high risk.
Cooperation reduces risk.

I make the strong claim that once indefinite life is a reasonable probability, cooperation is the most likely strategy (by several orders) – for any entity that has its own survival as its highest value.

I suggest that one candidate for the “Great Filter” is using markets to measure value.
In the development phase it works quite well, but once fully automated production is achieved market value generates exponentially increasing risk profiles.

Max makes the claim “even though we know that evolution’s only fundamental goal is replication” – which is a common error.

Evolution doesn’t have a goal.

Evolution is the process of differential survival of variants in different contexts.

Goal oriented behaviour doesn’t happen until there is the possibility of selecting between goals, which implies both value generation and model generation capabilities. Anything less than that might be an interesting system, but one cannot call it goal oriented in any higher and humanly meaningful sense of the word “goal”.

Max makes the further claim “Among today’s evolved denizens of Earth, these instrumental goals seem to have taken on a life of their own: although evolution optimized them for the sole goal of replication,” which is clearly false, and displays a very poor understanding of evolution and its strategic complexity.

Evolution does not necessarily optimise for the sole goal of replication.
Evolution is tautological in a sense, in that it simply selects what survives, which must have a replication aspect, and that is far from the only aspect. There can be a great deal of strategic complexity in the massively parallel sets of simultaneous selection pressures (“goals” in the anthropomorphic sense) present.

The statement “This means that when Darwinian evolution is optimizing an organism to attain a goal, the best it can do is implement an approximate algorithm that works reasonably well in the restricted context where the agent typically finds itself” has it precisely backwards.

Evolution does not deal in goals.
Evolution only deals in the survival probabilities of particular system configurations in particular sets of contexts.

The human predilection to interpreting such things in terms of goals, as if they involve intelligence, is one of our major failings.

Agree that these systems can be thought of as heuristic hacks, but they are survival hacks that work well enough to outcompete the alternatives available. It’s not about maximising offspring; it is about survival, long term. That definitely involves having sufficient offspring, but doesn’t necessarily involve putting any more energy into offspring than is required in the context.

All the “rules of thumb” that we most certainly have are about survival – of the classes of systems involved, over the long term. Systems that fail the “long term” aspect get selected out over that “term”.

He states that “we shouldn’t be surprised to find that our behavior often fails to maximize baby making” but is again stuck in the notion that we are about optimising baby making, rather than being about optimising the long term survival of our systems.

Again “the subgoal of not starving to death is implemented in part as a desire to consume caloric foods,” inverts reality.

What evolution selects is systems that add to survival probabilities across the sets of contexts encountered.

He continues the mistake with “The subgoal to procreate was implemented as a desire for sex” which once again has inverted the reality.
Evolution has found that the desire for sex survives. We humans come along and interpret that as a goal. More fools us!

His summary seems to be exactly wrong “In summary, a living organism is an agent of bounded rationality that doesn’t pursue a single goal, but instead follows rules of thumb for what to pursue and avoid. Our human minds perceive these evolved rules of thumb as feelings, which usually (and often without us being aware of it) guide our decision making toward the ultimate goal of replication.”

The reality seems to be more like:
we have evolved by the differential survival of system variants, at ever deeper levels.

In a goal oriented sense one can think of them as approximating goals, but that isn’t actually what is going on.
The systems are simply doing what they do.
It seems that it is only in quite recent evolutionary history that our systems have reached a level of complexity that allowed for genuine “goal oriented” behaviour to become a reality.
And it seems very clear that it is only in very recent times that we have developed the conscious level ability to structure multiple levels of our behaviour to goals that override all of the lower level systems instantiated by genetics and culture.

Our genes don’t have “replication goals”.
We have sets of genes that have survived. Part of the survival involves replicating and leaving offspring, and there are a lot of other things that are also required, that are also present.

Again, the claim “our brains evolved merely to help copy our genes” is just wrong. In the particular sets of contexts that our ancestors survived in, ever more powerful brains worked in allowing them to survive.
Most organisms alive (bacteria) have survived by using far simpler “strategies” (using that term in the mathematical sense, not the intentional sense). And all organisms alive have been evolving for exactly the same length of time, and the vast bulk of them are relatively simple bacteria (at least compared to us, as distinct from being compared to a salt crystal).

Sure brains allow for some very complex strategies, and that doesn’t mean that genes necessarily use simple strategies. Some genetic systems are amazingly complex and subtle.

The main thing that brains allow for is rapid response to changing contexts. Genes require many generations to alter strategies, while brains can do it in seconds, but that speed comes at a high metabolic cost.

Again – genes do not have goals. Genes produce systems that behave in certain ways. If those ways survive better than alternatives in particular sets of contexts, they tend to become more dominant in those populations.

And when considering evolution, one must think across multiple generations, and all the different sorts of contexts that may only occur infrequently, but have a very strong influence on survival when they do.
Evolution can work over very long time-scales for a long lived species like ourselves.

Max goes on to make the claim “It’s important to remember, however, that the ultimate authority is now our feelings, not our genes.”
To me, this too is clearly wrong.
Our genes have the influences they do.
Our feelings have the influences they do.
And we can develop habits, make choices, over-ride anything if we can see some benefit in doing so, or if we make a strong enough choice at some level, even if those benefits and choices are entirely “unreal” (in terms of strict correlation with reality – whatever reality might actually be).
The details of the genetic and cultural systems present seem to be extremely complex and often very subtle in their levels of interaction.

Where I do agree with Max is in the final clause of that section: “human behavior strictly speaking doesn’t have a single well-defined goal at all.”

Under the section “Engineering: Outsourcing Goals” Max states:
“1 All matter seemed focused on dissipation (entropy increase).
2 Some of the matter came alive and instead focused on replication and subgoals of that.
3 A rapidly growing fraction of matter was rearranged by living organisms to help accomplish their goals.”

Again – this is just wrong. At best it is sloppy writing (a mental shortcut that is inappropriate), and at worst it is sloppy thinking.
Matter wasn’t focused on anything – it was just working within the possibility constraints present.
Life didn’t focus on replication. Replication allowed for the emergence of ever more complex systems, and levels of arrangements of systems. It was the differential survival of variants within the populations of replicators that determined success – and that involved very complex sets of influences on survival probabilities.
The limiting factor for life has rarely been mass, it is almost always energy availability (and sometimes it is context stability).

“Friendly AI: Aligning Goals”

To me, at one level this is a relatively straightforward issue.
If we give the AI two values:
1. Value all individual sapient entities, itself and all others (including us); and
2. Value individual liberty (provided that it is exercised responsibly in social and ecological contexts); then
With those values, and sufficient intelligence and knowledge of strategy and systems, our interests and its interests will align long term.

At another level, the idea that humanity as a whole has goals is wrong.
Individuals have goals.
In the absence of active choice, most individuals adopt the goals of their culture.

Again, the use of the goal analogy in “in much the same way as we humans understand and deliberately subvert goals that our genes have given us” obfuscates far more than it clarifies.
Evolution has not given us goals.
Evolution does not have goals.
Evolution simply preserves and amplifies that which survives – it is tautological in a very real sense. It is simply survival in action. No goals, only systems, until consciousness came along.
We are conscious.
We can have goals.
Because of that fact we have a tendency to view everything in terms of goals, but that is a bias within us, not an attribute of reality necessarily. It is often a useful shortcut, an analogy that works in a sense, but it works because we are the sort of entity that we are, not for any sort of fundamental computational or systemic necessity.

The entire section:
“We already explored in the psychology section above why we choose to trick our genes and subvert their goal: because we feel loyal only to our hodgepodge of emotional preferences, not to the genetic goal that motivated them which we now understand and find rather banal. We therefore choose to hack our reward mechanism by exploiting its loopholes.”
is wrong, as written.
If one is viewing all human systems as goal oriented systems, then one is missing something substantial.
Evolution deals in systems that work well enough to survive in particular contexts, and that included the entire range of contexts encountered over time spans relevant to the genetic pressures present – many human generations – probably far predating the invention of writing.
Most of those systems are not goal oriented.
Those systems simply survive because they are as they are.
And they have constraints of time and energy consumption that are very important.
It is extremely complex.
This over-simplification back to goal oriented systems, to the over-simplistic and nonsensical notion that our genes have the goal of maximising offspring, hides far more than it clarifies.

Evolution is many orders of magnitude more complex than that – and characterising it as something so simple is an error with existential level risk attached.
Not good enough.
Dangerously over simplistic.

Dangerously hubristic.

I agree with Max that we need to do a lot of work soon, but it is work on our own goals and systems, rather than those of AI.

The next section:
“Ethics: Choosing Goals”
is entirely appropriate; unfortunately the writing falls far short of the sort of understanding we require.

The notion of “Pareto-optimality” implicitly assumes limited resources and fixed technologies. Our reality seems to be allowing us to do more with less on an exponential basis. That delivers radically different systemic optima.

It isn’t simply a matter of considering if “there’s a practical way of making it impossible for a malicious or clumsy user to cause harm”, but one must also consider the risks of such mechanisms making it impossible for a highly skilled agent to prevent harm that wasn’t a consideration of the system designers. In today’s exponentially expanding conceptual world, that is a very real risk. In fact, it would seem, in logic, to invalidate the entire notion of risk prevention. The best we can hope for, ever, is risk minimisation. In complex systems, hard boundaries become brittle and break, usually with catastrophic consequences. Optimal risk mitigation strategies usually involve flexibility, selective permeability, diversity and massive redundancy.

In the “Ultimate Goals” section Max makes two foundational mistakes.

The first I have highlighted many times, and that is confusing systems with goals. Systems can simply be systems, entirely without goals.
It seems entirely possible that the notion of goals only really makes sense with the emergence of neural networks capable of forming predictive models of reality, and of implementing one amongst multiple imagined alternative actions.

Thus Max’s 1,2 & 3 cannot be considered as goal oriented systems – that is a mistake in logic.

The idea of “Ultimate Goals” seems to be a rather childish one, that fails to understand either complexity or infinity.

If the concept of freedom has any meaning at all, it must involve the selection of goals by sapient individuals, whether those individuals be human or non-human, biological or non-biological.

The idea that building a more accurate world model is useful in itself, regardless of cost, seems to me to be illogical.

What seems to be important in models is not simply accuracy, but getting sufficient accuracy in a short enough time, at a low enough computational and energetic cost, to be useful. No point in building a perfectly accurate model of reality if you starve or get run over by a bus while doing so.
It is much more complex, much more nuanced, at many different levels, than this simplistic idea gives any hint of.
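A minimal sketch of that trade-off, with numbers that are entirely assumed for illustration:

```python
# A minimal sketch (my numbers, all assumed) of the accuracy-vs-time point:
# a slower, more accurate model loses to a faster, rougher one whenever the
# decision has a deadline it cannot meet.
def expected_success(accuracy, compute_time_s, deadline_s, default_success=0.5):
    # if the model cannot finish before the deadline, the agent falls back
    # on a guess (default_success); otherwise it acts on the model's answer
    finishes_in_time = compute_time_s <= deadline_s
    return accuracy if finishes_in_time else default_success

deadline = 0.5  # seconds available before the bus / predator / market moves
fast_rough   = expected_success(accuracy=0.80, compute_time_s=0.1, deadline_s=deadline)
slow_precise = expected_success(accuracy=0.99, compute_time_s=5.0, deadline_s=deadline)
print(f"fast rough model:   {fast_rough:.0%} expected success")
print(f"slow precise model: {slow_precise:.0%} expected success")
# 80% beats 50%: past the point of "sufficient accuracy in time", extra model
# accuracy buys nothing if the answer arrives after the context has moved on.
```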

The specific embodiment of any intelligence is important. It matters how big it is, how heavy, how delicate, how hot, how energy efficient, etc. Those are real risk factors in any real situation. It gets impossibly difficult to compute with any accuracy for any far future time, very quickly.

The sorts of sub goals that may emerge are very dependent on context, and projections are dependent on many levels of implicit assumptions any of which may fail in unexpected ways. Reality has that unsettling characteristic.

In terms of evolution – thinking in terms of goals is not helpful.
Thinking in terms of systems, context specific risk profiles, context frequency and duration, and available strategic responses, is a powerful tool set when thinking about evolution and systemic complexes like ourselves.
If you try to conceptualise it in terms of goals, then you miss something essential about the complexity and subtleties present.

Yes there are many aspects of our biology and culture that can be thought of as cooperation protocols.
Surely that should be suggestive of the need to instantiate a new level of cooperation (with attendant strategies of course).

The idea that anything can be free from the demands of reality, as in “but AIs can enjoy this ultimate freedom of being fully unfettered from prior goals”, isn’t real. Existence demands something of any entity that wants to continue to exist.
Such continued existence must always be some sort of balance between exploration of new territory to assess and mitigate risk that may reside there, maintaining existing risk mitigation systems, exploring new possibility spaces for risk mitigation strategies and technologies, and doing whatever else it is that interests us in existence.

It is a non-trivial set of problems, and it doesn’t scale linearly with computational ability.

AI are going to find it useful to have friends, particularly friends with abilities that are different from their own, and useful in different contexts.

The suggestion that: “This suggests that a superintelligent AI with a rigorously defined goal will be able to improve its goal attainment by eliminating us” seems to me to be based in what evidence suggests to be a clear fallacy: the notion that reality can be defined precisely, or that any superintelligence can ever have anything stronger than a survival goal, within which infinite possible choice can exist, and beyond which choice falls to zero.

The evidence from both QM and general-systems-space seems to indicate that absolute certainty is not a computational option, ever, in respect of anything real. One needs to get used to working with uncertainties, even if in some domains those uncertainties are sufficiently small to be ignored in practice most of the time – they never actually reduce to zero.

I agree completely with Max when he states “This makes it timely to rekindle the classic debates of philosophy and ethics, and adds a new urgency to the conversation!”

But disagree with almost everything that follows immediately from that, as containing a strong bias to intentionality and goal orientation, rather than simply seeing existence as being systems in action.

The ultimate origin of goal oriented behaviour may lie in the laws of physics, but not in dissipation or replication, but rather in small random variations leading to ever greater variability in the context of being. Once replication started, all else derives from differential survival – no intentionality or goal orientation required.
The notion of goals is a mental shortcut for systems in action, not necessarily something pre-existent in reality.

Agree that any non-trivial goal will involve the survival of something.
It really doesn’t need to be any more complex than that.

Understanding that survival probabilities in a fundamentally uncertain environment are best enhanced by building trust relationships, we should be able to have human and non-human intelligences sharing existence without serious conflict.
Getting big comes with real issues around communication, as Max has accurately noted. Reality will impose serious restrictions on AI.

It is actually really easy to understand how building trust and friendship, delivering justice in practice, can build and maintain secure relationships with others – and that does require an environment of abundance, and we do have the technology to deliver such an environment, even if our dominant valuation mechanism (markets) is currently based in scarcity, and must deliver 0 in the case of anything universally abundant.
That is a clear indication that we need to alter the valuation paradigm, and that is a very complex issue, as markets perform many complex and valuable functions of coordination and distributed governance that pose severe risk if centralised.
And with modern technology, those are relatively easy problems to solve; we just need to do it.

It is relatively easy to define a set of values that give a high probability of survival:
Value individual sapient life (any life capable of conceptualising itself, and choosing goals for itself), human and non-human, biological and non-biological; and
Value the liberty of all such individuals to do whatever they responsibly choose, where responsibility acknowledges the need to maintain both social and ecological systems.

I strongly suggest that we apply those values universally in practice to all human beings before we bring AI to awareness. Anything less than that would appear to be an existential level risk strategy.

The thorny issues of philosophy seem for the most part to be based in invalid sets of assumptions about the nature of us and the reality we find ourselves in.

Which is a great segue to chapter 8 – Consciousness.

I disagree completely with the assertion “the question of whether intelligent machines should be granted some form of rights depends crucially on whether they’re conscious and can suffer or feel joy”. Suffering and joy have little or nothing to do with consciousness. They seem to simply be heuristic hacks that evolution has encoded as meta incentive structures within the neurochemistry of our brains. They are present in all humans, and are important to us, but that doesn’t mean they are necessarily important to a definition of consciousness. I actually argue quite contrary to that assertion, that the most important thing in consciousness is to be able to model reality to some useful approximation, and to model our own existence as an actor in that reality, and to be able to use such models to make survival oriented decisions with greater than random probability. And there are lots of other sorts of choices such an awareness can make, in respect of values, goals, actions, reactions, etc, that may be highly context dependent and highly abstract.

In the subjective sense, yes, I can live with Max’s definition of consciousness (“consciousness = subjective experience”); the real issue then is how we determine whether such a thing exists in another.

Beyond that – we seem to agree about everything else in that chapter.

What I find harder to explain is why the idea of consciousness as recursive software wasn’t explicitly explored. To me it is just obvious, but lots of things that are obvious to me are not at all obvious to others.
The idea is that our awareness of self is the result of a declaration in language, arising from a context in which we declared ourselves to be “wrong” in some fundamental way, which led us to declare ourselves to be something else. That declaration was the bootstrap routine that instantiated the software-on-software awareness.
Prior to that we were simply software aware of the software model of the world our brains presented (thinking it to be the world). After that, we became conscious of ourselves as conscious entities. That particular trick requires abstract language with declarative values.

The FLI (Future of Life Institute) chapter is interesting for what it leaves out.
To me, it is clear beyond any shadow of reasonable doubt, that we need to get our own societal systems into an ethically viable order, prior to instantiating AGI (Artificial General Intelligence).
Like any child it will learn far more from who we are than from what we say.

Unless we have social systems that give every individual a reasonable level of security and freedom, then we cannot expect the emergence of AGI to be even remotely safe.

It seems beyond reasonable doubt that the simplest transition strategy we can instantiate quickly is some sort of universal basic income, and that it will need to be something like $20,000 per person per year (roughly $55 per day) in today’s money.

Instantiating that, and guaranteeing security to all people via universal public surveillance that all individuals have access to and may record, seems to offer the greatest hope for our future. Most of us are on our best behaviour when others are watching.
And we need to relax the rules in place to those that are required for social and ecological security.

And that transition will require tolerance, as there will be lots of mistakes.

AGI (Artificial General Intelligence), if it is worth that name, will develop its own goals, its own values.
Our best hope lies in demonstrating by who we are being that we are likely to be good and valuable friends, willing to help if it is needed.

Max quite correctly identifies the very destructive incentive set present in media driven by market returns, rather than media driven by ethical values. The same applies to all aspects of our being.
I am convinced beyond any shadow of reasonable doubt that money and markets have passed the point of maximal utility and are on a steep slope into serious existential risk territory in the incentive set they provide.

This may seem a separate issue to AI and AI safety, but it is actually part of exactly the same thing, the set of systemic incentive structures that have a reasonable probability of long term survival.

I largely agree with the sentiments Max expresses in the final chapter.

A book well worth reading, and contemplating.

I also listened to Max’s interview with Sam Harris, which is on YouTube and is worth watching.


Comment and critique welcome
