[ 19/July/22 ]
Great work – thanks again Lex.
Steve has impressed me for many years, and I spent quite a bit of time playing with his software Minsky a few years back.
Agree we need to get off planet, and that needs to be the Moon in the first instance, not Mars.
Moving Earth to an orbit out beyond Mars is not that difficult with a gravity tug; it takes a few million years, but we have time. Solar cells can be used to generate thrust by redirecting protons from the solar wind. But there is no real urgency on that just yet.
Imagination is simple in a sense – it is essentially random search within some set of constraints. Pure random search delivers a halting problem – that needs to be avoided – so some level of constraint is required. There is definitely an aspect of needing sufficient computational ability to get to human-level consciousness, and if Jeff Hawkins’ Thousand Brains model is anywhere near accurate (and it seems to be a reasonable approximation), and if Seth Grant’s work on the protein makeup of the excitatory postsynapse is reasonably accurate (and again it seems to be), then the level of compute in those protein complexes puts Ray’s calculations out by many orders of magnitude.

I think we will get sentient, sapient AGI, but probably not on Ray’s timeline; and if it is to be survivable it will be a little further off, because it needs to have a fundamentally cooperative base if it or we are to survive long term. I agree with Elon that there are a lot of dangers there, but probably for very different reasons.
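One way to read "random search within constraints" is as a generate-and-test loop with two guards: a validity filter that restricts the search space, and a hard iteration budget that guarantees the loop halts (which unbounded random search does not). A minimal sketch – the function names and the toy objective are my own illustration, not anything from the interview:

```python
import random

def constrained_random_search(is_valid, score, sample, budget=10_000):
    """Random search bounded by two constraints:
    1. a hard iteration budget, so the search always terminates;
    2. a validity predicate, so only candidates inside the
       constraint set are ever scored."""
    best, best_score = None, float("-inf")
    for _ in range(budget):        # constraint 1: bounded effort -> always halts
        candidate = sample()
        if not is_valid(candidate):  # constraint 2: restrict the search space
            continue
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best, best_score

# Toy usage: search x in [-10, 10] maximising -(x - 3)^2,
# constrained to non-negative x.
random.seed(0)
best, val = constrained_random_search(
    is_valid=lambda x: x >= 0,
    score=lambda x: -(x - 3) ** 2,
    sample=lambda: random.uniform(-10, 10),
)
```

Without the budget, a run that never samples a valid candidate would loop forever; with it, the worst case is simply "no answer found within budget".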
Intelligence is much more than a survival strategy between predator and prey. It is about survival, certainly, but the threat class is far greater than “predator”, “lack of prey”, or “avoiding becoming prey”. Threats external to the population are essential for the initial emergence of intelligence, and having intelligence survive long term demands evolving ecosystems of cheat-detection and mitigation systems (and with technology, we can now increase Dunbar’s number to any population our sun is capable of supporting). Another deeply complex subject.
Agree completely with Steve that overly simple models and the arrogance of ignorance are existential-level threats to humanity, and I am quietly optimistic about our long-term future, as more people do in fact seem to be waking up to the fundamental need for both cooperation and responsibility. And as Steve mentioned early in the interview, economic efficiency leads to fragility – we need both redundancy and diversity to cope with shocks.