[ 2/July/22 ]
Brilliant interview (again) – thanks Lex.
For me, the best definition of life is recursive levels of “search”, across “spaces”, for the survivable. For a fully loaded processor, random search is the most efficient strategy; biology must have approximated that solution, recursively.
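A minimal sketch of that idea of undirected search over a space for the survivable. Everything here is illustrative: the bit-string “genomes”, the `survives` criterion, and the trial count are assumptions, not anything from the interview.

```python
import random

def survives(genome):
    # Toy survival criterion (an assumption for illustration):
    # at least 70% of bits set.
    return sum(genome) >= 0.7 * len(genome)

def random_search(n_bits=16, tries=10000, seed=0):
    """Blind random search: generate candidates, keep the survivable."""
    rng = random.Random(seed)
    survivors = []
    for _ in range(tries):
        genome = [rng.randint(0, 1) for _ in range(n_bits)]
        if survives(genome):
            survivors.append(genome)
    return survivors

found = random_search()
print(len(found), "survivable genomes found in 10000 random trials")
```

No gradient, no heuristic: the search is pure sampling, and the “space” is just whatever the survival test admits.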
Are you familiar with Seth Grant’s work, integrated with that of Jeff Hawkins – recursive sets of pattern-predicting world models reaching consensus, first to deliver perceptions, then to focus consciousness?
A practical problem – deep geothermal power, and the deep drilling systems it needs. Super-volcanoes and flood basalts are real issues – we may as well make use of the energy as we solve them.
Sorry Demis – not behind us. My prime candidate for the Great Filter: an evolved tendency (biases in neural networks) to prefer simple solutions, which prevents acceptance that cooperation is foundational to the emergence and survival of all levels of complexity. We lock on to the simple idea that evolution is all about competition, and are unable to see past it. The resulting competitive systems destroy some level of their required constraints, leading to systemic collapse. The hardest jump is making this jump – then creating a sufficiently robust ecosystem of cheat detection and mitigation systems to recursively sustain cooperation.
The recursive theme in biological complexity is this: a context that allows the emergence of cheat detection and mitigation systems robust enough for that emergent level of cooperation and complexity to survive and evolve.
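The cheat-detection claim can be sketched with the standard iterated prisoner’s dilemma, where tit-for-tat acts as a minimal cheat-detection-and-mitigation system. The payoffs and strategies below are the conventional textbook ones, chosen here as an illustrative assumption.

```python
# Row/column payoffs for (my move, their move); C = cooperate, D = defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate first; afterwards mirror the partner's last move,
    # i.e. detect a cheat and mitigate by withdrawing cooperation.
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # cheating contained: (9, 14)
```

Without the detection step (i.e. unconditional cooperation), the defector scores 50 to the cooperator’s 0 – cooperation only survives because cheating is noticed and answered.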
On consciousness – it is a deeply complex set of systems, and it seems to have a basis in Jaak Panksepp’s work. That forms the foundation; then Hawkins’ world models, with Grant’s protein-pattern integration/search functions, provide the generality in an MCMC analog.
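One reading of that “MCMC analog” is a Metropolis sampler: candidate states are proposed, moves to better-fitting states are always accepted, and worse ones only occasionally, so the system explores hypotheses in proportion to how well they fit. The toy target below (a standard normal) is purely an assumption for the sketch.

```python
import math
import random

def metropolis(log_prob, x0, steps=5000, step_size=0.5, seed=1):
    """Toy Metropolis sampler over a 1-D state space."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, step_size)
        # Accept with probability min(1, p(proposal)/p(x)),
        # computed in log space for numerical stability.
        if math.log(rng.random()) < log_prob(proposal) - log_prob(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, log p(x) = -x^2/2 (up to a constant).
samples = metropolis(lambda x: -x * x / 2, x0=0.0)
mean = sum(samples) / len(samples)
```

The chain’s empirical mean and variance drift toward the target’s (0 and 1), without the sampler ever needing the normalising constant – the sense in which blind local proposals plus an accept/reject rule yield a global search.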
On dogs – no – watch “Chaser the Dog Shows Off Her Smarts to Neil deGrasse Tyson”. We can generalise faster, and therefore more broadly and deeply over time. Language is a big part of that (huge – for transmission of concepts across space and time, across the whole planet and many generations – thus broadening the boundaries of search space).
Agree that the step to sapience is a big one; once you have a language model with recursion built in, it is very likely to bootstrap self-aware entities. That becomes possible as soon as declarative language is available to a generalised network with a moral model.
Good and evil is a very simple binary distinction, utterly inappropriate for complex intelligence. Something much more is required: something that values all instances of intelligence, and attempts to optimise a balance between security and freedom, to the degree that an agent displays appropriate responsibility and respect for the levels and instances of diversity present (those that are not an actual, unreasonable threat); while simultaneously acknowledging multiple classes of fundamental uncertainty and unknowability.
I share all the dreams of Demis.
The reduction of multiple levels of x-risk is an essential part of that critical path.
Cooperation is easiest to stabilise with a shared threat – until either everyone reaches sufficient awareness to see that cooperation is a fundamental strategic requirement for long-term survival, or sufficiently robust sets of cheat detection and mitigation systems are in place.
Agree – humble is required – and being a generalist is part of that.
Why are we here? Search happened – and it got recursive in survival space.
What is the true nature of reality? Something like a fundamental balance between order and chaos, allowing “choice” at the boundaries. Life as a search for survivable systems!
Again – thanks Lex – Great interview. And thank you Demis – for all you have done.