[ 28/December/21 ]
I was going along kinda OK, up to the point of suggesting “cryo-preserved squid eggs arriving from space”. Clearly no real knowledge of biochemistry there. The HOX genes are what make complex organisms possible, and it is clear that all complex organisms share a single worm-like ancestor with proto-eye spots, as the same embryological gene expression triggers eye development in us, in octopuses, and in insects. The evidence for common ancestry is beyond any shadow of reasonable doubt – for those who actually take the time to look (and it does take a few years).
I have no doubt that complex chemistry can happen in space, but I have severe doubts that complex life could evolve there.
If I had to place a bet on the origin of life, the evidence seems to be suggesting a “white smoker” (alkaline hydrothermal vent) origin for life on earth.
And it seems to have happened only once. And it seems probable that it was originally RNA based, that it used a doublet codon with only 4 amino acids, and that the triplet codon system came later.
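The combinatorics behind that doublet-to-triplet conjecture are easy to check. The sketch below is simple arithmetic over a 4-letter nucleotide alphabet; it says nothing about which amino acids the actual ancient code encoded, only how many distinct codons each scheme allows:

```python
# Coding capacity of codon systems built from a 4-letter nucleotide alphabet.
BASES = 4

def codon_count(length: int) -> int:
    """Number of distinct codons of the given length."""
    return BASES ** length

# A doublet code gives 16 codons - enough for ~4 amino acids with heavy
# redundancy; a triplet code gives 64, covering 20 amino acids plus stops.
print(codon_count(2), codon_count(3))
```

The jump from 16 to 64 codons is why a triplet system can carry the modern 20-amino-acid repertoire while a doublet system cannot.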
So no – not good science.
[followed by Josh asked – what about tardigrades]
It is the same as ours.
The same general classes of control systems, the same general classes of metabolic systems, the same classes of HOX genes.
Sure, convergent evolution can and does happen, but when you look at the biochemical details, the convergence is usually at the phenotypic level, and the underlying genotypes are usually very different.
The probability of life evolving independently, sharing the same types of fundamental systems that we share, is so vanishingly small that it is close enough to zero for all practical purposes.
Panspermia is so improbable – when you actually do the numbers on it – that it just doesn’t solve anything. All it does is push the origin problem back to some other place. And consider the chances of life surviving through interstellar space, for the time taken by any conceivable ejection-and-collision event that didn’t entirely vaporize the original organism… Ockham’s Razor takes us back to Earth, some 4 billion years ago, and evolution does the rest. And while evolution can start simple, it quite quickly gets extremely complex. It requires appropriate degrees of consistency and variation in both the genetics and the environments. It needs randomness and order in appropriate degrees, and it is capable of selecting for both to some degree.
As to why “white smoker”? We clearly evolved in an alkaline chemical environment, and there was clearly a source of hydrogen ions (protons) to power the systems before metabolism took over. The evidence for “white smoker” is deep, and you do need to do the work to understand the chemistry if it is to make sense.
And we have no way of ever really knowing.
All we can have is sets of probabilistic conjectures; and of the dozens I have looked reasonably closely at – white smoker is the lead contender for a wide set of reasons.
[followed by Which scenario do you find more likely: an advanced alien species has visited earth sometime in human history OR we are living in a simulation?]
I am an autistic spectrum geek with an IQ over 160 who has been interested in biochemistry and evolution for over 50 years. I’ve got libraries of data inside my head, and vastly more abstractions and relationships between those datasets than I could possibly communicate with any technology available today. I am more than a little unusual.
Neither scenario seems particularly probable to me.
It seems very probable to me that we (and all other life on this planet) are the result of evolution here on earth – probably from a single cellular ancestor.
And the biases that evolution has necessarily installed in our neural networks that make us prefer simplicity over complexity mean that we all necessarily live in our own personal Virtual Reality versions of whatever objective reality actually is.
If you spend time looking seriously into the mathematics and systems of General Relativity, Quantum Mechanics, and life generally, then it becomes clear that reality is necessarily more complex and fundamentally uncertain than any computational entity can deal with in anything remotely approaching real time; so we all must, of necessity (human, AI, whatever), make our simplistic models of it in order to make any sense of it at all. For us as human beings, that happens at multiple levels (sense organs, sensory systems, and multiple levels of subconscious brain) before anything appears to consciousness.
And sometimes reality demands rapid responses for survival, and simple models allow that to happen. So they have their uses. And part of wisdom is learning what sorts of contexts they are appropriate to.
If you take a look at either Stephen Wolfram or Garrett Lisi as to their conjectures on the fundamental nature of reality, then it really is far weirder than any simulation hypothesis.
Another thing to think about in terms of brain function is the source of signals within the brain – less than 20% of them are external. The vast bulk of brain signals are internally generated.
What do you mean by “point of singularity”?
Do you refer to the idea of a technological singularity? That really isn’t a good name, because all it really means is that AIs exceed human abilities to imagine and create our future. By definition, we cannot predict what will happen beyond such a point.
With AI doubling in capacity every 2 months, it is not a matter of if, simply of when. And the time is measured in double digits of months rather than years.
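Taking that two-month doubling at face value (the doubling period is the figure claimed above, not an established constant), the compounding is easy to sketch:

```python
def capacity_after(months: float, doubling_months: float = 2.0) -> float:
    """Relative capacity after `months` of exponential doubling."""
    return 2 ** (months / doubling_months)

# At a 2-month doubling, two years of growth is 2**12 = 4096x the
# starting capacity - which is why the timescale reads in months.
print(capacity_after(24))
```

Whatever the true doubling period turns out to be, the point of the exponential is that modest changes in the assumed period shift the crossover date by months, not decades.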
And it isn’t simple, because there are necessarily fundamental uncertainties and unknowables in reality. So I suspect that it will be different, and for many not so different.
It is already the case that there are multiple levels of awareness, understanding, and planning present in reality today – with humans. If we do AGI well, then it will likely be a benevolent friend in need, and mostly it will just do its own thing; and that is a deeply dimensional strategic “space” if it is to be “done well”.
No, that is not quite what I said, and I can see that if one oversimplifies the starting assumptions, then they do appear identical.
What I did was state explicitly that some aspects of reality are fundamentally unpredictable.
What I left unsaid but implied was that evolution seems to have installed in human brains multiple levels of systems that usefully approximate random search in the face of such complexity.
The search space is so large that any agent is capable of making searches that other agents are unlikely to repeat in the age of the universe. Increasing speed of computation doesn’t fundamentally alter that reality.
So while AI will be able to solve some classes of problems better than humans, for other “interesting” classes of problems they are effectively constrained to something reasonably approximating our speed. They are definitely faster, but the space to be searched is so vast that their speed makes no significant difference to the outcomes.
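The scale of that claim can be sketched numerically. The figures below (a 70-element ordering space, evaluation rates of 10^12 and 10^18 per second) are illustrative assumptions chosen for the sketch, not measurements of any real system:

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600
UNIVERSE_AGE_YEARS = 13.8e9

def years_to_enumerate(n_items: int, evals_per_second: float) -> float:
    """Years needed to enumerate all orderings of n_items at a given rate."""
    return math.factorial(n_items) / evals_per_second / SECONDS_PER_YEAR

# 70! is about 1.2e100 orderings. Even a million-fold speedup
# (1e12 -> 1e18 evaluations/second) leaves the search time
# astronomically beyond the age of the universe.
slow = years_to_enumerate(70, 1e12)
fast = years_to_enumerate(70, 1e18)
print(slow > UNIVERSE_AGE_YEARS, fast > UNIVERSE_AGE_YEARS)
```

This is the sense in which raw speed does not change the outcome: against factorial growth, a million-fold hardware advantage disappears into the exponent.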
So for some classes of problems, AI will blitz humans – it already does, in fact: on any class of problem with clearly definable boundaries, machines are already far better than humans. When I was a small child I found such problems interesting, but I lost interest in them in my teenage years.
When one looks deeply enough into evolution, the only survivable strategy for complexity is cooperation. Any level of all-out competition in complex systems self-terminates. But those addicted to simple models cannot possibly appreciate that complex reality (both Sam Harris and Richard Dawkins display such addictions, and they have plenty of company – they are just a couple of high-profile examples – and in many respects I like both, and I am eternally grateful to Richard for writing The Selfish Gene).