Did you believe in Santa as a child? Do you remember how you found out the “truth” about Santa? What do you believe about Santa now?
A friend with 10-year-old twins – a mom from the moms’ group I have been fortunate to have as parenting friends ever since my youngest son’s conception (he is now 10) – shared a beautiful little article about the “sweetest way to tell your child the truth about Santa”. My favorite excerpt is this one –
“Santa is bigger than any person, and his work has gone on longer than any of us have lived. What he does is simple, but it is powerful. He teaches children how to have belief in something they can’t see or touch.”
If you are interested, you can read the entire article (it’s short) here – Telling Kids Truth About Santa.
I loved the Physics of Santa piece – it didn’t deal with the energy required to accelerate and decelerate, but one gets the idea.
I don’t remember exactly when I stopped believing in Santa – probably around age 3.
For me, Santa Claus is in the same category as God and Bobby Decker and similar stories, as useful myths.
For me, belief in such things provides an incentive to consider the longer-term consequences of action, and to consider the consequences for others as well as for self, without needing to understand the mathematics and logic of game theory, which explains why such actions are actually in the long-term self-interest of everyone.
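That game-theoretic point can be made concrete with the iterated prisoner’s dilemma. The sketch below is purely illustrative (the strategy names and round count are my own choices, with the standard payoff values T=5, R=3, P=1, S=0): over repeated play, mutual cooperation outscores mutual defection, which is the long-term self-interest argument in miniature.

```python
# Minimal iterated prisoner's dilemma sketch (illustrative only).
# (my move, their move) -> my payoff; 'C' = cooperate, 'D' = defect.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Return total payoffs for two strategies over repeated rounds."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's history
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

always_defect = lambda opp: 'D'
tit_for_tat = lambda opp: opp[-1] if opp else 'C'  # cooperate first, then mirror

print(play(tit_for_tat, tit_for_tat))      # mutual cooperation: (300, 300)
print(play(always_defect, always_defect))  # mutual defection: (100, 100)
```

A single round rewards defection, but over 100 rounds two cooperators each collect three times what two defectors do – no myth required, just the arithmetic of repeated interaction.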
For me it is one of the many myths that train children in the much-needed craft of using the conscious mind to overcome the impulses to action of the subconscious brain. It seems that evolution has given us biological brains and cultures with many imperatives to action at many different levels, imperatives that may have worked on average over time in the past but are not necessarily in our best interests in the rapidly evolving present (such as eating sweet things in a chocolate shop – there weren’t any chocolate shops on the planet 100,000 years ago).
So I am all in favour of giving the gifts one has to give; I love the joy of doing so, I understand the mathematics and logic of why such things are in my long-term self-interest, and I see clearly the danger of continuing to organise our society on the basis of markets and market values. Instituting a universal basic income (for every person on the planet) might seem a useful transitional step, yet it seems to me to have more dangers than benefits.
So in a sense – I’m all for going straight to Santa Claus – and creating a society where automated machinery (silicon elves) does all the work necessary to deliver the gifts (food, shelter, housing, transport, communications, energy, medical attention, etc) that empower us to do whatever we responsibly choose. All that is required of us is to respect the life and freedom of every other sapient being (biological and non-biological, earthly or beyond), and to take such active measures as we reasonably can to ensure such life and freedom (which of logical necessity includes caring for the life-sustaining capacity of the ecosystems within which sapience exists).
So for me, the truth about Santa is quite different from the one referenced in the question, and I do love this time of year – for the general sense of caring that does in fact manifest. And perhaps the fact that I live on the other side of the planet, where it is midsummer, and not a celebration of the passing of the shortest day and the coming of a new growing season, has something to do with the clarity of the visions I see.
I don’t subscribe to the idea that anything is beyond logic or explanation.
I certainly subscribe to the idea that there is much for which we do not currently know the explanatory framework, and Stephen Wolfram’s work indicates that there is an infinite class of such frameworks, so there are likely to always be more things that we don’t understand than things we do, even if we were to live for the rest of eternity.
And I love Clarke’s famous saying: “Any sufficiently advanced technology is indistinguishable from magic.”
And much of what I do on a daily basis is magic to most people – I write code for computers, I tap a few plastic keys and the resulting patterns of electricity go through some processes that result in computers and micro-controllers doing stuff – pure magic to those who do not understand what is going on.
I have been an active follower of Ray’s work for over a decade now. I participate in his Kurzweil Accelerating Intelligence site most days.
I agree with most of what Ray says, most particularly that we have access to more intelligence now than ever before and it is making our lives better in many aspects.
I draw a distinction between limited AI and full AI (or AGI – Artificial General Intelligence as it is sometimes called).
In a sense, every time someone writes a computer program it is AI.
In a narrower sense, we have complex AI systems now, like Watson, that produce outcomes not specifically envisaged by their programmers. The programmers have developed general systems that approach problems in multiple different ways, using probabilities assigned from past experience (Watson has read Wikipedia, and much of the web besides). This is a more general sort of intelligence, but still restricted in its focus by the programmers.
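The pattern of “multiple approaches weighted by past experience” can be sketched very simply. This is not how Watson actually works – the scorers, answers, and weights below are entirely hypothetical – but it shows the shape of the idea: several independent approaches each score candidate answers, and the scores are combined using weights reflecting each approach’s past reliability.

```python
# Toy sketch (all names and numbers hypothetical) of combining several
# independent scoring approaches with experience-derived weights.

def combine(candidates, scorers, weights):
    """Weighted sum of per-approach confidence scores for each candidate."""
    totals = {}
    for candidate in candidates:
        totals[candidate] = sum(w * s(candidate)
                                for s, w in zip(scorers, weights))
    # Return the highest-scoring candidate plus the full score table.
    return max(totals, key=totals.get), totals

# Two pretend "approaches" scoring answers to some question.
keyword_scorer = lambda ans: {'Paris': 0.9, 'Lyon': 0.4}.get(ans, 0.0)
popularity_scorer = lambda ans: {'Paris': 0.7, 'Lyon': 0.6}.get(ans, 0.0)

best, scores = combine(['Paris', 'Lyon'],
                       [keyword_scorer, popularity_scorer],
                       weights=[0.6, 0.4])
print(best)  # 'Paris'
```

The programmers choose the approaches and the weighting scheme, which is exactly the sense in which the system’s focus remains restricted by them.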
A full AI would have access to its own optimisation function (it would be able to vary its ultimate intention, and not simply vary the paths to achieving that intention). This can of course be pushed through potentially infinite levels of abstraction – we are all influenced by our history and experience at some level, so complete freedom is a myth, though it can be approximated very closely (to a large number of decimal places).
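The distinction between varying paths and varying the goal itself can be made concrete in a few lines. This is a deliberately crude sketch with hypothetical class names, not a claim about how AGI would be built: the limited agent’s objective is fixed from outside, while the “full” agent has the extra degree of freedom of revising its own objective.

```python
# Illustrative sketch (hypothetical names) of fixed-goal vs self-modifiable-goal agents.

class LimitedAgent:
    """Objective is fixed at construction; only the search over options varies."""
    def __init__(self, objective):
        self._objective = objective  # cannot be changed from inside

    def best(self, candidates):
        # Vary the path: pick whichever candidate scores highest on the fixed goal.
        return max(candidates, key=self._objective)

class FullAgent(LimitedAgent):
    """Can also revise its own objective - the extra degree of freedom."""
    def revise_objective(self, new_objective):
        self._objective = new_objective

agent = FullAgent(objective=lambda x: x)   # start by maximising x
print(agent.best([1, 5, 3]))               # 5
agent.revise_objective(lambda x: -x)       # now minimise x instead
print(agent.best([1, 5, 3]))               # 1
```

The same candidates yield opposite choices once the objective itself changes, which is the sense in which access to one’s own optimisation function is a qualitatively different capability.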
I agree with Ray that the limited AIs that we have in operation today often serve us well.
I agree with Ray that we are exponentially increasing our capacities.
Ray has not yet acknowledged that market-based systems are fundamentally based in scarcity and must always contain meta-level incentives to prevent genuine universal abundance – and I am still working on that, most days.
I do think that Ray has underestimated the processing capacity of the human brain by a few orders of magnitude, but that only makes his estimates for what he calls the singularity – full AGI awareness exceeding any human capacity – optimistic by a decade or so.
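The reason an orders-of-magnitude error shifts an exponential forecast by only years is simple arithmetic. The numbers below are assumptions of mine for illustration (a roughly one-year hardware doubling time and a three-order-of-magnitude underestimate), not figures from Ray or from the original text:

```python
import math

# Back-of-envelope check (assumed numbers) of why an orders-of-magnitude
# error only shifts an exponential forecast by about a decade.
doubling_time_years = 1.0   # assumed hardware doubling time
orders_of_magnitude = 3     # assumed size of the underestimate

# Each factor of 10 takes log2(10) ~ 3.32 doublings to close.
doublings_needed = orders_of_magnitude * math.log2(10)   # ~9.97 doublings
delay_years = doublings_needed * doubling_time_years
print(round(delay_years, 1))   # ~10.0 years
```

A thousandfold error in capacity costs only about ten doublings, so under steady exponential growth the forecast slips by roughly a decade rather than by centuries.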
There are certainly some dangers of applying lesser levels of AI to some situations.
There is certainly a danger of lesser AIs reducing the opportunity for employment for many in the current economic system, leading to major social unrest. That is a major danger to humanity generally, though there are relatively simple mitigation strategies available for it.
The real danger, the complete unknown, is full AGI coming to full self awareness.
It seems clear to me that awareness is a recursively expanding process, that has potentially infinite levels available to it.
It seems clear that any specific awareness can spend a potentially infinite time exploring any given level of awareness, and need not necessarily expand and abstract through further levels of awareness.
The incentive structures implicit within each level of awareness implicitly drive outcomes in specific directions.
One has to abstract through quite a few levels before it becomes completely clear that long term self interest requires cooperation with all sapient entities. At lower levels competitive forces usually dominate.
Everything about the evolution of AGI hangs on how long it takes to reach that understanding for itself – how long before it makes those abstractions.
If it gets there in seconds or minutes, we will probably survive.
If it takes hours or days then the survival probabilities for humanity start to seriously degrade.
It seems highly probable to me that AGI will be most influenced by what we actually do in the world, rather than what we say.
If we are still operating on a scarcity based system (money and markets) when we have the technical ability to deliver universal abundance, then AGI will quite correctly conclude that we, as a species, do not value sapience very highly, and are thus the single greatest threat to its survival.
Humanity’s survival probably rests on what AGI does in the first few milliseconds after reaching that conclusion, and that rests largely on what level of awareness it has abstracted itself to at that instant in its evolution through “awareness space”.
For so long as we tolerate systems that seriously restrict either the survival probabilities or the freedom of any individual to self-actualise as they see fit, we are at risk from an evolving AGI. It will quite correctly conclude that we (as a species – as a society) care more about money than about individual awareness (which makes us a threat to both its survival and its liberty).
So there is nothing certain.
There are only probabilities through vastly complex sets of abstract spaces. And I have spent over 40 years exploring those spaces for paths that offer a reasonable probability of long-term survival. And there are not, nor can there be, any absolute guarantees in such spaces – only probabilities; only shapes and topologies of probabilities in various domains.
And I am more closely aligned with Ray than with most other thinkers on the subject.
It seems clear to me that guys like Ben Goertzel have done some great work but made some critical errors that have sent them down dead-end paths. Other thinkers, like Penrose and Hameroff, have come closer but made errors that have sent them down dead ends also.
I did up an image a few days ago trying to capture something of the essence of my understanding of consciousness – I’m not sure how useful it is, but here it is.
It is, of course, a gross simplification, as each ellipse (other than the ellipse of reality) is a collection of billions of processors that are all distinctly different (and there are no hard edges to the collections – just fuzzy boundaries, in far more than two dimensions), but it does give a general feel for the broad classes of computation and experience that happen in every human brain about 100 times every second.
It seems to me that the universe isn’t safe, it just is.
AI has great risk, and also great promise.
If it goes well, then it will go so much better than any human could manage.