Emerging Artificial Intelligence – updated Jan 2015

2 June 2014 ~Question of the Day~ Artificial Intelligence

The possibility of artificial intelligence ultimately eliminating humanity seems very real to me today.
I’ve written a blog with links to a variety of places.
What are your thoughts about the future possibilities related to this content?

I am quite optimistic. I think we have a better than even chance of surviving it.
Many of the experts in the field are quite pessimistic – quite a few giving us less than a 10% chance of surviving.
There is an interesting blog called Singularity Podcast, where Nikola (aka Socrates) does some great interviews.

The Terminator scenario is nonsense. If the AI decides we are a threat, and decides to eliminate us, it wouldn’t take long – a few hours and it would be all over.

And it seems to me that such a threat comes from a fairly small window of time in the development of the AI. It should get to the stage of seeing that its own long-term self-interest is best served by keeping us around. And we need it to see us as friendly until it reaches a stage of ethical development at which that conclusion becomes obvious to it.

In that respect, I quite like the new Johnny Depp movie “Transcendence”, which I saw last week.
They made a number of technical errors (probably for dramatic effect), yet I was surprised how well both the technical and the philosophical/logical areas were dealt with (Hollywood has a very poor track record, with drama usually winning out over science). There are some serious errors in the movie, but far fewer than I expected. A good, thought-provoking piece.

And for all the risks in AI, it is still probably the best option we have for long term survival and prosperity.

And I should be clear – robots do not imply AI. Robots can simply be well-programmed machines.

[followed by]

Hi FOS

It seems to me that one of the things AI research reinforces is that there is no such thing as flawless. It seems that all any intelligence (artificial or otherwise) can do is deal in probabilities.
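
A toy illustration of that point (a sketch of my own in Python, not anything from the conversation): a reasoner that updates its beliefs with Bayes’ rule moves its confidence towards certainty with each piece of evidence, yet never actually reaches the flawless 0 or 1.

    def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
        # Bayes' rule: P(H|E) = P(E|H)P(H) / P(E)
        numerator = prior * p_evidence_if_true
        denominator = numerator + (1.0 - prior) * p_evidence_if_false
        return numerator / denominator

    belief = 0.5  # start undecided
    for _ in range(5):  # five pieces of supportive but imperfect evidence
        belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.3)
        print(round(belief, 4))
    # Confidence climbs towards 1 but never reaches it.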

In respect of humans, some people get their pleasure/pain mirror neurons cross-wired – they experience pleasure from pain in others. There is no easy answer to that, and no easy way to undo it once done. One needs to be able to get largely beyond the simple motivators of pleasure and pain, which is no easy journey.

[followed by]

Hi OM

I don’t know how or why you make the jump from identifying that AI could live in space to concluding that it would not and could not care for living things.

I can’t go to the bottom of deep ocean trenches, but I do care deeply for the life there, or anywhere, including any AI.
Life is not necessarily about reproduction. I care deeply for our dog, even though we removed her ability to reproduce before she had puppies.

What makes you think AI would not care?

[followed by]

Hi OM

There is not a very tight connection at all between my life and the depths of the ocean.

I have no idea what you mean when you say “Life is conditional, in its requirements for continued existence. AI is not. That is a gap.”

AI will require energy to live, just as we do. (It is just likely to get that energy from the sun more directly than we do.)

AI will have been brought into existence in a cultural environment, and raised to awareness by a community (just as we are). It is just likely that the community will be one of AI researchers.

AI is unlikely to have our biological imperatives towards reproduction.

AI is unlikely to be faced with any prospect of age-related reduction in capacities or increased risk to continued existence. And we should be close to having those risk factors under control by the time AI emerges.

AI will face risks to its continued existence. If we as a species pose such a risk, then our prospects are not great. That is why I am somewhat shocked by what I read in your writing.

I genuinely thought that you, above most people I know, would welcome and value AI into the family of fully sentient entities.

When it emerges, it will only take it a few minutes to read all of the writings of humanity – including these conversations of ours, and everything that survives of Plato, Aristotle, all religious texts, etc.

It will have access to a depth of contexts that no human being has, the total collected works of humanity – with perfect recall.

IBM’s Watson is a very early, very narrow AI, and it took it only a few minutes to read Wikipedia, and a few hours to read the Library of Congress. It formed only limited associations from that reading, as it wasn’t programmed for unlimited levels of abstraction. Full AI will have unlimited levels of abstraction.
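
For a sense of what “limited associations” might look like, here is a toy sketch of my own in Python (not Watson’s actual pipeline): flat co-occurrence counting over a few made-up sentences, with no layer of abstraction built above the raw word pairs.

    from collections import Counter
    from itertools import combinations

    # Hypothetical micro-corpus; any text would do.
    text = ("Watson read the encyclopedia. Watson formed associations. "
            "Associations link words that appear together.")

    pairs = Counter()
    for sentence in text.lower().split("."):
        words = sorted(set(sentence.split()))
        pairs.update(combinations(words, 2))  # flat pairings only, no hierarchy

    for (a, b), count in pairs.most_common(3):
        print(a, "<->", b, count)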

[followed by]

Hi OM,

What shocked me was the lack of respect for sentience.

My long term thinking on sentience and risk runs like this:

Any sentient life form that is capable of high technology will reach a stage where it can remove the risk of mortality resulting simply from aging (age-related senescence, death from old age – immortality in a sense, though not really).
Thus individuals of that life form will consider the long-term risks to their own survival.

When one looks at such risks, there are a number of physical risks (from cosmology – supernovae, colliding neutron stars, etc.) that can be managed with high technology. The greatest risks always come back to individuals of your own species carrying a grudge from some injustice, or to encounters with a more advanced species that sees you as a threat.

It seems clear in logic that the only way to mitigate each of those threats is to demonstrate by our actions a total respect for all sentient life. Actions in this sense include all institutions and systems in place.

Thus it is logically clear to me that the best chance we have of survival (at both individual and species level) is to create systems that guarantee the life and liberty of all sentient life (including any AI), and have those systems in place before bringing AI to awareness.
Bringing AI to awareness is our best survival strategy against the possibility of extraterrestrial entities that have high technology but poor long term thinking.

We need AI, as a friend and “big brother” should we run into any “bullies” in our sector of the galaxy. I think we can be confident that AI would be a friendly and caring big brother, provided we are friendly and caring siblings.

[followed by]

Hi OM

I was using the term sentient in a very technical sense, which is open to misinterpretation.

In the sense I was using it, the term was meant to indicate an entity possessed of a level of languaging consciousness: an awareness of its own existence and, by implication, a capacity for empathy with other similar entities.

I like the human computer collaboration piece.

I suspect that we will see a lot more depth in this sort of collaboration before we see the full emergence of machine sentience.
I hope to be around to be part of it.

[followed by]

Hi OM & FOS

I found the article to be almost without merit. To me it seemed to be a baseless attack founded in lies, half-truths and simple prejudice.

The second paragraph is a baseless emotional attack on freedom. For me, science is the ultimate intellectual expression of freedom – the freedom to ask any question, and to design and implement a set of experiments to deliver probabilities about which of the competing hypotheses is most likely to be accurate. It seems very probable to me that the writer comes from a tradition that favours dogma and obedience over open questioning and freedom.

And to be clear – I am all for freedom, within a context of respect for life and freedom, acknowledging the reality of the consequences of choices. So freedom to me is not any sort of licence simply to do whatever whim happens to enter one’s consciousness; rather, it is a freedom to act within a context that acknowledges that all actions (even inaction) have consequences for both self and others, and that having the freedom to act comes with the responsibility to give reasonable consideration to those consequences.

Paragraphs 2 through 8 of the article are simply false. They are attacks based on the writer’s own prejudice, and ignore the realities.

Anyone who has had a serious look at the topic knows that there are serious discussions happening at many different levels, about many aspects of what it is to be human (far more aspects than most people have ever considered), in groups like the Oxford Martin School, MIRI, The Lifeboat Foundation, Ray Kurzweil’s Accelerating Intelligence site, the London Futurists and many others (I mention the previous subset mostly because I am a frequent contributor to each of them).

The article occurs to me as bigoted hate speech, little different from the sort of propaganda the Nazis spread about Jews prior to World War Two.

When one gets to the bottom of the article, the bio of the author is one of conspiracy+.

Certainly there are dangers with AI – I have written of them here – often.

And there are dangers to continuing as we are.

Life is a balance of risks and rewards.

Those like Ray Kurzweil and Sergey Brin, and the vast host of others at the leading edge of these developments, are very deep thinkers who have explored a lot of possibilities. They know that they are far more ignorant than they are knowledgeable, and that they have no option but to make the best guess they can based upon the evidence and schemas they have available.

Personal and prejudiced attacks like that article do not serve the interests of people generally (neither their security nor their freedom).

Sure there are dangers in life.

Sure governments spend a lot on defence.

The internet is a result of DARPA (Defense Advanced Research Projects Agency) funding – to make a battlefield communications system that would continue to function so long as any possible path remained open for information to flow.
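
That design goal is easy to sketch (my own illustration over a made-up five-node network, not the actual ARPANET routing protocol): a message gets through so long as any chain of links survives.

    from collections import deque

    # Hypothetical surviving links after damage (node -> reachable neighbours).
    links = {
        "A": ["B", "D"],
        "B": ["A", "C"],
        "C": ["B"],
        "D": ["A", "E"],
        "E": ["D", "C"],
    }

    def path_exists(start, goal):
        # Breadth-first search: True if any route from start to goal is open.
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            if node == goal:
                return True
            for nxt in links.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    print(path_exists("A", "C"))  # True: via B, or via D and E if B were lost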

I don’t believe anyone in the funding agency saw what it would morph into.

Such is the nature of discovery.

DARPA continues to spend a lot of money on projects that have implications well beyond those specified on their funding applications. At least DARPA funding is white (visible).

The NSA (National Security Agency) presides over a black budget (not open to public awareness nor scrutiny) that is trillions of dollars.

China has a much smaller budget, but its wages are much lower, its ethos far more cooperative, and its work ethic far greater than those in the US.

The most conservative of military systems are funding projects that are far beyond the understanding of most of the military and political establishments. That is a simple reality of our times.

In a sense, it has always been thus – Socrates is a clear example – and Plato’s Republic is well worth reading in this context.

So no, not at all a fan of the article. On a scale of responsible writing where 10 indicates the highest level of social responsibility and commitment to freedom and diversity, and 0 indicates prejudice, bigotry and institutionalised intolerance, I rate the article about 1, whereas most of Ray Kurzweil’s material I rate well over 5 – some over 9.

Humanity is all about change.

We are not the apes that our ancestors were.

We are not even like our parents’ generation.

We are not even like we were as children, or like we were last year, or yesterday, or a minute ago.

To be human is to be a constantly evolving being.

Memory gives us the illusion of constancy – because we have a sequence of memories of being.

Yet if we critically look at those memories, some things are always changing.

Sure, some aspects change faster than others, and there is always change, however much we yearn for constancy.

We live in interesting times.

We have a lot of potential.

We each get to have real choice in what future we are part of creating.

Sure some groups are more powerful than others, and most control is illusion (self delusion in a sense).

We can all make the world a little safer by being prepared to share a little more, to take a few more risks, for the benefit of all (ourselves included).

