Valence and AIs

How understanding valence could help make future AIs safer

It seems clear that the complexity of human valence is “off the charts”.

It seems that reward or value for humans derives from multiple overlapping, intersecting, and deeply nested sets of survival heuristics, encoded at both the genetic and the memetic level (both cultural and individual).
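The layered picture above can be sketched as a toy model — entirely illustrative, with hypothetical heuristic names and weights, not a claim about how valence actually works — in which an agent's valence for a state is an aggregate over genetic and memetic layers of heuristics:

```python
# Toy illustration only: valence as a weighted aggregate of overlapping
# survival heuristics at genetic and memetic levels.
# Every heuristic, weight, and state key here is hypothetical.

def valence(state, layers):
    """Sum weighted heuristic scores across nested layers."""
    total = 0.0
    for layer in layers:
        for weight, heuristic in layer["heuristics"]:
            total += layer["level_weight"] * weight * heuristic(state)
    return total

# Hypothetical genetic-level heuristics: each maps a state to [-1, 1].
genetic = {
    "level_weight": 1.0,
    "heuristics": [
        (0.8, lambda s: 1.0 if s.get("fed") else -1.0),     # hunger/satiety
        (0.9, lambda s: -1.0 if s.get("threat") else 0.0),  # threat avoidance
    ],
}

# Hypothetical memetic-level heuristics (cultural and individual).
memetic = {
    "level_weight": 0.5,
    "heuristics": [
        (0.6, lambda s: 1.0 if s.get("approved") else 0.0),  # social approval
    ],
}

state = {"fed": True, "threat": False, "approved": True}
# Arithmetic: 1.0*0.8*1.0 + 1.0*0.9*0.0 + 0.5*0.6*1.0 = 1.1
print(valence(state, [genetic, memetic]))
```

Even in this cartoon form, the point survives: the heuristics overlap and interact, so no single layer's score predicts overall valence on its own.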

There are not, nor can there be, any absolute guarantees in life. But we can certainly move a lot of probabilities a long way from where they are right now.

While we hold on to markets and exchange values, we are in grave existential risk territory (with or without AGI) – see my latest thoughts on the issue – there are many others on that site.

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) - see