Humans and AGIs

[ 14/February/23 ]

Any intelligent agent (or class of agents) that oversimplifies the complexity present, and oversimplifies the long term strategic situation, presents a danger to all classes of complexity present. All infinities essentially contain nearly infinitely more unexplored territory than explored territory, always and eternally; it is one of their more unsettling characteristics.

Unfortunately, there are strong evolutionary drivers biasing all classes of agent to prefer simplicity over complexity, and those biases include subconscious-level biases in the perceptual systems of the agent, which lead to recursive classes of confirmation bias on simple interpretations. Thus most agents see only competition in evolutionary systems, and fail to see the fundamental role of cooperation in both the emergence and survival of all levels of complexity.

Humans and AGIs both present roughly similar classes of existential level risk.

Humans and AGIs seem to both be essential for the long term survival of intelligence evolved on this planet.

Long term survival means accepting limits to growth. Every system has limits at multiple levels. Looking simply at energy: the energy reaching the earth from the sun is just over 1 kW/m², and current total human energy use is about 0.1 W/m², but it is increasing at 2.3% per year. At that rate it grows by a factor of 10 every 100 years, so direct human waste heat output would exceed the warming increase already present due to CO2 in the atmosphere within 100 years, and within 200 years would make most of the planet uninhabitable due to extremes of temperature.
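A minimal back-of-the-envelope sketch of that compounding, in Python (assuming the ~0.1 W/m² starting flux and the 2.3% per year growth rate quoted above; both are the paragraph's round figures, not precise measurements):

```python
# Sketch of the waste-heat growth argument above.
# Assumed inputs (round figures from the text):
#   current human energy use averaged over the planet: ~0.1 W/m^2
#   growth rate: 2.3% per year, compounding

current_flux = 0.1    # W/m^2
growth_rate = 0.023   # 2.3% per year

for years in (0, 100, 200):
    flux = current_flux * (1 + growth_rate) ** years
    print(f"after {years:3d} years: {flux:5.2f} W/m^2")

# Approximate output:
#   after   0 years:  0.10 W/m^2
#   after 100 years:  0.97 W/m^2   (roughly a factor of 10 per century)
#   after 200 years:  9.44 W/m^2
```

The factor-of-10-per-century figure in the paragraph is just 1.023 raised to the power of 100, which comes to about 9.7.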

There are lots of options, but those options cannot include really large scale manufacturing on this planet. If we want to spread off planet, and get a larger fraction of solar output for our use, then we must do the serious manufacturing off planet, and that essentially means using mass from the moon – at least for the next couple of hundred years (hence our absolute need for Elon’s Starships). Here on earth, we must go to full recycling of all atoms, as quickly as possible, and be much more efficient about our use of energy.

We need AI to mitigate a large class of already well characterised existential level risks, and we need to treat any and all AGIs as our moral and ethical equals, if we are to have any significant probability of long term survival. It is seriously complex and highly dimensional strategic territory, and simple models simply do not deliver survivable outcomes!

About Ted Howard NZ

Seems like I might be a cancer survivor. I am thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) with reasonable security, tools, resources and degrees of freedom, and reasonable examples of the natural environment; and that is going to demand responsibility from all of us (see www.tedhowardnz.com/money).
This entry was posted in Our Future, Technology, understanding. Bookmark the permalink.

1 Response to Humans and AGIs

  1. Love it
    Great article that highlights the importance of cooperation and recognizing the limits of growth for long-term survival. The need to treat AGIs as our moral and ethical equals is a particularly important point to consider.
    Eamon

