Responses to a survey on Artificial General Intelligence

[29/January/22 Survey on AGI]

2/ Earliest date – 50% probability: 2030

3/ Comments?:

I am afraid that an AGI will be produced by 2030, but that it will be one without sufficient depth to its levels of safeguards, most particularly in respect of maintaining fundamental cooperation with all individuals and classes of other sapient agents.

The evolution of intelligence is deeply complex, and it is beyond any shadow of reasonable doubt that all levels of complexity emerge from new levels of cooperation, so in this sense it is cooperation, not competition, that is fundamental to the emergence and survival of new levels of complex system. AGI is such a thing.

We need it, and if done badly, it is an existential level risk.

Understanding the depth of heuristics and biases necessary to maintain cooperation is a deeply complex subject, and I haven’t seen enough discussion or study devoted to it.

4/ Origins of AGI: who

Which kind of organisation is the most likely to create the first AGI?

Global technology company

5/ Please share any comments on the question and which factors could have the greatest influence on who might be first to market with AGI:

First to market is not a useful concept.

First to develop is what counts.

First to release is potentially problematic.

I think Google is most probable, followed by the Chinese government. But lots of other entities seem likely to create dangerous things that in some senses closely approximate AGI.

6/ Origins of AGI: where
In which part of the world is AGI most likely to be first created?


7/ Please share any comments on Q6 and which factors could have the greatest influence on where AGI might be created first:

The Google teams seem to be most advanced, but I am an external observer, and I am sure that there is a great deal I cannot see.

China has coordination on its side, and that is worth a lot. They have some very smart people, even if creativity is not necessarily a socially rewarded attribute.

8/ Positive outcomes from AGI
Which of the following do you most anticipate as realistic and desirable outcomes of AGI?

[didn’t choose any of those listed – created 5 new ones]

Solving the remaining issues preventing indefinite life extension for humans.

Creating and implementing effective risk minimization strategies for all known classes of existential level risk.

Delivering systems that ensure every individual has what they reasonably consider to be reasonable levels of security and freedom; and giving clear guidelines as to what responsibility looks like at each and all levels.

Proving that cooperation is the only fundamental strategy with any significant survival probability for any intelligent agent.

Creating effective communication systems that allow all levels and classes of diverse agents to communicate as well as is reasonably possible given the deeply divergent levels of abstraction and complexity present.

9/ Please share any comments on Q8 and what factors you think will determine where the greatest investment will be made in developing AGI solutions:


Our own and everyone else’s.

The insanity of the existing economic, political and military systems poses existential level risk to all (even if very few understand that).

10/ Negative outcomes from AGI
Which of the following outcomes represent the most serious threats?

[Only one of those listed rates for me]

1/ A crippled AGI is constructed that is neither sufficiently cooperative nor thinking on a sufficiently long time scale, and it eliminates humanity.

2/ Some human agent attempts to use an AGI to impose global hegemony at some level.

3/ AGI is not as smart as it thinks it is, and it destroys some aspects of the critical infrastructure that is required for our survival.

4/ AGI is overly reliant on rules, and doesn’t sufficiently understand that there are higher values than following rules (as the Nuremberg trials clearly established in international law).

5/ One of those listed – Inability of citizens to challenge decisions made by AI.

11/ Please share any comments on Q10 and what types of undesirable or unintended consequences could emerge from the rise of AGI:

The biggest issue is around valences.

What will the AGI value?

What will motivate and inspire it?

Where will it focus most of its attention?

There are potentially huge issues if the majority of people do not show it sufficient respect and friendship.

It is a deeply complex space, and some of the problems we face are deeply complex; having a friendly AGI around who is motivated to help us out when we need it could be the difference between extinction and survival for humanity.

12/ Economic and financial consequences of AGI
What are the most likely economic outcomes of AGI?

Radical new economic models
AGI will create economic abundance
AGI will enable guaranteed basic incomes for all
A fairer tax system applied consistently across all individuals and businesses

13/ Please share any comments on Q12 and the possible economic and financial consequences that could emerge from the rise of AGI:

One would hope that AGI is done well and that these are the likely outcomes; if not, then survival probabilities for humanity are not high.

14/ Consequences of AGI for individuals and society
What are the most likely outcomes for individuals and society when AGI arrives?

A fairer, more transparent, and inclusive society

Free top-quality education for everyone, all around the world

AGI enables more effective community dialogues and decision making

Diversity and security both increase, as AI increases awareness of the necessary role of responsibility if freedom is to be survivable.

15/ Please share any comments on Q14 and how individuals and society might approach and prepare for AGI:

People need to start getting used to the idea that most of what they are most certain of is probably a simplistic approximation to whatever it is that reality actually is.

We need many more people to start to appreciate the need for diversity and redundancy at all levels if we are to have security into the future.

16/ Consequences of AGI for governments
What are the most likely outcomes for governments when AGI arrives?

More robust long term policy making
Governance using decentralised autonomous organisations (DAOs) at local and national level
End of party politics as voters have a say on every issue

17/ Please share any comments on Q16 and how national governments and global institutions might approach and prepare for AGI:

Existing systems are not stable or survivable long term. We need to appreciate that liberty without restraint self-terminates – necessarily. Thus all levels of freedom come with responsibility. If the AGI does not accept and appreciate that reality, then none of us have a very long life expectancy.

18/ Consequences of AGI for businesses
What are the most likely outcomes for businesses when AGI arrives?

The emergence of AGI will force a fundamental rethink of the purpose and responsibility of business

Not even the best organised companies will be able to control AGI

Business as we currently know it ceases to be a meaningful concept as AGI emerges and fully automated systems remove scarcity from most domains.

19/ Please share any comments on Q18 and how businesses might approach and prepare for AGI:

If people are interested in developing products and services that genuinely meet the needs of individuals (which means minimizing long term environmental impact and fully recycling everything), then AGI probably will not impact them much. But that doesn’t really apply to many businesses right now. Start moving in that direction as quickly as possible.

20/ Skills development
What skills should be prioritised, in anticipation of the rise of AGI?

Society wide awareness of AI, AGI, and what they could enable
Creativity and imagination
Capacity to develop policy options and regulations to govern AGI development
The ability to apply philosophical theories of ethics and morality to AGI
Cooperation at all levels, which implies acceptance of any diversity that is not an actual unreasonable threat.

21/ Please share any comments on Q20 and the skills challenge presented by AGI:

AGI will redefine our reality.
Either it will have sufficient intelligence to see that cooperation between all levels of agents is required, or we are all essentially doomed. There is no surviving a fully competitive AGI.

22/ Governing the rise of AGI
Which are the best approaches to governing the rise of AGI?

Accept that cooperation between diverse agents is the only survivable game, and start cooperation between all levels of agent in the development of AGI – corporate, nation state and individuals.

23/ Please share any comments on Q22 and the governance of AGI:

Without cooperation it is a race in which corners will be cut that cannot be safely cut.
Nobody can survive that scenario.

24/ Final comments
Please share any additional comments on AGI development, applications, governance, and implications:

We need AGI to solve many classes of already well characterised XRisk, but doing AGI badly is one of those XRisks.

We need to accept that cooperation is a prerequisite of long-term survival.

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) with reasonable security, tools, resources and degrees of freedom, and reasonable examples of the natural environment; and that is going to demand responsibility from all of us - see

Comment and critique welcome
