A comment on YouTube
At about 1:02:10 Claudia makes the statement: "In public decision making, you don't get far without having to account for the fact that the cause of the risk is usually a benefit to somebody else. In the area of AI the drivers of applications are likely to be economic benefits, and when the downside is existential risk there is no benefit that outweighs it."
That statement assumes that AI offers no reduction of existential risk – or, looked at another way:
What is the likely existential risk of a future including AI vs the existential risk of a future without AI?
There can be no clear answer to that question, but my sense of it, as someone who has been interested in existential risk and AI for 50 years, is that the existential risk of reasonably probable futures without AI is exponentially larger than that of futures with AI.
At about 1:17:40 Rowan, in his reply to two lengthy and insightful questions, makes the statement: "Clearly, since 2008, there is a remarkable focus on trying to create a stable international financial system."
To me that is clearly an impossibility.
The really deep issue resides in markets themselves, and is expressed in a value equation:
[Market Value] = [Individual Value] x [Scarcity]
(an instantaneous measure taken across current market participants)
In a sense it is simply a restatement of supply and demand.
What this restatement makes clear is that universal supply (zero scarcity) will always deliver zero market value.
The necessary outcome of that is that markets require poverty to function.
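The value equation above can be sketched as a toy numerical model. Everything here is illustrative: the function name, the 0-to-1 scarcity scale, and the sample figures are my assumptions, not anything from the original comment or from economics tooling.

```python
def market_value(individual_value: float, scarcity: float) -> float:
    """Toy restatement of supply and demand:
    market value is individual value scaled by scarcity.

    scarcity is taken on a 0..1 scale, where 0 means universally
    available (zero scarcity) and 1 means maximally scarce.
    """
    return individual_value * scarcity

# A good that individuals value highly (say, air) but that is
# universally available commands zero market value:
print(market_value(individual_value=100.0, scarcity=0.0))  # 0.0

# The same individually valued good under partial scarcity
# commands a nonzero market value:
print(market_value(individual_value=100.0, scarcity=0.5))  # 50.0
```

The point of the sketch is the limit behaviour: no matter how large the individual value, the market value collapses to zero as scarcity goes to zero, which is the sense in which markets require scarcity (and hence poverty for someone) to function.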
In an age without automation that was ethically tenable, and mathematically semi-stable.
In an age where full automation is possible, it is not simply ethically untenable; it is mathematically self-destructive once one factors in the very complex set of equations that includes the human tendency to punish injustice. When our existence fundamentally relies on a system that is so fundamentally unjust, that is, in and of itself, an untenable existential risk to everyone.
So I see a crisis for markets – any form of markets. I see no mathematical way of avoiding it within a market context, if markets are a significant factor in survival.
Purely from a survival perspective, we have no option but to change the system – to evolve it into something more secure (where security is a probability function).