[ 12/2/20 ]
Not really a useful question.
The answer is necessarily yes: of course many different sorts of threat exist.
The far more useful question is, what seems to be the balance between threats and opportunities posed by AI?
It seems clear to me, beyond reasonable doubt, that AI is needed to solve many problems that already pose severe existential risks to humanity.
And, yes, of course there are risks. Everything has risks. Cars kill a lot of people, yet they are very useful and probably save many more lives.
Certainly there are many valid concerns about AI that need to be explored. But overall, to me, the benefits clearly outweigh the risks, real as those risks are.