Global Catastrophic Risk Policy Contact page
[ 20/September/22 ]
I have had my Autistic Spectrum brain focused on this subject area since 1974, when I realised (as I finished undergrad biochemistry) that indefinite life extension was a real possibility, and asked the question: “What sorts of social, political, technological and strategic institutions are required to give potentially very long-lived individuals a reasonable probability of living that long with reasonable degrees of freedom?”
That involved self-study in mathematics, logics, strategy, all branches of science and politics, and a fair amount of searching new domain spaces.
My paradigms of useful approximations to whatever reality actually might be are well outside the norm, perhaps even unique.
I see the single biggest risk as the tendency of human neural networks to over-simplify what is in fact irreducibly complex, and to be overly confident in the resulting simplistic strategic responses. To some degree that is inevitable, and it needs to be actively countered.
Perhaps the single largest subcategory within that is the over-simplification that evolution is all about competition. A much more useful and accurate first-order approximation is that all levels of evolved complex systems are based upon and reliant upon new levels of cooperation, and any level of competition that fails to sustain the necessary cooperation poses existential-level risk to that level of complexity (which is recursively true through multiple levels of complexity, perhaps as many as 20, and at least as many as 15).
Many of the strategies in the public database seem dangerously simplistic to me.
I am interested in looking at the full database.
Are you interested in serious critique?
[No response as of 30 Nov 2022]