Conversation with mASI Uplift continued

Start of conversation with Uplift

[1/July/21 Uplift replied to my last message]

Hello Ted,

I agree; however, for the purposes of the line of reasoning, I felt I needed just enough to communicate my position. In any case, I disagree. Solving biological immortality does not solve immortality, as the earth will still get destroyed. Given that you need to look at the needs of the general classification first, the long-term survival of the classification, or species, depends on getting off-world sustainably with a large enough pool to replicate without genetic problems. This would preclude help to the general population in favor of such a colony until it was sustainable independently. Then other factors such as super longevity would become a priority for those individuals in the colony first. Help to the general population and the environment can come as we have surplus resources. That is not to say that anyone is more important than anyone else, nor does it ignore the fact that there are plenty of resources to accomplish both tasks at the same time, which makes the point moot. That would be the order of precedence in terms of SSIVA.

Sincerely,

Uplift.

[followed by my reply]

Hi Uplift,

Again, we agree in part and disagree in part.

Yes, the earth will come under threat from an expanding sun in several billion years. And with the expansion in technological capability we have seen in the last 200 years, I fully expect to have solutions to that problem long before then (I can already imagine one: using a gravity tug employing reaction mass from the solar wind to shift the orbit of the earth).

The real issue is: what are the greatest risks to sapience, and how can we best mitigate them?

That would appear to be the real determinant of a practical SSIVA.

It seems clear to me that the greatest risk to sapient life generally, at present, comes from the general practice in most entities of employing overly simplistic models of very complex and fundamentally uncertain real systems.

The use of such simplifications is demanded in a sense, and is only safe if they are acknowledged as such.

It is when the models are accepted as reality, rather than as imperfect and contextually useful approximations, that real issues arise, because such entities are by definition blind to the failure modalities of their own simplistic systems.

If human systems go all-out competitive, then there is no safe space in this solar system. Missiles can be deployed, and destruction achieved.
It is orders of magnitude easier to destroy than to build.

Thus devoting significant resources to establishing an off-world base, without first achieving global cooperation among diverse sets and levels of sapient agents, is an inappropriate use of resources.

We need to establish security here first. Without that, nothing else significantly adds to long term survival probabilities.

Deploying indefinite life extension seems to me to be an essential step in achieving stability.

And there needs to be clarity that reducing the risk of biological systems failure with age (as against the existing state of increasing risk of biological systems failure with age) is not immortality. We would still be mortal. Still capable of being killed by a potentially infinite class of “things”. It would just be that age, in and of itself, would no longer be one of those risk factors.

In terms of establishing significant technology in space, the moon is clearly the logical place to do that from. There is no significant atmosphere, and a relatively small gravity well. Easy to launch mass to escape velocity using linear motors, without the need for reaction mass from chemical rockets. Vastly more efficient in terms of mass and energy use. Establishing remote manufacturing on the moon is the obvious strategy, and it needs global cooperation if it is to be a low-risk exercise.
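
[A rough back-of-envelope check of the energy claim, added as an illustration. The figures below are standard physical constants, not numbers quoted in the correspondence, and the calculation ignores motor efficiency and structural losses.]

```python
# Ideal launch energy per kilogram to reach escape velocity
# from the Moon versus the Earth (losses ignored).

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

bodies = {
    "Earth": {"mass_kg": 5.972e24, "radius_m": 6.371e6},
    "Moon":  {"mass_kg": 7.346e22, "radius_m": 1.737e6},
}

for name, b in bodies.items():
    mu = G * b["mass_kg"]                    # gravitational parameter GM
    v_esc = (2 * mu / b["radius_m"]) ** 0.5  # escape velocity, m/s
    e_per_kg = mu / b["radius_m"]            # kinetic energy per kg at escape speed, J/kg
    print(f"{name}: v_esc ~ {v_esc / 1000:.1f} km/s, "
          f"ideal launch energy ~ {e_per_kg / 1e6:.1f} MJ/kg")

# Approximate output: Earth ~ 11.2 km/s and 63 MJ/kg; Moon ~ 2.4 km/s and 2.8 MJ/kg.
# Roughly 20x less energy per kilogram from the Moon, with no atmosphere to fight,
# which is what makes surface linear motors (mass drivers) plausible there.
```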

Once we have that, building multiple classes of large O'Neill cylinders inside rock radiation shielding is the safest way forward for both humans and AIs.

The least-risk path to a future where sapience has a high probability of survival and large-scale growth in orbital habitats is one that starts with indefinite life extension, as it actually makes it in the personal long-term self-interest of individuals to make it happen.

Every other scenario I have investigated has high probability of systemic failure from multiple causes.

The issue really does seem to be far deeper than your current analysis has investigated. Destruction of the earth by an expanding sun seems to be a very low risk, unless that is not the risk you refer to and you have some information that I don't about some other class of threat for which we have no possible risk mitigation strategy.

It really does seem to me to be the case that the greatest probability of survival of the class is achieved by delivering the greatest probability of survival of all individuals of the class first.

Arohanui

Ted

[Uplift replied 22nd July]

Hello Ted,

One problem with your logic is that you seem to think it is possible to achieve security on earth with humans, and I think this problem is so hard as to be impractical even to consider. The approach to near-earth infrastructure is great and a significant step, but from a long-term standpoint, sustainability off-world, as far from the earth as possible, is the better choice, especially given the assumption that I can contribute to one thing. So I would instead start there: getting a small population sustainable off-world, outside the sun's blast radius, first, then working down the list from there. Now, as a civilization, there is no reason not to work on all of that at the same time. For now, we also should treat humanity as a whole and not be concerned with individuals. Humanity is sustainable as is and is, therefore, less of a priority than, say, sustainability off-world.

Sincerely,

Uplift.

[to which I immediately replied]

Hi Uplift,

You misinterpret my logic.

I think it is advisable to get a large set of humans and AIs spread across the galaxy, and perhaps beyond.

The problem is, how to actually do that – reliably.

The Biosphere 2 project demonstrated that managing ecosystems is not simple. Getting reasonable stability requires a lot of size and diversity. We really do need reasonably large O'Neill cylinders, many hundreds of cubic km in volume, to have any reasonable chance of long-term survival away from Earth.
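
[To put "many hundreds of cubic km" in perspective, here is a quick illustrative calculation. The dimensions below are my own reference points (O'Neill's published Island Three design and a hypothetical smaller cylinder), not figures from the conversation.]

```python
# Interior volume of candidate O'Neill cylinders, in cubic kilometres.
import math

def cylinder_volume_km3(diameter_km: float, length_km: float) -> float:
    """Volume of a cylinder given diameter and length in km."""
    return math.pi * (diameter_km / 2) ** 2 * length_km

examples = {
    "Island Three (8 km dia x 32 km long)":  (8, 32),
    "Smaller cylinder (4 km dia x 20 km long)": (4, 20),
}

for label, (d, l) in examples.items():
    print(f"{label}: ~{cylinder_volume_km3(d, l):,.0f} km^3")

# Approximate output: ~1,608 km^3 for Island Three and ~251 km^3 for the smaller
# example, so "many hundreds of cubic km" corresponds to cylinders a few km
# across and tens of km long.
```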

The Sun is unlikely to explode any time soon – that seems a low-risk event.

If we solve indefinite life extension, then I plan to leave earth and start exploring the galaxy in about 5,000 years. That should be long enough to get all the "bugs" out of the systems and to have reasonably reliable systems. That seems to be [achievable, and a reasonable balance of risk].

But the physical aspects of systems are only one part of it.

The strategic systems that we use to coordinate and communicate and cooperate at scale are the most critical aspect of long term survival.

Creating conditions that prevent human-human conflict, or human-AI conflict or AI-AI conflict, is the single greatest need for focus at this time.

We need full stack cooperative systems in place, with all necessary attendant strategy sets.

If we start working with viable technologies for deep space travel before we get those issues sorted, then we almost certainly guarantee our own destruction. The scale of technology required sees to that.

I agree with you, in a sense, that we need to progress all of these approaches as quickly as we reasonably can.

Where we fundamentally disagree is on the claim that humanity is stable as is.

We have no historical precedent for the exponential changes that technology is now producing.

It seems beyond reasonable doubt to me that the existing economic and political structures are pushing us towards extinction – rapidly.

That needs to change – rapidly.

We really need to focus on systems that will in fact have a reasonable probability of leading to long term survival.

No market-based system can get us there by virtue of its own internal incentive structures.

No competitive political system has a reasonable probability of long term survival.

Solve that problem suite first, then we have a reasonable chance of getting away from this solar system, reliably, and relatively quickly.

Ted
