I suspect Ray Kurzweil knows the answer to that, but he seemed to quite intentionally not answer the question that Peter Diamandis asked him on that subject in their most recent interview. That to me indicates that the question has been answered, but that they do not yet have the technology to scale it to universal delivery; and they have not yet found a way to convince most people that it is a good idea.
I strongly suspect that the greatest issues in that respect revolve around accepting that universal cooperation, and universal tolerance of diversity, and the universal supply of all the reasonable needs of life and liberty, are requisites for a reasonable probability of indefinite life for anyone. Some modes of thought have a great deal of difficulty accepting those realities.
The biggest issue we seem to face is the over-simplistic (and fundamentally wrong) notion that evolution is all about competition.
Competition can certainly play a role in evolution, but when thinking about emerging complexity, cooperation is far more important (by a full order of magnitude) than competition. That needs to be deeply understood, at all levels.
It also needs to be understood that every level of complexity has a necessary set of boundary conditions that must be met to sustain its existence. Those conditions are rarely simple; they are usually extremely complex, with boundaries allowing easy passage for some things, making the passage of other things more difficult, and actively transporting some things (in both directions). Simple hard boundaries are rarely (if ever) useful.
Two of the major risks to indefinite life and liberty are the twin tyrannies (of the majority and of the minority); thus at every level we need multiple independent and redundant systems. In this context, some of the great strengths of market capitalism have been the ways in which it has promoted distributed computation, cognition, information flows, and governance. Unfortunately, markets fail to deliver useful value measures when faced with the technological capacity to deliver universal abundance (of anything). Thus we need to develop non-market-based mechanisms for the distribution and redundancy of all of those functions.
And many of our social institutions embody necessary systems that few (if any) are fully aware of. So there is deep risk present in being overly hubristic and moving too far too fast, without sufficient awareness.
Solving the biochemical requirements for indefinite life extension is trivial in comparison to the social and cultural issues present.
In this context, Artificial General Intelligence is both the greatest source of risk-mitigation strategies and itself a great source of risk if developed in a narrow or competitive environment.
So we find ourselves in a deeply complex situation, one that has fascinated me for 44 years, since I first became fully conscious of its strategic reality.