If doing something good for others makes us feel good, can there ever be such a thing as pure altruism?
I think it depends very much on definitions.
I’m inclined to agree with Bhatta that pure altruism doesn’t seem possible, though it does seem possible to align values such that one’s own long-term interests align with those of everyone else – which seems to be about as close as we can get to approximating such a thing.
All of my actions must have a value for me, even if that value is neither immediate nor obvious to anyone else.
I have a certain mastery of delayed gratification, though not across all domains.
So yes – very interesting area to explore, as I have many times in the past few decades.
It would seem that in both logic and in decision or control theory, there must be some benefit at some level – some target that is “desired” at some conscious or subconscious level – in order for us to act in a way that produces that outcome (unless it is a completely random accident, but then how could we call that altruism?).
So in terms of decision theory, the question very much depends on the definitions one adopts of “altruism” and “pure”, and how one’s definition of altruism implicitly involves the notion of “intention” or “will” and the depth of understanding one brings to those terms.
To me, all such things resolve back to probabilities of influence in action between different levels of systems. To result in the sort of systems that have produced this set of words seems to require about 20 levels of complex systems, with influence between systems all the way up and down the stack, as well as laterally between the systems at any particular level. And at most levels the stack involves many billions of individual systems. That seems to be what it is to be an embodied human being – complexity beyond any hope of anything other than the broadest of brush-stroke sketches, ever. The numbers involved are beyond any hope of conscious-level comprehension (in terms of the details, rather than in general terms of the sorts of systems involved).
And within all that complexity, the systems do seem to follow sets of probabilistic rules that involve what might be considered targets, feedback, projections, differentials, etc.; and at the highest levels these do actually seem to be conscious-level choices (at least for the most part, acknowledging all the levels of probabilistic influence in their construction). The stack thus runs from “mostly conscious” at the top down to “mostly unconscious” at the bottom.
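The idea of targets, feedback and differentials can be sketched very loosely as a simple proportional feedback loop. This is purely an illustrative toy, not a model of anything claimed in the text – the numbers, the function name and the gain are all made up for the sketch:

```python
# Toy sketch of a feedback loop: a system with a target, a measurement,
# and a correction proportional to the differential between them.
# All values here are hypothetical illustrations.

def feedback_step(state, target, gain=0.5):
    """One control step: move the state toward the target by a
    fraction (gain) of the current error (the differential)."""
    error = target - state          # the differential
    return state + gain * error    # the corrective action

state = 0.0
target = 10.0
for _ in range(20):
    state = feedback_step(state, target)

print(round(state, 4))
```

After repeated steps the state converges on the target – the same broad pattern, at vastly greater scale and depth, that the stack of systems described above seems to exhibit.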
And exactly where one tries to draw hard boundaries in such complexity is always something of an over-simplistic myth.
Differential survival of self-replicating systems in several different domains seems to have resulted in the complexity of hardware and software systems that delivers the experiential existence each of us seems to experience.
In the control sense, it all comes back to selection of that which best survives, long term, over the greatest array of contexts.
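That notion of “selection of that which best survives, long term, over the greatest array of contexts” can be illustrated with a tiny simulation. Everything here is a hypothetical toy – the two strategies, the survival probabilities and the number of contexts are invented for the sketch – but it shows why breadth across contexts can beat excellence in any single context over the long term:

```python
import random

random.seed(42)

CONTEXTS = 100      # hypothetical environments a system may encounter
GENERATIONS = 50    # hypothetical number of survival rounds

# Two made-up strategies: per-context survival probabilities.
specialist = [0.99] * 10 + [0.30] * 90   # superb in a few contexts, poor elsewhere
generalist = [0.80] * 100                # decent in every context

def long_term_survival(strategy, generations):
    """Probability of surviving `generations` rounds, each round
    played in a randomly drawn context."""
    p = 1.0
    for _ in range(generations):
        p *= strategy[random.randrange(CONTEXTS)]
    return p

print(long_term_survival(specialist, GENERATIONS))
print(long_term_survival(generalist, GENERATIONS))
```

Over many rounds across many contexts, the generalist’s survival probability dwarfs the specialist’s – differential survival selects for what endures across the widest array of contexts, not for what shines in one.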
We seem to be on the cusp of being able to see that our individual interests are intimately linked to the interests of all other people and all other life forms, and that the greatest probability of long-term benefit for any of us requires guaranteeing life and liberty to all, at least to a level that each of us considers reasonable (and that will encompass quite a spectrum; it is not a uniform function). Fairness and equality are very different notions.
So it is a very complex question.