To me, the history of philosophy is interesting, as it seems to show an exploration of many different domains.
It seems that as individuals we come to consciousness whenever we do, and we find ourselves to be embodied, to be acting in what seems to be a reality, to be using language, to be having thoughts and feelings. We start to notice that we can predict some things with relative ease and good reliability, and other things not so much.
We find ourselves to be part of a culture that comes with sets of stories about what happened in the past, how relationships are, how they should be, and what might happen in the future if we don’t think and act in certain ways.
Some people accept the implicit truth of such stories, some of us choose to question assumptions and evidence at different levels.
It seems that the stories of our deep ancestors were framed in the experience sets common to them – such that the complexities of their deep past were explained in terms of agents like themselves, yet bigger and stronger – gods.
As we started to distinguish more things and form bigger groups, we needed to record and transmit information about things more accurately over time and space than human memory allows – so we developed written language.
This seems like it might have started from a simple need for accounting – marks on clay tablets.
Initially counting systems seem to have been based around our physiology – base 10 (our ten digits), or base 6 (five fingers plus a fist, giving six states per hand), which allows two hands to represent 36 distinct values rather than just 10.
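The arithmetic of that two-hand scheme is easy to sketch (a toy illustration only – the function names are mine, and this is not a historical reconstruction): one hand holds the sixes, the other the units, and each hand shows one of six states, from a fist (0) to five fingers (5).

```python
# Toy sketch of two-hand base-6 counting: each hand encodes one of six
# states (fist = 0, then one to five fingers). One hand counts units,
# the other counts sixes, so two hands cover 36 values (0..35) instead
# of the 10 you get when each finger simply counts one.

def hands_from_number(n: int) -> tuple[int, int]:
    """Split n (0..35) into (sixes_hand, units_hand), each 0..5."""
    if not 0 <= n <= 35:
        raise ValueError("two base-6 hands can only represent 0..35")
    return divmod(n, 6)

def number_from_hands(sixes: int, units: int) -> int:
    """Recombine two hand states into the number they encode."""
    return sixes * 6 + units

# Every value from 0 to 35 round-trips through the two-hand encoding.
assert all(number_from_hands(*hands_from_number(n)) == n for n in range(36))
```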
Once someone thought to add a symbol for zero, 0, mathematics as we know it was able to flourish in symbolic form.
Some people started to explore the infinities of mathematics, and they were drawn to the certainty of the relationships embodied in abstract mathematical forms. The simple geometry of Pythagoras was taken by many to embody the notion of absolute truth. People looked to explain things in terms of such simple mathematical *Perfection*.
Then along came people like Leibniz and Newton who gave us calculus and opened the description of more complex mathematical forms, so the ellipse came to replace the circle in astronomical perfection.
But as the experimentalists developed ever better abilities to measure things, the imperfections in *Perfection* became ever more obvious.
So people developed ever more complex mathematical forms, to eliminate the imperfections.
Some people explored the ideas of relationships in their own right, pursuing them into the realms of number theory, set theory, topology, uncertainty, statistics, logic, game theory, and into what clearly becomes infinitely recursive levels of strategy and uncertainty in action. And every time a new theory was found to explain a set of observations, a new set of observations was found that invalidated the absolute *Truth* of the past explanatory system, showing it to be but a useful approximation to something – useful in a context.
So now we have ideas from information theory, network theory, evolutionary theory, all embodied in highly abstract spaces, where each level of abstraction allows for infinite sets of infinities.
And in all of this, many people seem to still hold on to the ancient idea of *Truth*, rather than accepting that the evidence we have today seems to overwhelmingly point to fundamental uncertainty at the level of the specific, yet that allows us to develop profound levels of confidence at the levels of large sets or aggregates.
And some people have taken the insights coming from profoundly abstract thought and applied them in practice to the understanding of us, our modes of understanding, our levels of structure and pattern that deliver this embodied ability to model possibility, and to encode information about such projections into arrangements of symbols that have some non-zero probability of being interpreted as something like the patterns in the mind of the writer.
The great names of recent times might be names like John von Neumann, Alan Turing, Alan Guth, Richard Dawkins, David Snowden, Jordan Peterson, Ray Kurzweil, etc.
We find ourselves in profoundly uncertain times, existing in populations of people with profoundly different levels of awareness of different domains, like the domains of abstract understanding, the domains of self awareness, the domains of self mastery, the domains of integrity, the domains of economics, politics, technology, biology, strategy, law, planning, implementation, the domains of delayed gratification in the pursuit of extremely long term self interest – where such self interest becomes almost indistinguishable from altruism.
We live in a time of exponential change across a range of domains that very few people have any awareness of, much less interest in.
These domains, like all domains, hold the potential for both profound benefit and profound risk.
Much of the thinking that dominates our major cultural structures seems to be based on simplistic formulations that worked near enough to be useful in our past, but are now moving into profoundly dangerous strategic territory, delivering existential-level risk.
Two major errors are obvious.
One is the notion that evolution is based on competition. That is clearly false. Evolution can involve any mix of competitive and cooperative strategies (across any domain of time, space, distinction or abstraction). One thing is clear – competitive strategies tend to dominate in domains of scarcity, and tend to lead to simplicity and increasing risk. If there are sufficient resources for all, then cooperative strategies can lead to expanding complexity and security.
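That scarcity/abundance claim can be given a toy quantitative form with the classic hawk-dove model from game theory (my choice of illustration, not something the argument depends on): V is the value of the contested resource and C the cost of fighting over it, so scarcity corresponds to a high V relative to C, abundance to a low one.

```python
# A toy replicator-dynamics sketch of the hawk-dove game. "Hawk" is the
# purely competitive strategy, "dove" the cooperative one. Under
# scarcity the contested resource is worth as much as a fight (V ~ C),
# under abundance it is worth far less (V << C).

def hawk_equilibrium(v: float, c: float, steps: int = 5000) -> float:
    """Fraction of competitive (hawk) players after replicator dynamics."""
    p = 0.5  # initial hawk fraction in the population
    for _ in range(steps):
        # expected payoffs against the current population mix
        hawk = p * (v - c) / 2 + (1 - p) * v
        dove = (1 - p) * v / 2
        mean = p * hawk + (1 - p) * dove
        p += 0.01 * p * (hawk - mean)       # replicator update
        p = min(max(p, 1e-9), 1 - 1e-9)     # keep p a valid fraction
    return p

scarce = hawk_equilibrium(v=10, c=10)   # hawks take over (p tends to 1)
abundant = hawk_equilibrium(v=1, c=10)  # hawks settle near V/C = 0.1
```

The standard result falls out: the stable hawk fraction is V/C, so as abundance drives the relative value of the contested resource down, the competitive strategy fades from the population.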
That segues into the second major error, that markets are a useful measure of value. Markets require an aspect of scarcity to deliver value. Anything universally abundant has zero market value. Thus markets are internally incentivised to destroy any universal abundance, and to make it a marketable scarcity. Our current laws around Intellectual Property are precisely that. Thus markets tend to destroy security in the presence of full automation.
Our existing economic systems are fundamentally competitive, and fundamentally based in scarcity, and pose severe risk in our current environment of exponentially expanding automation that is capable of delivering universal abundance of a large and rapidly growing set of goods and services.
So philosophy, in as much as remains captured by ideas from our past and is not pointing clearly to the dangers in our present and probable futures – appears to me to be part of the problem, rather than part of the solution.
The tree example is a good one.
A tree left alone will tend to grow just high enough to overshadow the grasses etc, and no higher than its basic genetic programming requires, thus reducing the cost of transporting material to the top branches, and reducing the risk of damage by high winds etc.
Trees competing for light with other trees grow to great height because of that competition, but in doing so the increased metabolic cost of lifting water, the increased risk from earthquake and wind, etc, all raise the probability of catastrophic failure in the longer term – yet the risks of the moment dominate.
So if you plant trees at a wide spacing, they tend to grow low, and be resistant to earthquake, storm, drought, plague etc.
If you plant them close, then they become more vulnerable to failure from a wide array of risk modalities (climatic, geological, biological).
I own a small forestry block, and have for several decades been an acute observer of failure modalities in forestry systems along the 300 miles or so of the Island I live on. I am also engaged in the highest levels of biosecurity and biodiversity strategy in this country.
And neither of those senses was what I was pointing to in my initial assertion.
What I was pointing to was the more abstract realm of long term evolutionary strategies.
If you look at evolution on the very long term, then it is clear that nearly pure competitive systems maintained the simplicity of bacteria for several billion years. Then, in a very unusual set of circumstances somewhere over a billion years ago, sets of bacteria got together to form the colonial organism that was the first eukaryotic cell. That strategic level of cooperation (which logically must have occurred in a very uncommon situation, where the risks to survival came far more from factors outside the population of things you could breed with than from within it) allowed for a flowering of complexity that wasn’t possible for bacteria acting solely as individuals. In a sense it led to a loss of individuality for some classes of individuals, but to the emergence of new levels of complexity in the new level of individuality so created.
Evolution is not necessarily about competition.
Evolution is about differential survival.
Often competition is a major factor in differential survival – but not always.
Sometimes cooperation is the major factor in differential survival.
In conditions where cooperation dominates, we observe expanding levels of complexity. Without it, the drive is to optimise for simplicity.
That seems to clearly be recursively applicable across all levels of abstraction, all levels of system interaction.
We see in ascidians (sea squirts) that once they attach to a substrate, their brains are reabsorbed and the metabolic energy once used by brains in moving through a complex world, is given over to reproduction in a static world of an individual fixed to a substrate.
All this leads to very complex multidimensional probability spaces when considering the multiple levels of simultaneously instantiated strategy in action that seem to be embodied in all the levels of complexity instantiated on this planet currently (some 20+ levels in the systems I am employing in this specific conversation).
It is a recursive notion.
It takes time to push it deeply to high orders of abstraction, such that it becomes intuitively useful in conversations such as this.
Just to get to the level of cell requires several levels of cooperation.
It is a long conversation.
The biochemical and systemic aspects of evolution have fascinated me for over 50 years, and I have followed developments with interest.
Yes – bacterial cells can cooperate in some contexts, and in a deep sense we are exactly that – many levels of it.
And it is the levels thing that is important.
Bacteria need to cooperate to the sorts of levels that instantiate us for literature to emerge as a systemic property.
It is profoundly complex, and there is also some profound systemic simplicity in the major trends – as with most things – all probability and contextually based (of course).
And of course, anything to do with reality has an aspect of conjecture, and the more deeply separated in time the greater that must be (and we’ve been down that bunny hole a few times in the last few months).
Thinking “intention to survive” isn’t the way to think about it.
Just think, “what survived”.
Then consider all the different types of context, and the risk/reward ratio of each sort of strategy in each sort of context. Then think about the sorts of mechanisms for context sensitive strategy selection, and the probability that each of those mechanisms will select a strategy appropriate to context.
When you start to build that sort of probability landscape, then you start to get “a feel” for the sorts of strategic associations that you are likely to find present at any level of evolved systems.
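One way to make that probability landscape concrete is a minimal "what survived" Monte Carlo sketch (all the survival numbers below are invented for illustration): present each lineage with a sequence of random contexts, let its strategy-selection mechanism pick a response, and simply count what survived.

```python
import random

# Assumed survival odds for each (context, strategy) pairing - pure
# illustration, not measured data. Competing pays in scarcity,
# cooperating pays in abundance.
SURVIVAL = {
    ("scarce", "compete"): 0.9,
    ("scarce", "cooperate"): 0.5,
    ("abundant", "compete"): 0.6,
    ("abundant", "cooperate"): 0.95,
}

def lineage_survives(pick_strategy, rounds=10, rng=random) -> bool:
    """One lineage faces `rounds` random contexts; any failure ends it."""
    for _ in range(rounds):
        context = rng.choice(["scarce", "abundant"])
        strategy = pick_strategy(context)
        if rng.random() > SURVIVAL[(context, strategy)]:
            return False
    return True

def survival_rate(pick_strategy, trials=20000, seed=1) -> float:
    """Fraction of lineages using this mechanism that survive."""
    rng = random.Random(seed)
    return sum(lineage_survives(pick_strategy, rng=rng)
               for _ in range(trials)) / trials

always_compete = survival_rate(lambda ctx: "compete")
always_cooperate = survival_rate(lambda ctx: "cooperate")
context_matched = survival_rate(
    lambda ctx: "compete" if ctx == "scarce" else "cooperate")
# Under these assumed odds, the context-sensitive selection mechanism
# survives far more often than either fixed strategy.
```

Nothing here "intends" anything – each mechanism just gets run against contexts, and the landscape of survival rates is what remains.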
We didn’t have the option of “intending to survive” until very recently in our evolutionary history, and in a sense that intention is only a “thin skin” overlain on the many other levels of genetic and memetic systems within us. To the degree that we can recognise the influence of those older systems, to that degree we gain some influence over their expression probabilities.
Yes certainly – we inherit many levels of behaviour, genetic and cultural (in the widest sense of cultural – including intellectual culture), and that is essential in a sense, none of us is smart enough to entirely create language and culture on our own – we need the combined intelligence of all those who have gone before to pick and choose from, then add our little bit of creativity to that mix.
Yes, certainly – many of those systems are not always triggered in appropriate contexts, and lead to less than optimal outcomes.
In a very real sense, that is how evolution has always worked – create massive diversity, see what survives best in each generation, repeat.
The very real issue is that we now have the tools to make mistakes that eliminate all diversity – not just our tribe, or our district, but the entire planet load of human diversity. Those are existential-level risks, and there are far too many of them. We don’t have the option of breeding from whatever survives.
These are not trivial issues.
They are deeply systemic issues, buried deeply in the implicit assumptions that very few people examine or think about.
I have two core values – individual life, and individual liberty, and I acknowledge the necessity of those being expressed responsibly within ecological and social contexts – so not naive freedom, but responsible freedom, that isn’t an unreasonable risk to others (and yes the definition of unreasonable is intentionally fuzzy, and must involve sets of conversations and negotiations).
In today’s context, I see that the scarcity based value measure created by markets is an existential level risk – in the planning sense rather than the distribution sense. A useful transition strategy would seem to be some sort of Universal Basic Income, and it is only a transition strategy, not a stable final strategy.
I see the twin tyrannies (majority and minority) as high risk issues, and the only effective countermeasure is distributed governance, all levels, and distributed public information – a lack of privacy in public spaces.
So I understand how evolution has worked in the past, and that isn’t stable into our future.
We need to step the game up a couple of levels.
So yes – intention to survive is relatively recent, and not all that common across all levels of society – most operate on older, less intentional systems most of the time, and all of us do so some of the time (no matter how aware we think ourselves to be). It is an extremely complex problem space, and any form of centralisation is by definition high risk. There must be massive decentralisation, distributed decision making, distributed risk assessment and mitigation strategies – all levels. And I acknowledge the role that markets have played in doing that (as per von Hayek et al), and we need alternatives with different incentive structures.