Why is an ideal free distribution of a species rarely (if ever) observed in nature?

[ 23/April/21 ]

How would one ever define a thing like “ideal free distribution”?

The idea is predicated on a fixed fitness landscape.
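For readers unfamiliar with the term: in its textbook form, the ideal free distribution predicts, via the "habitat matching rule", that under idealised assumptions (perfect information, no cost of movement, fixed resource inputs) consumers spread themselves across patches in proportion to resource availability, equalising per-capita intake. A minimal sketch of that rule; the function name and all the numbers are invented purely for illustration:

```python
def ifd_allocation(resource_inputs, n_consumers):
    """Allocate consumers across patches in proportion to resource input
    (the habitat matching rule of ideal free distribution theory)."""
    total = sum(resource_inputs)
    return [n_consumers * r / total for r in resource_inputs]

patches = [60.0, 30.0, 10.0]             # resource input per patch (arbitrary units)
consumers = ifd_allocation(patches, 100)
print(consumers)                         # [60.0, 30.0, 10.0]

# At this allocation, per-capita intake is equal in every patch,
# which is the "ideal free" equilibrium the theory predicts:
intakes = [r / n for r, n in zip(patches, consumers)]
print(intakes)                           # [1.0, 1.0, 1.0]
```

Note that the whole calculation hangs on the `resource_inputs` being fixed; the argument below is that in nature they never are.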

The whole notion of such a thing seems to be counter to the very idea of evolution.

Looked at systemically, evolution seems to be a process of semi-random search across the space of possible systems for variants that are able to survive the vast array of changes that constantly happen, with various frequencies.

What sort of changes?

Environmental – weather, climate, weathering of rocks, floods, droughts, solar storms, volcanism, earthquakes, ice ages, sea level changes, landslides, comet and meteor strikes, Milankovitch cycles…

Then there are all the other biological agents, also doing their own versions of semi-random search for survivable strategies: viruses, bacteria, parasites, food organisms, predatory organisms, …

The state of nature is one of constant change, at various rates and with various frequencies. Genetic systems have to maintain sufficient variability for some instances to survive all of those things. When you do the math on that, on the various spectra of variation, it is mind-numbingly complex. To call it a “fitness landscape” is definitely a misnomer: while it is certainly multidimensional, it is more like a “fitness ocean storm”, in that the relationships between the various dimensions of “fitness” are themselves constantly changing. The “landscape” (ocean-scape) itself is constantly varying.

It is basically our relatively short lives, generally poor memories, and even shorter attention spans that give most people the idea that there is any sort of normalcy at all. There really isn’t much. Mostly what we think of as “normal” is the result of deep biases in our neural nets to simplify the things that we observe.

Every species is a set of systems, of various classes, doing random search across various sets of tuning parameters. Those that survive have a chance to contribute to the “next round of the game”.
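That “semi-random search with survivors feeding the next round” picture can be caricatured in a few lines of code: a population of parameter values mutates at random each generation, and survival is judged against a fitness criterion that itself drifts, so the “landscape” never holds still. Everything here (the one-dimensional parameter, the drift and mutation rates, the truncation selection) is an invented toy, not a model of any real species:

```python
import random

def fitness(x, target):
    # Closer to the (moving) target is fitter. Purely illustrative.
    return -abs(x - target)

def evolve(generations=200, pop_size=50, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(-10, 10) for _ in range(pop_size)]
    target = 0.0
    for _ in range(generations):
        target += rng.gauss(0, 0.1)                  # the "landscape" itself drifts
        offspring = [x + rng.gauss(0, 0.5) for x in population]  # semi-random search
        everyone = population + offspring
        # Those that survive contribute to the "next round of the game".
        everyone.sort(key=lambda x: fitness(x, target), reverse=True)
        population = everyone[:pop_size]
    return population, target

pop, target = evolve()
print(sum(pop) / len(pop), target)  # the surviving population tracks the drifting target
```

The design point of the toy is the drifting `target`: delete that one line and you have the fixed landscape that the ideal free distribution assumes.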

That is life.

At least it was, until we came along, with our ability to form abstract models of abstract systems, and to use those to select preferred possibilities from among the candidate systems and parameters that we can conceive of, at whatever level we are able to do that. That adds entirely new levels to the speed with which we can search classes of systemic spaces for survivable possibilities.

Most of us suffer from having our valence (preference) systems tuned by genetics to patterns from our deep past; that is not so good in the presence of exponential novelty.

Ours is an entirely new game in a sense, even as it shares many systemic parallels with the old genetic game-space – at least in the lower levels of the systems; and to a degree at all levels – if one is capable of sufficient levels of abstraction. And of course, as all things must be, it is built on the platform of the old sets of game spaces, with all the necessary complexities in those.

One key to understanding the evolution of complexity is understanding that all new levels of complexity are built upon, and predicated upon, new levels of cooperation. And raw cooperation is always vulnerable to exploitation, and thus requires ecosystems of cheat detection and mitigation systems, like our legal and ethical systems, and beyond. Deeply complex and eternally varying.

So the idea of an “ideal free distribution” is not a useful one.

What one needs to look at is the reliability and utility of mechanisms of search.

[followed by David Smith – … break it down to simple terms …]

But that is the problem: people looking for simplicity in places where the evidence shows, beyond any reasonable doubt, that it does not exist!

When dealing with evolution, one has to get real about the complexity present, which is profound.

Even the simplest of living things, a single RNA strand in a very specific environment, is complex in ways that take years to even begin to build a reasonable approximation of.

The simplest of cells is beyond the ability of any human mind to deal with in detail; it really is that complex. We can (and must) build useful approximations to the major classes of systems present, and we also need to be very aware that the complexity present is always, necessarily, far greater than our models.

So yes, we need to start with simple models, and we also need to be very explicit that they are simple models, and that the real thing will always behave in ways that the model does not predict. In some sets of contexts the model will give useful outputs (that is the definition of a useful model). And we always need to be aware that all models have limits of utility (another level of sets of constraints, almost always probability based).

The very idea of equations balancing is an assumption about the nature of reality that does not seem to be supported by the data. Reality is constantly changing in every significant metric. Every major metric varies across all dimensions of ranges with some sets of probabilities, which are often influenced by other factors that are themselves varying with other sets of probabilities, …

I love math.

I love models.

I love systems.

The source code for the single biggest computer system that I have written and still maintain is larger than the Bible (and over 95% of the code in it came out of my head, via my fingers, along with all the manuals and documentation). I have written hundreds of systems, a language, lots of stuff. I trained as a biochemist. I have an autistic’s abilities with math and systems. I used to expect 100% correct in math tests (and to be first finished). I got 100% in my test for certification as a fisheries officer, and 100% in my navigation paper for my pilot’s license. My brain does simple math instantly and complex math far faster than most. And I have essentially been working alone, reading the work of others, critiquing it, and re-assembling the concept sets, for over 50 years; from quantum physics to relativity to cosmology to geology to biochemistry to modeling theory to multidimensional topologies to complexity theory to strategy to ecology. In my head it is all systems and abstractions, and all of the mathematical models seem to me to be “useful approximations in some contexts” to whatever reality actually is. If Garrett Lisi’s conjectures are somewhere near accurate, then reality is complex at a level that no real computational entity can deal with except through contextually useful approximations.

That is where I tend to get a bit insistent.

We need to have enough humility and reality to start with the explicit statement that it seems very probable that all models are only contextually useful approximations to whatever it is that reality actually is.

We all need to be alert for any indications that we have exceeded the boundaries of the context in which a previously reliable model produces reliable results – always!

If we start with that.

If we are explicitly clear about those limitations, then yes, certainly, teach students how to balance equations as useful sets of tools in many contexts.

But nothing more than that.

We need to avoid giving anyone the idea that any set of models is ever 100% effective. Reality is demonstrably far more complex and fundamentally uncertain and unknowable than that. Such hubris delivers existential level risk if carried into adulthood.

And for complexity such as us to exist, it is logically necessary that some sets of systems and patterns can be very reliable indeed in some sets of contexts. That too is a given. The really interesting bits are starting to get reasonable models of where those boundaries actually are and how they vary with context.

If one approaches it in that fashion, then it becomes abundantly clear to everyone that they need to be personally responsible for detecting and maintaining those necessary sets of boundaries (i.e. keeping their choices within survivable limits), at every level of structure, understanding, and awareness, to the best of our limited and fallible abilities, because our very existence depends upon them.

Without everyone being fully conscious (at every level) of the necessity for such responsibility, then we are all at risk.

Without everyone being conscious that all decisions necessarily contain uncertainty, overconfidence leads to systemic collapse via oversimplification.

For me, that is the most fundamental of ideas to come out of ecology – at every level. The idea of survivability within constantly varying sets and levels of constraints, and the necessity for degrees of humility in making such assessments.

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) - see www.tedhowardnz.com/money
This entry was posted in Ideas, Nature, understanding.

Comment and critique welcome
