Lex interviews Demis Hassabis

Demis Hassabis: DeepMind – AI, Superintelligence & the Future of Humanity | Lex Fridman Podcast #299

[ 2/July/22 ]

Brilliant interview (again) – thanks Lex.

For Demis:

For me, the best definition of life is recursive levels of “search”, across “spaces”, for the survivable. For a fully loaded processor, random search is most efficient. Biology must have approximated that solution, recursively.
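
A toy sketch of the intuition behind that claim (my construction, not anything from the interview – the space size, budget and “survivable” region are all illustrative assumptions): with a fixed evaluation budget and no exploitable structure, any fixed deterministic search order can be arbitrarily unlucky about where the viable region happens to sit, while uniform random sampling’s expected hit count depends only on the density of viable states.

```python
import random

SPACE = 1_000_000                           # size of the search space (illustrative)
BUDGET = 10_000                             # evaluations affordable per "time slice"
survivable = set(range(700_000, 700_500))   # a small, clustered viable region (assumed)

def ordered_search():
    """Enumerate from 0 upward until the budget is spent."""
    return sum(1 for x in range(BUDGET) if x in survivable)

def random_search():
    """Sample uniformly at random until the budget is spent."""
    return sum(1 for _ in range(BUDGET) if random.randrange(SPACE) in survivable)

trials = 200
print("ordered search hits:", sum(ordered_search() for _ in range(trials)) / trials)
print("random  search hits:", sum(random_search() for _ in range(trials)) / trials)
# Ordered search finds nothing here; random search averages about
# BUDGET * len(survivable) / SPACE hits, wherever the viable region happens to be.
```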

Are you familiar with Seth Grant’s work, integrated with that of Jeff Hawkins, on recursive sets of pattern-prediction world models reaching consensus – first to deliver perceptions, then to focus consciousness?

A practical problem – deep geothermal power – deep drilling systems. Super-volcanos and flood basalts are real issues – we may as well make use of the energy as we solve them.

Sorry Demis – not behind us. My prime candidate for the Great Filter: an evolved tendency (biases in neural networks) to prefer simple solutions, preventing acceptance that cooperation is foundational to the emergence and survival of all levels of complexity – locking on to the simple idea that evolution is all about competition, and being unable to see past it. That results in competitive systems destroying some of their own required constraints, leading to systemic collapse. The hardest step is making that jump, then creating a sufficiently robust ecosystem of cheat detection and mitigation systems to recursively sustain cooperation.

The recursive theme in biological complexity is this – a context that allows the emergence of sufficiently robust cheat detection and mitigation systems to allow that emergent level of cooperation and complexity to survive and evolve.
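
A toy evolutionary-game sketch of that theme (my own construction – every payoff number, and the detection probability, is an illustrative assumption, not data): in a simple donation game, cheats invade an established cooperative population unless there is some probability of detection and a fine, in which case cooperation holds.

```python
def step(x, p, b=3.0, c=1.0, f=2.0, k=0.5, dt=0.1):
    """One replicator-dynamics step for the cooperator fraction x.

    p: probability a cooperator detects (and punishes) a cheat.
    b: benefit received from a cooperator; c: cost of cooperating;
    f: fine imposed on a detected cheat; k: cost of imposing the fine.
    All values are illustrative assumptions.
    """
    # Expected payoffs against a population with cooperator fraction x.
    payoff_coop = x * (b - c) + (1 - x) * (-(1 - p) * c - p * k)
    payoff_cheat = x * ((1 - p) * b - p * f) + (1 - x) * 0.0
    x_new = x + dt * x * (1 - x) * (payoff_coop - payoff_cheat)
    return min(1.0, max(0.0, x_new))

for p in (0.0, 0.5):
    x = 0.9                      # start from a mostly cooperative population
    for _ in range(500):
        x = step(x, p)
    print(f"detection probability {p}: cooperator fraction settles near {x:.2f}")
# With no detection (p=0.0) cheats invade and cooperation collapses toward 0;
# with p=0.5 and a modest fine, the established cooperative population holds.
```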

On consciousness – it is a deeply complex set of systems, and it seems to have a basis in Jaak Panksepp’s work. That forms the foundation; then Hawkins’ world models, with Grant’s protein pattern integration/search functions, provide the generality in an MCMC analog.

On dogs – no – watch – Chaser the Dog Shows Off Her Smarts to Neil deGrasse Tyson. We can generalise faster and therefore more broadly and deeply over time. Language is a big part of that (huge – for transmission of concepts across space and time – whole planet and many generations – thus broadening the boundaries of search space).

Agree that the step to sapience is a big one, and once you have a language model with recursion built in, it is very likely to bootstrap self aware entities. That is something possible as soon as declarative language is available to a generalised network with a moral model.

Good and evil is a very simple binary distinction, utterly inappropriate to complex intelligence. Something much more is required. Something that values all instances of intelligence, and attempts to optimise a balance between security and freedom, to the degree that an agent displays appropriate levels of responsibility and respect for the levels and instances of diversity present (that are not an actual unreasonable threat); while simultaneously acknowledging multiple classes of fundamental uncertainty and unknowability.

I share all the dreams of Demis.
The reduction of multiple levels of xrisk is an essential part of that critical path.

Cooperation is easiest to stabilise with a shared threat, until either one reaches sufficient awareness to see the fundamental strategic requirement for it for long term survival, or sufficiently robust sets of cheat detection and mitigation systems are in place.

Agree – being humble is required – and being a generalist is part of that.

Why are we here? Search happened – and it got recursive in survival space.

What is the true nature of reality? Something like – fundamental balance between order and chaos, allowing “choice” at the boundaries. Life as search for survivable systems!

Again – thanks Lex – Great interview. And thank you Demis – for all you have done.

Posted in Our Future, Technology

Nihilism and design

Daniels S posted a link to a new consilience project post.

Technology is Not Values Neutral: Ending the Reign of Nihilistic Design

[ 29/June/22 ]

Kind of – but to me it is a very biased view.

Yes – certainly, in complex systems all things influence all other things, often in ways that were not previously expected.

I find the sentence “But naive design has also brought us to the brink of catastrophic risks due to a principled neglect of concern for possible negative second- and third-order effects, in both physical and psycho-social domains” to be more problem than solution.

Rather than naive design, I attribute the primary influence to the social dogma present that is no longer fit for purpose.

The naive notion that markets can and will efficiently solve all problems is one of those. Markets are prone to a vast set of pathologies, and the idea of marketing, of deliberately creating false impressions to persuade people to buy things they would not otherwise have bought, is one of those pathologies.

The issue is deeply more complex than this article tends to imply.

Certainly, the ways we use technologies have impacts – that is always the case.

A number of things are now clear beyond any shadow of reasonable doubt:

1/ that human beings are complex, more complex than any human awareness can possibly appreciate;

2/ that ancient stories that served our ancestors reasonably well in their relatively low technology and localised existence no longer work. We now have reasonable models of how life evolved, and it had little or nothing to do with any idea of god or gods. At least some of us now understand the foundational role of cooperation and fairness in the emergence and survival of complexity.

So blaming values on technology, when it really needs to be attributed largely to the stories that people accept, and the assumptions implicit within them that most fail to even notice, really does not seem entirely honest to me. It seems likely to be something of a smokescreen for some hidden agenda, unless it is just simple error and ignorance.

A later sentence “The idea is that our values come from churches, schools, and families, and these institutions and social processes then impact how technologies are used” is to me most of the issue. It leaves out choice. It leaves out personal responsibility. It implicitly accepts that human beings are nothing more than the stories they are fed.

That, to me, is the greatest and worst lie.

We are all capable of being much more than that.
We are all capable of asking questions, and making choices. And anyone who does that is going to make mistakes and will end up outside of “social agreement” on some issues. That is part of being a responsible thinking entity.

Accept it.

Do not let anyone take that freedom of choice, and the responsibility that necessarily comes with any real freedom, away.

Sure, churches, schools, families can have influences upon us, and we are each capable of seeing those for what they are and choosing responsibility greater than that.

As to the proclaimed “Technological Orthodoxy”, I don’t know what idiot thought “Humans fully understand the technologies they create”. That is just utter nonsense to anyone with even a passing acquaintance with complexity theory or quantum mechanics. Fundamental uncertainty is a foundational aspect of the modern understanding of complexity.

As for anything being firmly under anyone’s control – take a rally car out and drive it down a gravel road at 250 km/hr, and you will understand the difference between control and influence within constraints. It is influence (the soft form of control – containing uncertainties, necessarily) all the way up every level of the complexity stack.

Sure – technologies are themselves neutral, but the use of any technology always and necessarily has unintended consequences. Part of being responsible in complex systems is always being alert for unintended consequences, and adjusting practices accordingly. Markets do not necessarily promote that sort of responsibility, and it is what is demanded of us if we are to survive.

Technological progress is necessary if we wish to survive, without it we will at some uncertain point in the future share the fate of the dinosaurs. But technological progress does not necessarily require increasing size. It can be very high tech and very distributed, particularly once molecular level manufacturing comes on stream. And that is antithetical to capitalist dogma.

That article is so overly simplistic that I find it both annoying and wrong, even if it is better than the things it is critiquing – things so wrong that I had discarded them before I left my teens (and that was half a century ago).

It just is not helpful.

There are no viable solutions in that region of the solution space.

Something much more is demanded of us all.

Choice.

Responsibility.

Cooperation.

Respect for diversity.

Those values must be at the core of any survivable system. Of that I am confident beyond any shadow of reasonable doubt.

Only with those values firmly in place is it safe to develop the sorts of technologies that we really need for survival.

[followed by]

Hi Zachary,

Some truth in what you say, and it is possible for agents to enquire, to test, to observe, to build models.

There are some great ideas in written form if one can begin to explore the spaces of understanding and strategy and modeling. And that is a deep and broad journey. Many of the best modelling tools are mathematical, and one needs to explore many areas to begin to see the boundary regions and the failure modalities. Modern physics uses some complex mathematical ideas, and many of them (like the Schrödinger equation) can really only be solved in a practical way by making assumptions about low energy states and low degrees of influence and freedom (and such things do often work).
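
For reference – a standard statement, nothing specific to this discussion – the equation mentioned above, in its time-independent single-particle form, is:

```latex
% Time-independent Schrödinger equation for a single particle in a potential V:
\hat{H}\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r}),
\qquad
\hat{H} = -\frac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r})
```

Exact closed-form solutions exist only for a handful of idealised potentials (hydrogen-like atoms, the harmonic oscillator, a particle in a box); almost everything else, and certainly anything many-body, has to be attacked with approximations such as perturbation theory or mean-field methods – which is the point about low-energy assumptions made above.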

If one does a reasonable scan of the history of understanding, and one has developed a reasonable understanding of strategy and complex systems and quantum mechanics and biochemistry; then one can read something like Plato’s Republic as a warning, a 3rd order abstraction of the dangers of over simplification.

The more one looks at history, at the ideas from various traditions, theological, cultural, philosophical, mathematical, etc then one can start to get a feeling for the sorts of trajectories often present in the complex systems, and one can start to see those sorts of boundary regions where it is possible to generate something that reasonably approximates freedom.

If one spends a bit of time with modeling systems, particularly with MCMC simulations, then one can use that sort of general principle, applied to random search beyond accepted boundaries, to start to build levels of awareness and understanding that are not commonly discussed (if ever).
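
To make the MCMC reference concrete, here is a minimal Metropolis–Hastings sketch (in Python; my own illustration, with the target density and step size chosen purely for demonstration), showing how occasionally accepting “worse” proposals lets a random walk wander beyond the region it starts in:

```python
import math
import random

def metropolis_hastings(log_prob, x0, n_steps=20_000, step_size=1.0):
    """Minimal random-walk Metropolis sampler (illustrative sketch only).

    log_prob: log of an (unnormalised) target density.
    Returns the list of visited points (the chain).
    """
    x, chain = x0, [x0]
    for _ in range(n_steps):
        proposal = x + random.gauss(0.0, step_size)   # random local jump
        log_alpha = log_prob(proposal) - log_prob(x)
        # Occasionally accepting a "worse" proposal is what lets the walk
        # cross barriers and explore beyond the region it started in.
        if log_alpha >= 0 or random.random() < math.exp(log_alpha):
            x = proposal
        chain.append(x)
    return chain

# Bimodal target: the chain must cross a low-probability barrier near 0
# to reach the second mode it did not start in.
log_target = lambda x: math.log(math.exp(-(x - 2.0) ** 2) + math.exp(-(x + 2.0) ** 2))
chain = metropolis_hastings(log_target, x0=-2.0)
print("fraction of samples in the far mode:",
      round(sum(1 for s in chain if s > 0.0) / len(chain), 2))
```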

As a teenager, I was fascinated by Asimov’s Foundation series, and van Vogt’s concept of a Nexial Institute.

Once I discovered that there is an infinite realm of possible logics beyond boolean logic, beyond the mathematics commonly conceived, beyond even Goedel, then I started to see some of the mechanisms that evolution has encoded within us that allow for reasonable approximations to random search, and where boundary regions between order and chaos allow for reasonable approximations to free will.

And the deeper and more abstractly I explored those notions, the clearer it became that any level of freedom, without appropriate levels of responsibility and cooperation, is necessarily destructive of both liberty and security. Those conclusions hold at every level of logic and abstraction I have tested – and I have, on a few occasions, achieved double digit abstractions.

So I am just one person. One high IQ autistic spectrum individual, who has essentially pursued my own interests and paths and patterns of understanding for over 60 years; and I am about as far from consensus as it is possible to get; and I often work hard to achieve some level of consensus with others in the various fora I engage in (where it seems that such things might be usefully approximated).

Since 1974, since completing undergraduate biochemistry and becoming aware that indefinite life extension was possible, searching the space of possible strategic incentive structures for classes of systems and institutions that give potentially long lived individuals a reasonable probability of living a very long time with reasonable degrees of freedom has been the key background context to my enquiries (and in a sense it started long before that – reading Heinlein in primary school might have been an early influence on my thinking).

I often find it nearly impossible to work out why anyone else does anything, as I have invalidated the vast bulk of assumptions that most people accept without question, and they no longer exist in my understandings (except as historical footnotes).

Posted in Our Future, Philosophy, Politics, understanding

A new page on Epistemology and survival


Posted in Our Future, understanding

On Politics

Added a new Page above:
https://tedhowardnz.wordpress.com/political-action/on-politics/

Posted in Our Future, Politics

On narcissism and classification

Daniel posted a link to The 15 types of Narcissism (and their characteristics)

[ 18/June/22 ]

Hi Daniel,

A few years ago a psychologist gave me the label “Autistic” and my wife the label “Neurotypical”. To some extent, those labels have been useful, but they seem to generate almost as many issues as they solve.

We all need systems of understanding and classification, but all such systems become traps, tending to hide the subtlety between classification states, and thus potentially hiding entire realms of abstraction or domains of relatedness.

I could easily see many “neurotypicals” stuffing “autistics” into a narcissistic classification, and never really being willing to question it thereafter.

And the years I spent studying biochemistry, the 50 years since in which I have kept skimming abstracts and occasionally reading papers, and the hours spent listening to various speakers as I drive places (the nearest big city is 2.2 hours’ drive away, and some weeks I might go there 2 or 3 times for meetings, and our small farm is 7 hours’ drive away), all build upon the idea that human beings are more complex than any computational entity can deal with in anything remotely approaching real time. The evidence from neuroscience over the last 5 or so years is that the human brain is capable of searching a space of some 10^50 patterns per second (using intrasynaptic protein complexes as pattern integrators).

So yes, we do have major and minor systems, and yes there are multiple levels of attributes of those systems that can be issues, particularly if they are denied, or they are over simplified; but over simplification, particularly the over-reliance on any system of categorisation, holds at least as many issues as it solves for. {In this sense, the classical notion of rationality is a simplistic trap, even as it can be very powerful in some contexts.}

We all need to be willing to dwell in uncertainty with respect to anything and everything from time to time (as contexts allow), so that we can have at least some finite (however limited) probability of going beyond the systems of understanding and classification that define our current experience of being.

And it can be really hard, when there is so little shared experience with others that communication about those things one finds truly interesting has such low probability that most would think it impossible.

So yes, certainly, be alert for pathologies, most particularly within ourselves, but also within those within our networks; and always be willing to grant that no one is ever one thing all the time – we are all necessarily much more complex than that, and we are all capable of being more (or less) than we normally are.

The Aleksandr Solzhenitsyn quote has been with me a lot in the last few months, as it seems to be that he was very close to truth when he wrote “The line separating good and evil passes not through states, nor between classes, nor between political parties either – but right through every human heart…even within hearts overwhelmed by evil, one small bridgehead of good is retained. And even in the best of all hearts, there remains…an uprooted small corner of evil.”

In as much as good and evil have any reality, they seem to me to be limiting cases of spectra of deeply parallel and dimensional complexity.

I guess my key message is: classification is essential, and overdoing it, or being over confident about it, creates at least as many issues as it solves.

Posted in Brain Science, Philosophy, understanding

Matthew 26:52

[ 17/June/22 “And Jesus said unto Peter, Put down your sword and pick up an AR-15, for it has vastly superior firepower.”]

{King James version -}”Then said Jesus unto him, Put up again thy sword into his place: for all they that take the sword shall perish with the sword” reads more to me something like “you need to have your weapons, but they are to be used only in the most dire of needs. Resorting to swords early in any conflict means a lot of people die needlessly”.

And I get it was a joke, and at the same time it is a deep lesson that strategists in every nation, every army and in every economic institution need to understand.

Looked at not from a biblical perspective, but from the perspective of someone interested in maximising both security and freedom from an evolutionary strategic perspective, the message is essentially the same – any form of competition that is not firmly built on a cooperative base is necessarily destructive of both security and liberty. And any form of liberty without appropriate levels of responsibility is similarly necessarily destructive.

Every human is more complex than any human can possibly understand in detail (that is proven beyond any shadow of reasonable doubt), thus we must all, of necessity, make simplifications; but over simplifying anything to do with people necessarily leads to failure.

The only path with any reasonable probability of survival is one where all individuals practice responsibility, to the best of our limited and fallible abilities – and we are all, necessarily, going to make mistakes from time to time. The first necessary step in cleaning up after making a mistake is admitting that a mistake has been made – and that is hard for everyone, and very hard for some. And in order to clean stuff up, you need to be alive.

Posted in Humour, Politics, understanding

Facebook link to Greenpeace – Stop the Feedlots

[ 17/June/22 ]

Feedlots, in and of themselves, are not the problem.

If done well, feedlots can eliminate the problem.
If done badly, they can make the problem worse.

The issue is not feedlots in and of themselves as a concept, it is how the specifics of the particular system actually operate.

The issue of nitrate pollution of groundwater from intensive dairy has two major aspects to it:

One is that cows tend to stand still when they pee, and they drop quite a few liters of urine in one spot on the ground. This is more urine than the plants in that place can rapidly use, so if there is a rain event, then some of that nitrogen can get flushed through the root zone of the soil and enter the groundwater system. Putting cows in feed lots means that we can capture all of that urine (and other waste products – faeces, methane, whatever) and treat them appropriately, before returning them to the pasture system in appropriate concentrations such that they are used effectively by the pasture. There is no guarantee that a feedlot system will do that well, but there is potential for it to be done far more efficiently and effectively than is possible with any form of open pasture grazing system. And there are a lot of ways to do it badly.

The second major class of issues for groundwater (and to a degree surface water runoff) pollution by nitrates is nitrogen forcing of production. One of the many influences on plant growth is the concentration of available nitrogen near the plant roots; if this is high, it tends to promote growth. The issue with running high nitrogen levels throughout the soil profile where the plant roots are is that any excess of water in the soil system will tend to push some of that nitrogen below the lowest of the roots and into groundwater.

One of the issues with soil is that it is never a perfectly homogeneous thing; it is variable at every scale you look at. So it doesn’t matter how perfectly you try to apply water to a soil profile, there will always be places where it flows through faster than you want it to, and other places where it goes too slowly (so some areas deep in the soil will be too dry, and others too wet, as against the optimum you are trying to achieve – if in fact the operator is trying to achieve the optimum for minimum environmental impact, as against simply going for maximum production at minimum cost). Perversely, attempts to “optimise” water use tend to make this aspect of the problem much worse, as when such flushing-to-groundwater events do happen, they tend to be small and at high concentration, rather than large and at lower concentration.

Lowering the amount of free nitrogen in the soil does reduce the total protein production of the pasture. However it is done, there must be an acceptance of some degree of reduced productivity or increased water use (by injection of flushing flows below the root zone on an as-required basis), if groundwater nitrate levels are to be kept low (and by low I mean below 1, not below 7).
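
A toy root-zone mass balance may make the mechanism clearer (every number below is invented for illustration only – uptake rate, rain days, leach fraction and totals are not field data): the same total nitrogen leaches heavily when dumped in one concentrated patch ahead of a rain event, and hardly at all when spread evenly at a rate the pasture can keep up with – which is the opportunity a well-run capture-and-return system is exploiting.

```python
# Toy root-zone nitrogen balance. All numbers are illustrative assumptions.
DAYS = 30
UPTAKE_PER_DAY = 2.0        # assumed maximum N the pasture can use per day
RAIN_DAYS = {10, 20}        # assumed drainage events
LEACH_FRACTION = 0.5        # assumed fraction of surplus soil N flushed per event

def leached(applications):
    """applications: dict of day -> N applied that day. Returns N leached."""
    soil_n, lost = 0.0, 0.0
    for day in range(DAYS):
        soil_n += applications.get(day, 0.0)
        soil_n = max(0.0, soil_n - UPTAKE_PER_DAY)   # plant uptake, capped per day
        if day in RAIN_DAYS:                         # water pushes surplus past roots
            flushed = soil_n * LEACH_FRACTION
            soil_n -= flushed
            lost += flushed
    return lost

TOTAL_N = 50.0
patch = {0: TOTAL_N}                                 # one concentrated urine patch
spread = {d: TOTAL_N / DAYS for d in range(DAYS)}    # same N applied evenly
print("leached from concentrated patch:", round(leached(patch), 1))
print("leached from even application :", round(leached(spread), 1))
```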

So the “problem space” of humans and ecosystems coexisting is deeply complex, and if feed lots are done well, then they can be part of an effective solution to the problem of ground water pollution, and it is also true that doing feedlots well is not simple, and there are many more issues of animal welfare that need to be effectively addressed than simply looking at groundwater pollution.

So no – I cannot be against feedlots as a concept.
Going backwards to low productivity technologies actually requires more land area, and thus in the big picture leaves less opportunity for natural systems – if we are to feed the people we have.

We need high tech, and high tech by itself can just mean bigger problems if it is not done in ways that do actually manage all of the potential issues effectively. Left simply to market incentives, then yes, feedlots create massive environmental issues if they are being simply optimised for maximum output at minimal cost.

If however, feedlots are used with a lot of very high technology that is constantly monitoring and adjusting and optimising for maximal animal welfare, minimal environmental impact and producing a profit, then they can have orders of magnitude less impact than any form of pasture production, simply because the outputs of the animals can be contained and processed appropriately.

Feedlots done badly – big problem.

Feedlots done well – near optimal solution possible.

Posted in Climate change, Ideas, Nature, Our Future, Technology, understanding

The idea of Good and Bad people is too simple.

Good and Bad People

[ 16/June/22 ]

I was listening to Rory McIlroy’s interview over the LIV golf, and there was one thing he said that I think was far more problem than solution; and that was when he said “there are good and bad people everywhere”.

To me, that is a dangerous over simplification.

It seems clear to me that every person does some things that appear clearly “bad”, and others that seem clearly “good”, and for most of us most of the time we are really not certain, we are just making our best guess.

I’ve mentioned the Solzhenitsyn quote a few times lately – about the line between good and evil going through the heart of everyone. Anything less than that awareness is a dangerous and inappropriate simplification.

Posted in Philosophy, understanding

Lex interviewing Robin Hanson

Robin Hanson: Alien Civilizations, UFOs, and the Future of Humanity | Lex Fridman Podcast #292

[ 14/June/22 ]

I’ve had my share of arguments with Robin over the years, and I give him full respect for his knowledge of economics; but his knowledge of biology and particularly the strategic underpinnings of evolution is woefully inadequate and just simply wrong on multiple levels.

Life is complex – really complex.

Human life is the most complex and the most cooperative life on this planet (which also happens to mean, “that we know of at this time”).

While there are certainly many aspects of competition that are eternally part of any evolutionary system, in terms of the emergence and survival of new levels of complexity it is true to say that their emergence is empowered by cooperation and their long term survival is predicated on the long term maintenance of that cooperation, and that demands an evolving ecosystem of cheat detection and mitigation strategies.

And in my understandings, one of the most powerful characterisations of life is “search” for survivability across the space of possible systems and possible contexts. That leads to the most general case of life possible, which is systems capable of real time adaptation to changes in contexts and recursive search through systemic and logical spaces for novel solutions to identified problems, and novel opportunities. And when one delves into the theory of search, the most efficient search possible for a fully loaded processor is the fully random search (which does lead to an interesting set of conjectures about how neural networks as necessarily biased as human neural networks may approximate random search in different classes of contexts, and how evolution may have embodied such things into our neurochemistry).

Unusually, I find myself getting really annoyed and frustrated by the multiple instances of over simplification of the truly complex leading to entirely inappropriate conclusions — at least in the first 80 minutes of the interview.

In the latter part of the interview I find Robin often at his superb best – but he still over simplifies the constraints required to get long term survivable outcomes from markets (particularly betting markets).

Agree with Robin that most ideas become more obvious over time, as information accumulates. So agree that Einstein deserves some celebration for doing it first, and others would have done it later if he had not. That is clearly evident when one views life as “search” (eternally).

Agree completely that the lesson of AI is the view-quake that perception is hard!!!

But the thing from biochemistry is that life is complex – deeply complex, and subtle.

The chunkiness of AI is defined by the biochemistry of the computational systems of brains. I developed one solution in 1974, but some things are too dangerous to release.

Around 3:17:20 Robin speaks about emulating the power of the cells of the brain – that is an inadequate model. What one needs to do is emulate the computational systems of the brain. Some of those are at the cellular level, some are at the synaptic level, and some are at the level of the protein structures within the synapse. Computation occurs at all of these levels (and at others, within the body, and the various “organs”). We are the embodied whole of that. Getting some feel for the computation possible in the quantum aspect of protein structures is fundamental to getting a reasonable handle on just how complex we are. I started from biochemistry in 1973, and the conceptual sets available from biochemistry have increased substantially in the intervening years. Search across the space of pattern through time (at scales from millisecond to 500ms) is where much of the action happens in human brains, and it is at the molecular level – and it is both subtle and powerful – and the search space coverable is vast – of the order of 10^50 patterns per second. And what we get to notice is the differences between expectation and delivery (at least at some scales, in some contexts – and the vast bulk of it is subconscious, necessarily).

3:24:20 The power of markets in complex spaces; yes, provided certain conditions are met. If agents do not have reasonably equivalent tokens of value to engage in markets with, then what markets solve for gets skewed towards those with the most tokens, and that tends towards a leverage spiral, and can lead to systemic failure (a toy sketch of that skew mechanism is shown below).
At the larger scale, the scale of survival on the very long term, there are existential level risks created by the short term heuristics embodied in our neural networks, that are no longer appropriate to the scale of complexity embodied in our systems.
That is not getting sufficient attention.
No market can overcome that inherent bias, in and of its own internal incentive structures.
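
Here is the toy sketch referred to above – the well-known “yard-sale” exchange model, with every parameter an illustrative assumption and no pretence of modelling any actual market: even when every individual trade is fair in expectation, repeated trading concentrates the tokens.

```python
import random

# Toy "yard-sale" exchange model (illustrative only): agents start with equal
# tokens and repeatedly make fair 50/50 bets of a fraction of the poorer
# party's holding. Despite every trade being fair in expectation, tokens
# concentrate over time - what the market "solves for" drifts toward whoever
# already holds the most.
N_AGENTS, N_TRADES, STAKE = 1000, 200_000, 0.1
wealth = [100.0] * N_AGENTS

for _ in range(N_TRADES):
    a, b = random.sample(range(N_AGENTS), 2)
    stake = STAKE * min(wealth[a], wealth[b])
    if random.random() < 0.5:
        wealth[a] += stake; wealth[b] -= stake
    else:
        wealth[a] -= stake; wealth[b] += stake

wealth.sort(reverse=True)
top_share = sum(wealth[: N_AGENTS // 100]) / sum(wealth)
print(f"share of all tokens held by the top 1%: {top_share:.0%}")
```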

And yes – Markets are very powerful in some contexts, and deliver existential risks in others, and we are in one of those transition zones. This is deeply NOT simple!!!

The definition of rational is deeply complex.

Aumann’s agreement theorem is predicated on shared priors.

When individuals have been using random search across vast search spaces, then there can be very little that is in the class “shared priors” – so in a sense, the very concept of “rationality” fails – as there is no stepwise “cause and effect” linkage. What there is are jumps to concept sets that do manage to pass enough tests to be worth keeping in the toolkit, and there is ongoing search. One can imagine that there must be a stepwise “rational path”, but one does not have the time to search for it – too much else needs doing.
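
A minimal illustration of that point about priors (my own sketch, not Robin’s or Lex’s; the prior values and likelihoods are arbitrary illustrative numbers): two agents applying the same Bayesian update to identical evidence still end up far apart when their priors differ – and a common prior is exactly the premise Aumann’s theorem needs, which divergent individual search histories do not supply.

```python
# Two agents see identical evidence, apply identical likelihoods, and still
# end up far apart because their priors differ (illustrative numbers only).
def posterior(prior_h, p_e_given_h=0.8, p_e_given_not_h=0.3):
    """Bayes' rule: P(H|E) for a single piece of evidence E."""
    num = prior_h * p_e_given_h
    return num / (num + (1 - prior_h) * p_e_given_not_h)

for name, prior in (("agent A", 0.9), ("agent B", 0.05)):
    belief = prior
    for _ in range(3):          # both agents observe the same 3 pieces of evidence
        belief = posterior(belief)
    print(f"{name}: prior {prior:.2f} -> posterior {belief:.2f}")
# Same data, same update rule, different priors -> persistent disagreement.
```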

Totally agree with Robin that we do need to actually try stuff. We need multiple instances of some version of “safe to fail” experimentation – all institutions, all levels, all systems.

And this is the antithesis of any level of hegemony.

Completely align with Robin’s advice on life course.

Totally agree with Robin about at every moment having the option of keeping going, and would add that I would like to experience increasing function and increasing resilience with time, rather than the reduction I currently experience.

The final argument of Robin’s about competition is straight out of the heart of economics, and makes sense in that context, but it fails in the wider and much deeper context of biology and the systemic strategic constructs of the evolution of complexity.

In that context, yes, there must eternally be competitive aspects, and any level of competition that is not firmly based in cooperation, necessarily self terminates.

This is my prime candidate for the “Great Filter”.

The evolutionary pressure to select and bias for simplicity is entirely predictable, and if not seen for what it is, does prevent the possibility of even experiencing the levels of complexity present in life. It is like a recursive form of confirmation bias deeply embedded in our neural networks.

Competition, without a cooperative base, self terminates – necessarily, in every class of logic I have explored.

Tunicates give us ample evidence that brains are there primarily for navigation.

Embodiment is an essential aspect of human cognition, and intelligence.

When one views life as recursive levels of search across strategic spaces for survivable systems, it should not be a surprise that most systems fail. The number of systems that are not survivable is vastly greater than the subset that is. It is somewhat analogous to Wolfram’s ruliad, yet different.

Posted in economics, Our Future, Technology, understanding

Dangers of over simplification

[ 14/June/22 ]

Agree.

It seems clear to me that Aleksandr Solzhenitsyn was very close to truth when he wrote “The line separating good and evil passes not through states, nor between classes, nor between political parties either – but right through every human heart…even within hearts overwhelmed by evil, one small bridgehead of good is retained. And even in the best of all hearts, there remains…an uprooted small corner of evil.”

We are all far more capable than we would like to admit of doing things we would not like to admit to.

There is no justice in the genetic lotteries of our conception, and little more in the early development of our brains. Precious little more in the societies of our birth and childhood, and even now in our social order.

And the fact that we are all here, are all alive, and can use this technology to communicate, is hard evidence of the fundamental cooperative nature of all human beings, and we can all compete if the context demands it of us. Yet it is cooperation, not competition, that is the fundamental glue that makes social interaction possible.

Our social systems are far from perfect.

Every human being is far from perfect.

It is our tendency to over simplify that which is actually truly complex that leads us to simple classifications like good and evil. We are all vastly more complex and nuanced than that.

We need social and economic systems that embody justice, not ones that perpetuate injustice. And that is a deeply complex subject for which there are no simple answers possible. Any workable answer has to accept fundamental uncertainty, fundamental respect for diversity, and an eternal need for conversations and change. Nothing less is survivable, long term.

Posted in understanding