Agree with almost all you wrote in http://nonsite.org/article/responses-to-two-problems-with-a-neuroaesthetic-theory-of-interpretation.
One issue you didn’t raise, which in a sense falsifies the whole thought experiment:
Have you ever considered how improbable it is that any non-intentional natural process would assemble even one word, let alone a stanza? By random chance, a single sentence is unlikely to occur anywhere in the observable universe, ever. (As Dan Dennett points out with his Library of Babel intuition pump – Chapter 48 of Intuition Pumps and Other Tools for Thinking – the space of possible variations is vast.)
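To make that point concrete, here is a back-of-envelope sketch (my own illustration – the sentence and the 27-symbol alphabet are assumptions, not Dennett's figures): even a one-line sentence picks out a single string from among more possibilities than there are atoms in the observable universe (roughly 10^80).

```python
# Back-of-envelope illustration: how many strings of the same length as a
# short sentence exist, if each character is drawn uniformly at random from
# a 27-symbol alphabet (26 letters plus space)? Numbers are illustrative.
sentence = "have you ever considered how improbable such an assembly is"
alphabet_size = 27
combinations = alphabet_size ** len(sentence)

# Probability that a single uniform random draw of this length matches:
p = 1 / combinations

print(f"sentence length: {len(sentence)} characters")
print(f"possible strings of that length: about 10^{len(str(combinations)) - 1}")
```

Each additional character multiplies the space by 27, so the odds against blind assembly grow exponentially with length – which is why taking any string of real words as non-intentional output is such a poor default bet.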
So taking the intentional stance with any set of words is entirely appropriate, even if the intention one ends up choosing is something like: “some kid playing with a computer, writing a program to generate nonsense verse with variable degrees of nonsense”.
And yes – certainly, interpretation is always personal, and shared interpretation of anything non-trivial is problematic, particularly where higher orders of abstraction are involved. Witness two examples from Richard Dawkins’s Brief Candle in the Dark: on pages 402-403, Gould fails to move from metaphor to abstraction; and on pages 428-429, Dawkins’s own use of the term “purloined comestible” in reference to Genesis 2:17 (“But of the tree of the knowledge of good and evil, thou shalt not eat of it: for in the day that thou eatest thereof thou shalt surely die”), which could not be more obviously metaphorical.
So Richard proves himself every bit as human as Stephen: the two commit exactly the same error, in different contexts, within a few pages of each other.
It is worth contemplating the evolutionary drive to be conservative with information processing (as per Andy Clark – Surfing Uncertainty: Prediction, Action, and the Embodied Mind): at every level within cognition there is a tendency to generate expected states, compare them to actual states, and pass on to the next level only the differences.
Thus it seems very likely that we each live in our own experiential reality, defined by the expectation functions we have acquired and found reliable in our particular life histories; those models are updated only by things sufficiently different from the expected to be noticed as such at some level of the processing systems that constitute the human brain.
Thus communication of second- or higher-level abstractions is extremely problematic, particularly when such abstractions span multiple domains (as any higher-level abstraction must, by definition) – again as evidenced by the Dawkins example above. I have the highest possible respect for Richard, and even he is not above making errors.
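The compare-expectations-and-pass-differences scheme can be sketched in a few lines of toy code (a minimal sketch of the predictive-processing idea only – the scalar signal and the update rule are my own illustrative assumptions, not anything from Clark’s book):

```python
# Toy sketch of hierarchical predictive processing: each level holds an
# expectation, compares it to the incoming signal, updates its model, and
# passes only the difference (the prediction error) up to the next level.
def process(levels, signal, learning_rate=0.5):
    """Propagate a scalar signal up a stack of expectation levels.

    levels: list of current expected values, one per level (updated in place).
    Returns the prediction error passed upward at each level.
    """
    errors = []
    incoming = signal
    for i, expected in enumerate(levels):
        error = incoming - expected          # compare expectation to input
        levels[i] += learning_rate * error   # nudge the model toward input
        errors.append(error)
        incoming = error                     # only the difference goes up
    return errors

levels = [0.0, 0.0]
first = process(levels, 1.0)   # a surprising input yields large errors
second = process(levels, 1.0)  # the same input again is better predicted
```

Note how quickly the errors shrink once the input matches expectation: after the models adapt, a repeated signal generates almost nothing to pass upward, which is exactly why well-predicted experience goes unnoticed.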
And with these caveats, I enjoyed your writing. Thank you.
[No response at 3 weeks]