The Hard Problem of Consciousness revisited

Why The Hard Problem Of Consciousness Won’t Go Away

This attempt at communication is most unlikely to succeed, but I will give it my best shot.

In my understanding, the qualia of experience are accounted for as the interaction of a software entity (our self-awareness) with a software model of reality (what our subconscious brains create from past and present experience). The mistake that many people make is assuming that we experience reality directly. The evidence is now beyond any shadow of reasonable doubt that we have no direct access to reality; all experience is mediated through a brain-created model.

The other key idea is somewhat more difficult to get, and that is that reality is not causal; it only approximates causality (to many decimal digits in most situations).
It seems clear that the best evidence we have indicates that at the quantum level of the very very small, existence is stochastic, but within constraints. There are probability distributions, not hard causal certainty.
Summed over vast numbers, these distributions result in very close approximations to hard causality at the scale of normal human perception. Certainly close enough to build jet engines, supercomputers, sky-scrapers, and all the technology we see in existence.
And none of those things prove causality, they only demonstrate the degree of approximation of causality.

The shortest time period a human being can experience is about a hundredth of a second.
The smallest thing we can see with our naked eyes as a tiny indistinct dot against a background of a different colour, contains 10^17 atoms.
The subatomic particles (if the word particle has any real meaning), or perhaps better described as the smallest entities of existence we currently have evidence for, can experience about 10^40 of their smallest time units in the shortest time a human can experience. Given the huge numbers involved, it is no surprise that humans experience something very close to hard causality most of the time. Those probability functions get very well populated by numbers like that, and while any single event might be random, any collection of 10^57 events forms a very predictable pattern.
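The claim that vast numbers of individually random events sum to near-certainty is the law of large numbers, and it is easy to demonstrate. A minimal Python sketch (the event counts here are illustrative only; nothing like the 10^57 events above is computationally feasible):

```python
import random

def flip_average(n_events, seed=1):
    """Average of n_events fair coin flips (each flip counts as 0 or 1)."""
    rng = random.Random(seed)  # fixed seed so the run is repeatable
    return sum(rng.random() < 0.5 for _ in range(n_events)) / n_events

# Any single flip is unpredictable, but the average over many flips
# settles ever closer to the expected value of 0.5.
for n in (10, 10_000, 1_000_000):
    print(n, flip_average(n))
```

The spread around 0.5 shrinks roughly as 1/sqrt(n), which is why, at anything like 10^57 events, the residual randomness sits far below anything human perception could detect.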

So it seems that we live in a universe of constrained randomness, and it delivers something very close to hard causality at the level of entities like ourselves.

It does appear that, in such a universe, stochastic (random) within probability distributions, real freedom can exist.

I wrote an explanation of how qualia come to be, and what they are in a generalised sense, about two and a half years ago:
https://tedhowardnz.wordpress.com/2013/07/28/nature-of-consciousness/

If you want a more detailed explanation, follow that link.

And I need to be explicit, that I do not deal in truth.
It seems clear to me that the best I can hope for is a useful approximation to something. The idea that someone can model something accurately, and be 100% confident that their model is 100% accurate, is to me a nonsense. We may in fact at times model something accurately, yet we could never be 100% certain of that. We might suspect, and if we are honest, there must always remain an element of doubt.

Having spent over 50 years studying living systems, and the matter from which they are made, I have some beginnings of an idea of just how complex even the simplest living cell is, and we are a vast collection of very complex cells that can and do take on many different forms and many different functions.

So consciousness takes a bit of work to understand, in broad brush terms, and it isn’t that difficult, in the same sense that understanding a modern computer isn’t that difficult: a computer is just a collection of seven basic circuits, mostly in repeated groups, with minor variations on a theme; some of those variations are very significant, and involve new levels of understanding and operation. We are vastly more complex than that.

“ex nihilo nihil fit” is based on a set of assumptions that appear not to hold at all scales in this reality in which we find ourselves.
The essence of enquiry is in questioning.

About Ted Howard NZ

Seems like I might be a cancer survivor. Thinking about the systemic incentives within the world we find ourselves in, and how we might adjust them to provide an environment that supports everyone (no exceptions) - see www.tedhowardnz.com/money

8 Responses to The Hard Problem of Consciousness revisited

  1. rgbuzz says:

    This post addresses the easy problems of consciousness. What it doesn’t address is why there should be experience at all, why I shouldn’t be “mentally dark”, why there should be something that it is like to be me. Conscious behavior may admit of a materialist description and explanation, but consciousness – that is, experience – cannot be reduced to certain neuron firings as a computer’s software can be reduced to its hardware.


    • Actually, it does seem to me that this post does address the problem of experiential consciousness.
      And it is complex.
      It is multi levelled.
      It requires cells – which are really complex.
      It requires those cells to be connected into networks, which are really complex – between 20 and 50 major sets of networks interacting (and millions of minor sets).
      It requires chemical modulation of the structure and relationships of those networks, for which we have identified about 60 modulators thus far, and I suspect there will be a similar number of more subtle systems yet to be detected.
      And that is just at the hardware level.

      Then there is all the learning that goes on, to produce the distinctions that produce the model that becomes the experiential reality of the software entity that is us – which again is not a singular thing, but a complex set of interacting things that produce a singular set of memories.

      I am clear, beyond any shadow of reasonable doubt, that all my experience is of a software model of reality (slightly predictive – around 200ms usually, and usually very accurate, and sometimes not).

      So just as when one is playing a full immersion Virtual Reality game, or deeply engaged in a movie, one is in that reality (a model of a model in this sense).

      I have experienced very many interruptions to experience, and modification of experience.

      When you go under general anaesthetic, one instant you are in the prep room, counting, then the next instant you are groggy in recovery, and some minutes or hours have passed – six and a half hours in the longest of my experiences.

      When training for deep diving, and very long breath holding (over 7 minutes), I got used to many experiences of very low levels of consciousness, including a very restricted consciousness that could only count, and detect temperature on my face – having lost vision, and all motor and proprioceptor function. A few years’ experience of that sort of consciousness, coming on the back of undergraduate study of biochemistry and neurophysiology in the early 70s, taught me a lot about the nature of consciousness. I did a lot of work on myself with early video-cameras, noting the very subtle difference between my experience and what the camera recorded.

      You can’t reduce the VR of an MMORPG to a single gate switching; it is the result of billions of gates switching on many machines over time, and it is what it is.
      We are like that in a sense.
      Of course you can’t reduce consciousness to a single neuron.
      Neurons are complex cells, with very complex function and behaviour and relationships to other cells.
      Neurons connect into wider networks, and are influenced by that.
      Networks connect into more abstract networks.
      Patterns on patterns (software) develop in those networks.
      Software interacts with software in a multidimensional structure that is really complex.

      And that does seem, beyond any shadow of reasonable doubt, to be what this experience of being human that I have is – software on software, running on very complex squishy hardware.

      50 years of interest and investigation into biochemistry and neurophysiology, along with 40 years experience as a software developer (all levels, hardware through op codes, to writing languages and operating systems, to developing programs for users – 30 years MD of a software company) give me a practical set of experiences that allow for a set of understandings, and a depth of abstractions that are not at all common.

      And there is this thing about abstractions, one requires experience in a domain to make abstractions from that experience, repeat recursively. There simply is no substitute for experience in this sense, and it takes time and intentionality of exposure to a vast array of domains of knowledge and experience, to enable higher order abstractions.

      So I don’t expect many others to have the sort of understanding I have.
      And I do in fact have the understanding I do.
      And it is very hard to communicate.

      It is impossible to communicate any level of abstraction.
      All one can do is expose another to sorts of fields of experience, and indicate the sort of relationship one is looking at, but the actual act of abstraction, of forming that relationship internally, is an intuitive and personal thing.
      That is tricky enough with first order abstractions.
      By the time one gets to second and third order abstractions, communication is marginal at best.
      Beyond three very doubtful at the best of times.
      Beyond 8, infinitesimal probabilities.


      • rgbuzz says:

        Your explanation still doesn’t address the problem. It’s complexes and complexes, models and models, and then *poof*, that’s just what conscious experience is.

        I’m not saying that experience does not depend on neurophysiological facts, but you say that experience just comes out of neurophysiological facts without explaining how. This is the hard problem, and the problem for reductive materialism.

        Here’s the claim that needs further argument: “Networks connect into more abstract networks.
        Patterns on patterns (software) develop in those networks.
        Software interacts with software in a multidimensional structure that is really complex.” Presumably those patterns are patterns of physical firings. So various patterns interact with other patterns in complex ways. But this is not identical with experience.


      • Consider playing a computer game.
        What is actually present is little dots of different colours on a screen, that are on or off or at some intermediate intensity.
        Yet it is very rare for a human being to look at a computer screen, and see little coloured dots.
        What we see is words or pictures, or people or animals etc.
        We see distinctions, patterns at various levels of abstraction.
        Our minds, our experience, fill in the gaps.
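        The dots-versus-picture point can be made concrete with a toy recogniser. A minimal Python sketch, in which the grid, the template, and the function names are all invented for illustration, and a stored template stands in for a learned distinction:

```python
# A 5x5 grid of "pixels" (1 = lit dot). At the dot level this is just
# 25 independent values; at a higher level of abstraction it reads as
# the letter "T".
GRID = [
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]

# A stored template standing in for a learned "distinction".
TEMPLATES = {"T": GRID}

def recognise(grid):
    """Return the label of the first matching template, else None."""
    for label, template in TEMPLATES.items():
        if grid == template:
            return label
    return None

print(recognise(GRID))  # the experience is of "T", not of 25 dots
```

        What reaches awareness in this toy model is the label, not the raw dots; the gap-filling described above is everything hidden inside the matching step.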

        I have no problem with the idea that being human is a software pattern, taking input from a model of reality, running at a frame rate of about 200 cycles per second maximum (usually much slower – most people operate around 11 cycles per second most of the time, it takes a great deal of practice to reach 200Hz consciousness consistently).

        So it seems that we are this sequence of discrete state systems, at some frame rate, with all the millions of simultaneous inputs we have, giving the experience we have. All software.
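        The discrete-state, frame-by-frame picture can be sketched very simply. A toy Python sketch, with an invented blending rule standing in for the slightly predictive model (the 11Hz and 200Hz figures quoted above are this comment's, not anything the code measures):

```python
def run_frames(inputs, update, state=0.0):
    """Step a discrete-state system: each frame, the new state is
    computed from the previous state plus that frame's input."""
    states = []
    for frame_input in inputs:
        state = update(state, frame_input)
        states.append(state)
    return states

# Invented update rule: the new state blends the old state with the
# new input, so the system lags reality slightly, loosely like the
# ~200ms predictive model described earlier.
def blend(state, frame_input):
    return 0.8 * state + 0.2 * frame_input

print(run_frames([10, 10, 10, 10], blend))
```

        Each element of the output is one "frame of experience": a discrete state computed from history plus current input, never a direct copy of the input itself.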

        It just feels like it does; that is the personal thing about experience: all we can ever have is the experience we have. We can never have another’s experience (we may get close, and that will never quite be the same thing).

        And I did say it was complex.
        I did say it involves 8th level abstractions, and I have not yet found a way to be explicit about 3rd level abstractions, let alone anything beyond that.
        Communication about such things is very difficult.

        All I can give in a sense is the hint that a path exists.
        It took me many years (decades) of study and experience to get what I have gotten.

        I know of no possible way to shortcut that.
        Sorry.


      • rgbuzz says:

        The dots that are present in different colors are a physical phenomenon. We look at a screen and interpret the dots in such a way that they represent something. But it should be noted that it doesn’t have to be seen that way. If someone wanted to see the dots just as dots, and not as representing some familiar image, they could, and without much struggle. Our minds have something to do with what concepts we can apply to the dots on the screen, but this is not to talk about the hard problem.

        “Taking input from a model of reality”. Maybe I’m being nitpicky, or the point wasn’t clear, but this is a curious characterization. It seems that the input is going to be physical information from an external reality. The input is plugged into a model of reality. The output is going to be something like our mental representations. Is this more the picture you’re going for?

        So you say we are physical discrete state systems. Very complex ones. Fine. But this is the Achilles’ heel of materialism. Because it’s just physical states on physical states until some arbitrary level of complexity is reached, and then suddenly you have something that’s not even really physical at all, but phenomenal. You get appearances, a world that shows up. But your software analogy doesn’t begin to capture this kind of process, for software can still be easily understood in terms of hardware. What demands explanation is why and how something of ostensibly one distinct ontological kind should somehow emerge from another. Just saying that it arises is simply not enough. You should be able to characterize an adequate, explanatory reduction (at least give the right flavor) if you are going to maintain this kind of thesis.


      • Hi RGB,

        I get this is really difficult.

        Perhaps the key assumption is encapsulated in your phrase “is why and how something of ostensibly one distinct ontological kind should somehow emerge from another”. Consider that the assumption that they are of different ontological kinds may be of an exactly analogous type to the naive realism of someone seeing the sun and stars revolving around the earth. Yes, it does seem to be that way; and no, when you look closely at it, with appropriate tools, and from an appropriate context, it is not that way.

        To me, it is clear that the experience of being a software entity existent in a software model of reality is exactly the sort of experience I have.

        And it is really complex.
        We are not a singular system.
        We are the totality of many different instances of many different classes of systems, emotional systems, rational systems, recognition systems, predictor systems. Systems capable of indefinite recursion.

        So we can abstract and model things, and we have to exist in a reality that demands that we feed and exercise our bodies – which rather limits the amount of time we can devote to the uninterrupted exploration of any particular model or simulation we might run to about 70 hours; beyond that we have to stop and deal with bodily functions.

        So we run into another inverted version of the halting problem (which at the same time is a practical solution to the original halting problem), and practical limits on the sorts of complexity we can explore, and we can imagine ways of extending that capability in the future, offloading particular classes of exploration to silicon based systems far more efficiently than we do currently.

        I don’t know how much coding you have done.
        “C” as a language allows pointers to data structures or programs (much like the treasure hunts we did as kids, where each location had a cryptic pointer to the location of the next clue).
        The language I use most often (xBase++) allows arrays of any dimension and any datatype (including objects and codeblocks) – so not only can one pass data to subsystems, but one can also pass contextually relevant code (which can be created on the fly). The language allows mix and match between procedural and Object Oriented programming styles. I like it, it fits my eclectic style of being. Our neocortex seems to have similar sorts of capabilities.
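        The same idea transfers to Python, where functions are first-class values, so a subsystem can be handed behaviour as well as data. A minimal sketch (the names here are invented for illustration):

```python
def make_scaler(factor):
    """Create code on the fly: a closure carrying its own context."""
    return lambda x: x * factor

def process(data, handlers):
    """A subsystem that applies whatever code it was handed, in order."""
    for handler in handlers:
        data = [handler(x) for x in data]
    return data

# Pass data together with contextually created code, in the spirit of
# C function pointers or xBase++ codeblocks.
result = process([1, 2, 3], [make_scaler(2), lambda x: -x])
print(result)  # [-2, -4, -6]
```

        The point survives the translation: once data and code travel together, and code can be generated at run time, the hard line between "the system" and "what the system operates on" starts to blur.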

        When one is used to programming in environments that allow self adaptive sets of data and code to be passed between systems, many of the hard distinctions that seem so obvious in philosophy classes lose their coherence. One has a change of context, a change of perspective, as to how such things may be arranged.

        The AI community now understands the n-dimensional shape of the topology required for infinite extensibility of neural networks. Biology cracked that particular problem in the structure of the human brain several million years ago. We finally are starting to acquire sufficient data and conceptual systems to start to push that biological capability to somewhere near its limits (not that it is limited in the sorts of systems it can explore, only in the speed with which it can explore them). And any non-trivial problem requires adopting heuristics to get sufficient speed to deliver useful outcomes in real time. It simply isn’t possible to do first principles simulations all the way out, all the time – not enough processing capacity in the universe for that.

        So I can understand and empathise with your objection.
        I can see how it does seem that I am just making a jump, and in a very real sense I am, and it is a jump that has many pointers behind it, many proven practical heuristics, and many tens of thousands of hours of very abstract testing and contemplation. For me, it isn’t the blind leap in the dark that it must seem to you, for me it is more like a hop from one boulder to another traversing a riverbed.


      • rgbuzz says:

        I do not think that the assumption that there are different ontological kinds is analogous to the naive realism of someone seeing the stars revolving around the earth. The distinction is deeper than that. I see the sun and stars revolve around the earth, and question whether those bodies are the same ontological kind as the bodies I encounter on earth. It looks sorta like “I wonder if this object of my perception is the same kind of thing as the ordinary objects of my perception”. In contrast, the assumption that there are different ontological kinds is not analogous. I see objects of my perception, and I also notice that I am confronted by something in immediate experience (I will remain neutral on what that “something” is, but I should note that I do not believe in qualia). So this looks like “I wonder if the objects of my perception are the same kind of thing as my perception/experience”. The analogy breaks down, then, because perception/experience is not an object of perception.

        I’m not completely unfamiliar with physicalism; I’ve explored some of the literature. This morning, I finished reading Hofstadter’s Godel, Escher, Bach. I’m not sure if you’ve read it, but it does emphasize, as do you, that many hard distinctions lose their coherence when self-adaptive sets of data and code are passed between systems. And I will not deny this. But I do not think that the hard distinction between the physical and phenomenal will be one of those distinctions which loses its coherence.

        Here’s why I disagree with the jump. You say it has many pointers behind it, many practical heuristics, and many tens of thousands of hours of very abstract testing and contemplation. But this doesn’t seem right. I think that AI research is very good at figuring out laws and mechanisms governing awareness and intelligence. I think that the layering of systems of different levels of abstraction provides a good explanation of intelligence/reasoning, how we become aware of objects, etc. This is what I think has “many pointers behind it”. But that conscious perception – experience – is reducible to just physical processes, doesn’t have “many pointers behind it”. The evidence supports the claim that intelligence is reducible to physical processes. The evidence does not support the claim that conscious perception is reducible to physical processes.

        I don’t mean to come across as rude. I do appreciate your willingness to discuss this with me, as it is a fascinating topic.


      • Hi RGB,

        You don’t come across at all as rude. I love the willingness to hold and challenge a view that you display. I very much enjoy such exchanges.

        Yes – I read GEB about 25 years ago, and exchanged a few emails with Doug around then.

        I do not deny the distinction between the physical and the phenomenal.

        What I am becoming very confident about, is that all human experience is of software, of models very often many layers abstracted from the physical that the model is based upon. So the phenomenal is a software on software thing (and all software requires physical hardware). And it is complex. We have a surprisingly large number of neurons in our gut – hundreds of millions of them – contrary to what most people think.

        So the naive aspect I am pointing to is related to the idea that what we normally experience is somehow reality – whereas the neurophysiology is quite clear: it isn’t. What we experience is a software model of reality constructed from a set of FM signals from our senses, modulated through the distinction sets available in our deep neural networks, the memory sets we have available, and the associated sets of contexts (physical, social, emotional, value, …). That seems to be the context of being human.

        The emotional and the rational are almost completely separate systems. If you take out the emotional (while leaving the rational) then one is left with the deep experience of there being something essentially fake about the situation (the expected emotional signals are not present). Taking it the other way also produces some strange experiences.
        Take both sets out with anaesthetics and there is no experience (however much neural activity the monitors detect – not sufficient coherence for the high level function of experience {which is quite distinct from removing the memory of such experiences, which is also quite possible with selective drug use}).

        So I am not saying that there is no difference between reality and experience.
        And I am saying that we never get to experience reality directly – we only ever get Plato’s shadows (except we now know a lot more about the nature of both the reality, and the way in which the shadows are created, and they are still very much shadows – models in software, not reality itself).

        Once one can understand that “conscious perception” is only ever of a subconsciously generated software model, and never of reality itself, then the evidence does very much “support the claim that conscious perception is reducible to physical processes” – in as much as software is always a function of physical processes, and always adds something to the nature of those processes. There is a two-way feedback: hardware allows software, software influences hardware. Causality, in as much as it exists, is bidirectional and has stochastic aspects.


Comment and critique welcome
