It should be evident from this series of posts that I take consciousness as emergent from the way that the neural system works. It is not enough to simply say “emergence” and treat it as if magical. I do not consider it at all magical. Rather, I see it as consistent with the principles that I outlined in an earlier post, “A semantic conception of mind.”
My view is that the way the brain works is simple in principle, but complex in detail. So I see it as pretty much certain that consciousness would evolve, though the kind of consciousness that emerges might not be identical to human consciousness. I take all mammals to be conscious, with their consciousness perhaps somewhat similar to ours, though lacking the enrichment that language gives us. Other complex creatures such as an octopus or a bee are surely conscious in some way or another, but it is a little hard for us to imagine how they would experience that consciousness.
So why is there a “hard problem” of consciousness? This is because people are looking at it in the wrong way. They are trying to understand how to design consciousness, instead of trying to understand how it would evolve. To me, it seems very unlikely that a designed robotic system could ever lead to consciousness. I expect our designed robots to all be zombies.
This brief post completes my series on consciousness. I will continue to post on other topics, such as knowledge and perception, that are related to consciousness. I realize that many will find my series unsatisfactory, in that it failed to explain to them what they wanted explained. Philosophy seems to be dominated by a kind of design thinking, and an explanation of consciousness does not fit with design thinking.