What is knowledge?

by Neil Rickert

I have made no secret of my disdain for the idea that knowledge is justified true belief, as is often asserted in the literature of epistemology.  In this post, I want to say more about my own view of what constitutes knowledge.

I recently posted a parable, “The blind man and the cave”, to illustrate what is required in order to have knowledge.  To my surprise, one of the comments dismissed everything that I thought important in that parable, and insisted that knowledge is just facts.

All the blind man needs to know is WHAT he is measuring (a fact), and then know the measurement (a fact). Then the facts that he gains (height of the cave) will be the newly acquired knowledge because he understands the facts based on previous facts learned.

That leaves me wondering why philosophers seem to miss (or gloss over) what I see as important.

Of course, I understand that philosophers are interested in studying justified true belief.  However, to do that they ought to say something about “justified”, about “true” and about “belief.”  There are theories of truth within philosophy, but to me they seem to be little more than exercises in circular reasoning.  There is also some sort of account of what “justified” entails, but this is what Gettier challenged.  In my view, Gettier’s challenge has not been satisfactorily answered.

As for “belief”, this is often described as an intentional attitude.  And that should make intentionality an important requirement for knowledge.  However, many philosophers say little about intentionality, other than that they see it as important.  Other philosophers deny the importance of intentionality, and consider it to be little more than a stance that we take.

Take a statement such as “Roses are red.”  This is often said to be a representation, because it represents something about the nature of roses (or of some roses).  You cannot have representations without a representation system.  There needs to be a system of conventions that connects the representation to what is represented.  In this case, there needs to be a categorization convention, such that the roses constitute that category.  And there needs to be a naming convention, to assign the name “roses” to things in that category.  Likewise, there needs to be a categorization convention to select a category of hues, and a naming convention that assigns the name “red” to hues in that category.  Those conventions are, in effect, part of what defines the representation system in which we express statements such as “Roses are red.”

You could not have representations without a representation system.  Only by being systematic in how you represent can you know what is represented.  And if there is no way of knowing what is represented by X, then it makes no sense to say that X is a representation.  The representation system and the representations are complementary to one another.
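To make the idea concrete, here is a minimal sketch of a representation system as a pair of conventions: a categorization convention and a naming convention.  It is only an illustration; every name in it (and the hue range used for “red”) is invented for the example.

    # A representation system modeled as adopted conventions: each naming
    # convention maps a label to a categorization convention (a membership
    # test). A statement only has content relative to these conventions.

    class RepresentationSystem:
        def __init__(self):
            self.conventions = {}  # label -> membership test

        def adopt(self, label, membership_test):
            """Adopt a naming convention for a category."""
            self.conventions[label] = membership_test

        def holds(self, label, thing):
            """Evaluate 'thing is <label>' under the adopted conventions."""
            test = self.conventions.get(label)
            if test is None:
                raise KeyError(f"no convention gives {label!r} a meaning")
            return test(thing)

    # "Roses are red" presupposes both conventions:
    system = RepresentationSystem()
    system.adopt("rose", lambda t: t.get("kind") == "rose")
    system.adopt("red", lambda t: 620 <= t.get("hue_nm", 0) <= 750)

    flower = {"kind": "rose", "hue_nm": 680}
    print(system.holds("rose", flower) and system.holds("red", flower))  # True

Without the two adopt calls, the final line has no determinate answer at all; that is the sense in which a representation is meaningless outside its representation system.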

The growth of a person’s knowledge requires that the representation system be extended so as to allow new facts that were previously not representable.  It might also require that a person acquire new facts.  The extending of the representation system is something like perceptual learning.

The parable of the blind man and the cave was intended to illustrate the idea of a representation system.  For the blind man, the scaffold that he built in the cave formed part of the basis for his representation system, and that was what allowed him to know the shape of his cave.


21 Comments to “What is knowledge?”

  1. Neil,

    That commenter sounds like quite a trouble maker…hehehe.

    “Take a statement such as “Roses are red.” This is often said to be a representation, because it represents something about the nature of roses (or of some roses). You cannot have representations without a representation system. There needs to be a system of conventions that connects the representation to what is represented. In this case, there needs to be a categorization convention, such that the roses constitute that category. And there needs to be a naming convention, to assign the name “roses” to things in that category. Likewise, there needs to be a categorization convention to select a category of hues, and a naming convention that assigns the name “red” to hues in that category. Those conventions are, in effect, part of what defines the representation system in which we express statements such as “Roses are red.” ”

    I would argue that all of those things needed for a representation system, the categorization convention, naming convention, etc., are all “facts”. We’ve defined them to be what they are — that is “true”. For example, in order to “justifiably” say that “Roses are red”, we need to assume that we’ve defined what “Roses” are exactly, and we need to define “Red” (a somewhat arbitrary range of colors — which we may call “shades of red”) and determine if the roses’ color falls in that (sub)category.

    However, there are no clear boundaries of colors. I could look at a continuous visible light spectrum and ask where does “red” end and “orange” begin, or where does “purple” end and “red” begin? The fact that there is no clear boundary implies that there are red-orange mixtures throughout the transition from what we call “red” to what we may call “orange” (we may choose to call it “red” or “orange” based on which color we think is most dominant in the mixture — often there is disagreement here, showing the lack of universality in color naming conventions).

    Since there is no objective way of determining what color we ascribe to objects (we tend to do it by sight alone rather than by a more objective property like “reflected light wavelength”), there has to be an agreed-upon convention in order to discuss these colors by name and understand what they should look like (at least for most of the “less ambiguous” colors). In order to discuss the color of objects we also have to define or “know” what “color” is. This seems to be accomplished by experiencing different “colors” (a discrimination that our brain has developed — especially in our early childhood years — to help differentiate objects by one particular visual quality), and we need to have people who agree that we should call it “this color” or “that color” (which then becomes a “fact”).

    Later, if we see something that appears to be the same color as something we’ve previously agreed to call “Red”, we can do so based on that fact. So I see the naming conventions as a collection of facts (we agree what to call various colors), as well as the categories (we agree to call a specific visual metric “color”). So in all, we had to learn what a “rose” was (or what we wanted to call “a rose”), and we had to learn what “color” was, including what the specific color “Red” was. After learning these facts, and the fact of how those facts relate to one another (that roses have a property such as color), we can say “Roses are red”.
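    To illustrate the point about conventions over a continuous spectrum, here is a small sketch. The wavelength boundaries are rough approximations of common textbook bands, not precise facts, and the 10 nm “disagreement zone” is an invented stand-in for the fuzziness described above.

        # Color names as an agreed convention over continuous wavelengths.
        # Band edges are approximate; near an edge, observers may disagree.
        COLOR_BANDS_NM = [        # (lower edge, upper edge, agreed name)
            (380, 450, "violet"),
            (450, 495, "blue"),
            (495, 570, "green"),
            (570, 590, "yellow"),
            (590, 620, "orange"),
            (620, 750, "red"),
        ]

        def color_name(wavelength_nm):
            """Apply the naming convention; flag wavelengths near a band
            edge, where the convention gives no clear answer."""
            for lo, hi, name in COLOR_BANDS_NM:
                if lo <= wavelength_nm < hi:
                    near_edge = wavelength_nm - lo < 10 or hi - wavelength_nm < 10
                    return name + (" (boundary case)" if near_edge else "")
            return "outside the visible range"

        print(color_name(700))  # red
        print(color_name(618))  # orange (boundary case): some would say red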

    “The growth of a person’s knowledge requires that the representation system be extended so as to allow new facts that were previously not representable. It might also require that a person acquire new facts. The extending of the representation system is something like perceptual learning.”

    Yes, and I see that representation-system extension as “acquiring new facts” (what we label things, agreed-upon conventions, categories, gradients, etc.). I also see that there are certain facts which relate other facts (or involve a combination of other facts). This is one thing that seems to increase the context of otherwise meaningless information.
    I see perceptual learning as — again — just increasing the number of facts that relate to our perceptions. By experiencing something enough times, our sensory system can start to discriminate at a higher resolution (be it sound, color, taste, smell, etc.). If someone else has the same ability of discrimination, then you have someone who can confirm what you see and agree that some new property was found or that some hue of difference actually exists (that it is factual). Likewise, looking for something specific will modify how our attention is distributed.

    “For the blind man, the scaffold that he built in the cave formed part of the basis for his representation system, and that was what allowed him to know the shape of his cave.”

    I do see what you’re trying to say in this post. I agree that we do use representation systems of some form (we need this to describe properties of things that we can agree upon). But I believe that those representation systems are still just a collection of what we define to be “facts”. The blind man learned the fact (at some point) that the scaffold has the property of extending into space and the ability to take just about any shape (which he could use to compensate for his lack of sight). He also learned that he could link/label the precise location of each scaffold constituent (relative to every other) to have a way of describing that extension in some way that he found useful (a 3-axis coordinate system; a small sketch after the list below illustrates this scheme). The fact that the scaffold extends into 3D space to take any shape, and the fact that it can be reduced into parts, allowed him to use this scheme for acquiring specific knowledge about the cave. He wouldn’t have been able to do this if he didn’t have those facts. These facts may include but are not limited to:
    – what a cave is
    – what “shape” is (he has to know this in order to ask a meaningful question about the shape of the cave)
    – what a scaffold is
    – what is needed to assemble the scaffold (tools, instructions, etc.)
    – what a scaffold can be used for (to take shape of the cave)
    – what the relationship between “scaffold” and “shape” is (one can describe or mimic the other, the scaffold has the property of shape, etc.)
    – what a measurement is (shortest distance between two points, etc.)
    – what his unit of measurement is (feet, meters, etc.)
    – what 3D space is and how to represent it mathematically (3-axis coordinate system, measurement units, etc.)
    – what the relationship between “scaffold” and “3D space” is (one can be described in terms of the other)
    – how the aforementioned facts relate to one another (this creates new sets of facts based on those relationships, which are themselves facts)

    I think that if he acquires these facts (among many others, including the definitions of any words used within these facts), he would be able to gain the new knowledge he seeks (the shape of the cave).
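    To make the coordinate-scheme idea concrete, here is a minimal sketch; the joint labels and numbers are invented for illustration, and this is just one way the facts listed above could be put to work.

        # A toy version of the blind man's scheme: scaffold joints become
        # labeled points in a 3-axis coordinate system, and "measurement"
        # is the shortest distance between two points.
        import math

        scaffold = {                      # joint label -> (x, y, z) in meters
            "floor_A":   (0.0, 0.0, 0.0),
            "ceiling_A": (0.0, 0.0, 4.2),
            "floor_B":   (3.0, 1.0, 0.0),
        }

        def distance(p, q):
            """Shortest distance between two points (the measurement fact)."""
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

        # The height of the cave at A is read straight off the representation:
        print(distance(scaffold["floor_A"], scaffold["ceiling_A"]))  # 4.2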


    • I would argue that all of those things needed for a representation system, the categorization convention, naming convention, etc., are all “facts”.

      So if I use pencil marks on paper as a way of making representations, will you say that the pencil and the paper are facts?

      I think you are distorting the usual meaning of “fact”.


      • Neil,

        “So if I use pencil marks on paper as a way of making representations, will you say that the pencil and the paper are facts?”

        I wouldn’t necessarily say that the pencil and paper are “facts” as much as they are mediums for storing or transmitting information — so we could call them “objects” or “mediums”, just as a voice is a medium to transmit information or just as a brain is a medium for storing information. I see the pencil and paper as only aiding in storing or transmitting information. I would say that our ability to use a pencil and paper to store or transmit information/symbols is a “fact” or a combination of “facts”. The agreed upon language used in that transcription is a collection of “facts”. Knowing how to use the pencil and paper to accomplish this task would also involve many “facts” (e.g. “a pencil can be used to write/copy information”; “paper can be used to view/store that information”; “spoken language can be stored in written form”; “the agreed upon symbols used to represent the words needed are…..”, etc.).

        “I think you are distorting the usual meaning of “fact”.”

        Facts (in philosophy) are considered to be: “something which is the case, that is, a state of affairs.” (Stanford Encyclopedia of Philosophy)
        Facts (in science) are usually considered to be “an objective and verifiable observation”.

        Those are pretty common definitions.

        I think my examples qualify for either definition.


  2. “Of course, I understand that philosophers are interested in studying justified true belief. However, to do that they ought to say something about “justified”, about “true” and about “belief.” There are theories of truth within philosophy, but to me they seem to be little more than exercises in circular reasoning. There is also some sort of account of what “justified” entails, but this is what Gettier challenged. In my view, Gettier’s challenge has not been satisfactorily answered.”

    I do agree with you here. What is “justified”, what is “true” and what is “belief”? We need to know these things before we can confirm the validity of our arguments for or against the use of that term to describe what knowledge is. I will say that I liked Goldman’s causal theory (in terms of eliminating the Gettier cases). I do think that “justified true belief” is still the best we’ve got so far in terms of defining knowledge. I would say that people are looking for something more like “true belief that has 100% justification” — but there is no way of justifying anything 100% (not that I can see). So the closest we can get (it seems) is to say that knowledge is “justified true belief”, and that the quality or worth of that knowledge is proportional to the amount of justification. I’m in line with Socrates’ “I know nothing”. Perhaps this supposed belief of Socrates was based on the “fact” that nothing was 100% justified, and perhaps he saw knowledge as “100% justified true belief”. Who knows? In the case of the Gettier problems, we need to distinguish the actual state of affairs from what is claimed to be the “belief” — if we are to say that a belief is “true” but still fails to be knowledge (because the “true-ness” was a coincidence, etc.).


  3. I think an interesting question to ask, which I’m sure has been contemplated many times, is: what is the difference between “knowledge” obtained via a “brain in a vat” (BIV) scenario and “knowledge” obtained the way we think it’s obtained (that our brains are actually in our bodies and we are experiencing a “true” human reality)? I bring this up only because we are discussing knowledge = justified true belief. What justification do we have that any of our beliefs are true (if we are just BIVs)? I know many have contemplated this skeptical position, but if we take it seriously (I do, even though I don’t exactly think my “brain is in a vat”), then we realize that we have no reason to believe that any of our beliefs are justified. If it’s true that we can’t know whether or not we’re BIVs, then we must assume that either could be true. If we do this, then we have to assume that “knowledge” is compatible with both cases (since we’ll never know). That means we need a new definition of knowledge that incorporates this possibility (complete lack of justification, and potentially needing to refine the definition of “true”). Hmm… well now we’re getting somewhere. Perhaps…


    • I think an interesting question to ask, which I’m sure has been contemplated many times, is: what is the difference between “knowledge” obtained via a “brain in a vat” (BIV) scenario and “knowledge” obtained the way we think it’s obtained (that our brains are actually in our bodies and we are experiencing a “true” human reality)?

      A brain in a vat cannot have knowledge (in my opinion).


      • “A brain in a vat cannot have knowledge (in my opinion).”

        If that’s true, we may never be able to have knowledge — as we’ll never know whether or not we are BIVs. I’m curious what the reasoning is behind your claim. Depending on how you define knowledge, I may agree with you that the BIV can’t have knowledge. The question then is, do you feel that you have any knowledge at all? If you do, then it is either an illusion of knowledge (possibly, if we’re BIVs) or we do have knowledge and need to redefine it to include the BIV scenario. Otherwise all we are left with is the possibility that we can’t have knowledge — yet with (at least) the illusion that we do have it.


        • Knowledge has to do with our interactions with the world. It is behavioral, not propositional.

          A brain in a vat does not have any behavior. Ergo, it cannot have knowledge.


          • “Knowledge has to do with our interactions with the world. It is behavioral, not propositional.
            A brain in a vat does not have any behavior. Ergo, it cannot have knowledge.”

            By that rationale, we may not be able to have any knowledge then, as we’ll never know whether or not we’re BIVs. It all comes down to how we define behavior, experience, reality, knowledge, etc. If you define knowledge as being behavioral, then you could say whatever “virtual behavior” we seem to experience in our minds (if we are just BIVs) qualifies as “virtual knowledge” (virtual being the way of differentiating “real” from whatever the BIVs experience). It’s as if we have to define everything in terms of the “real” world, and then re-define it in the “virtual” world. We could say that the only “true knowledge” would be that which is acquired by the mad scientist in her “real” world — but we can’t deny the fact that we seem to acquire some kind of knowledge (based on how many define it) in our world (whether it’s “virtual” or not).

            We could go one step further and say that the mad scientist is just another brain in a vat, further removed from the “real” world, making us experience a secondary virtual reality. This could go on ad infinitum. Is our knowledge any more or less existent than the mad scientist’s (if she was but another BIV — one level above us)? Would we be getting any closer to knowledge? I know it’s a silly question and a silly hypothetical, but it could be the case, and the possibility suggests that knowledge may not be all it’s cracked up to be — or it may imply that knowledge (as you define it) doesn’t exist at all, as we couldn’t have it without exhibiting “real world behavior” (as opposed to the “virtual behavior” that we may be experiencing).


          • By that rationale, we may not be able to have any knowledge then, as we’ll never know whether or not we’re BIVs.

            We know what a vat looks like. We know what we look like. Clearly we are not brains in vats.

            Presumably you are raising that as a metaphysical issue. We cannot resolve metaphysical issues. We can only deal with empirical questions. So we should ignore metaphysical issues as pointless. We clearly can have empirical knowledge.


          • “We know what a vat looks like. We know what we look like. Clearly we are not brains in vats.”

            Yes, I was implying that we are actually brains in vats, with electrodes connected to us by some scientist, and a computer is sending impulses to mimic all sensory inputs — thus creating our reality. Theoretically it could be accomplished and we’d never know that the source of our reality is some brain in a vat. You can say “clearly we are not brains in vats”, but hypothetically that’s only because the scientist controlling the brain hasn’t programmed it to be aware that it is in a vat. She has only programmed the brain to experience what we’re experiencing (the illusion of a body, life, family, friends, anything physical for that matter). Yes, it becomes metaphysical, but it is based on ideas that are realistic (if the mad scientist’s technology were sufficient, it could work — as opposed to something completely made up by the imagination, violating laws of physics, etc.). We can never say “clearly we are not brains in vats” unless by “we” you mean the illusory self that is experiencing a synthetic reality (that “we” sees ourselves as physical bodies).

            The same thing would apply if we were dreaming. If we were dreaming and in the dream you said “clearly I’m not lying in bed”, would you be right or wrong? Your dreaming (virtual) self would be correct (you could be walking on a sidewalk for all I know), but the “real self” that is asleep having the dream IS actually lying in a bed (hypothetically).

            It’s just a thought experiment — as I said before, we can’t know anything except, as Descartes concluded, “I think therefore I am”. That is the only thing we can be sure of, as we have no way of concluding that we aren’t being deceived in some major way — thus we have the potential loss of all justification of our beliefs. That is all I’m saying. It’s just something to ponder. I don’t expect any major revelations to precipitate from such pondering, as it is in the realm of the metaphysical. All people want to do is say, “assuming that we’re not brains in vats or being deceived, then… knowledge is this, that, etc.”. I think people do that because it’s far easier to argue and contemplate issues when your idea of reality is better grounded. When that is lost, everything becomes much more complicated (which is stimulating in its own right).


          • Yes, I was implying that we are actually brains in vats, with electrodes connected to us by some scientist, and a computer is sending impulses to mimic all sensory inputs — thus creating our reality.

            Mimicking the inputs is not sufficient. You also have to deal with the outputs, and have the “right” relation between inputs and outputs. Berkeley’s mistake (with his idealism) was to assume that the inputs are sufficient. I’m skeptical that it could ever be done. I think it requires that the scientist be omnipotent.

            Theoretically it could be accomplished and we’d never know that the source of our reality is some brain in a vat.

            Perhaps in your theory, but not in mine. That’s where we disagree.


            Look at it this way, if all we had was inputs that were fabricated by the scientist/computer, but the brain was still allowed to function as ours does, then we would still be commanding the outputs. The computer would see these impulses coming from the brain’s motor cortex and would create the sensation and appearance of those muscles moving. All the computer would have to do is send us inputs and allow our brain to function as it normally would, in terms of computation as well as the resultant outputs desired — but rather than those outputs going to muscles etc., they would be translated into sensation, proprioception, the visual appearance of muscle movements, etc. This would create the illusion and would not require anything other than what I described (if you disagree then tell me why, because I can’t see anything else required, based on the assumption that the brain is the source of all our perceptions). All we’d need is a computer able to handle the data necessary to mimic the sensory organs’ inputs and then translate the brain’s naturally occurring motor-cortex outputs into more sensory inputs (the feeling and appearance of muscle movements, talking, burning, itching, etc.); a toy sketch of this loop appears below.

            An alternative scenario is that we’re dreaming and the scientist is able to control those dreams (prolong them and make them appear to be as real as possible), so we’d still feel that we have conscious will, etc., but it would only be a “mental world”, not a physical world. It doesn’t matter whether we can scientifically explain how the scientist would do it; a basic idea is enough (a computer and scientist controlling the brain’s perception by mimicking impulses, rather than the sensory organs being the source of those impulses and the muscles carrying out the outputs).
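            Purely as an illustration of the loop just described (a sketch under my own assumptions, not anyone’s real proposal), the idea can be put in a few lines; the brain is treated as a black-box function, and every name here is invented:

                # The vat loop: fabricated inputs go in, motor commands come
                # out, and the commands are translated back into the felt
                # sensations of having moved -- they never reach real muscles.
                def run_vat_loop(brain, fabricate_input, steps=3):
                    percept = fabricate_input(None)       # initial fabricated scene
                    for _ in range(steps):
                        motor_command = brain(percept)    # brain functions normally
                        percept = fabricate_input(motor_command)
                    return percept

                # Trivial stand-ins, just to make the loop runnable:
                toy_brain = lambda percept: f"reach toward {percept}"
                toy_world = lambda cmd: "a red rose" if cmd is None else f"sensation of '{cmd}'"
                print(run_vat_loop(toy_brain, toy_world))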


          • Look at it this way, if all we had was inputs that were fabricated by the scientist/computer, but the brain was still allowed to function as ours does, then we would still be commanding the outputs. The computer would see these impulses coming from the brain’s motor cortex and would create the sensation and appearance of those muscles moving.

            The computer would have to simulate the entire universe. It is absurdly implausible.


          • “The computer would have to simulate the entire universe. It is absurdly implausible.”

            I don’t think you’ve thought this through as thoroughly as I have. It would not have to simulate the entire universe. It would only have to simulate one person’s experience (yours) — because there’s no way of saying that every human is actually a brain in a vat; the only people that we (you or I) can say even exist are you and I. For our purposes, we only have to assume that we ourselves exist, so in your case, the only data the computer needs is YOUR simulation. I guarantee that you haven’t experienced a universe’s worth of data.

            It would be clever for the scientist to create laws of physics (like gravity, even if that doesn’t exist in her world) and anatomical limitations (we can only move so fast from place to place) within our “virtual reality”, to constrain us to this virtual planet and keep us from getting out too far or experiencing too many things too quickly. This would minimize the processing capability and data needed for the simulation: the scientist would only have to worry about simulating YOUR experience, and thus only the “areas in virtual space” that you’d encounter or come across (which would be on average about one six-billionth of the entire human virtual space encountered, let alone the entire universe). That would only require a finite amount of data (an extremely small amount compared to the entire-universe simulation that you suggest is needed). Your assumption is based on the premise that EVERYTHING that EVERY human thinks exists would have to be simulated. This is not the case, as there is only ONE simulation needed (yours). For all you know, every other human you’ve encountered is just fabricated via the simulation. Only when you “explore new areas” within this virtual world would additional data be needed, and with the scientist controlling your sensory inputs, this could be minimized by constraining where you go (to use an extreme example, the scientist could simulate you locked up in jail, which would be incredibly easy to simulate, as you wouldn’t go very far). Less extreme examples could be used to constrain you, just as was the case in “The Truman Show”, if you’ve ever seen that film. If your sensory inputs are controlled and there’s enough knowledge of your brain chemistry (this brain in a vat), then free will is negated in several ways, and this further constrains the possibilities needed in the simulation (only one is needed in principle).

            I have pondered this idea quite a bit and realize that it’s not nearly as implausible as you think it is. It would require a minuscule amount of data relative to the entire universe (that we assume exists), and by knowing the causality of how your brain would respond to specific inputs, your experience could be steered down one possibility without you knowing it — and thus even less data would be needed for the simulation itself. Even if the computer DIDN’T know for sure how your brain would respond, it could easily simulate something drastic like you fainting or slipping into a coma (for as long as needed) to solve the problem, and when you “wake up”, time would continue right where it left off, fulfilling the illusion of your fainting. It is moments like these (fainting, sleeping, coma, etc.) within this virtual reality that would allow the scientist to fix any problem arising from uncertainty about how your brain would respond to various inputs. The scientist would presumably have an educated guess for how your brain would respond to most inputs, so these animation-suspension tools would only be used as needed to fill in the gaps, while still maintaining the consistency, exclusivity, and priority that Wegner proposed (in his “Theory of Apparent Mental Causation”) were needed to fulfill the illusion of free will.
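            As a hedged sketch of this “simulate only what is observed” claim (a toy model under my assumptions, not evidence of feasibility), the bookkeeping could look like this:

                # Generate a region of the virtual world only when the one
                # subject first observes it, and cache it so later visits
                # stay consistent; nothing else is ever simulated.
                class LazyWorld:
                    def __init__(self, generate_region):
                        self.generate_region = generate_region
                        self.cache = {}  # region id -> generated content

                    def observe(self, region_id):
                        if region_id not in self.cache:
                            self.cache[region_id] = self.generate_region(region_id)
                        return self.cache[region_id]

                world = LazyWorld(lambda rid: f"details of {rid}")
                world.observe("jail cell")   # only this region is generated...
                world.observe("jail cell")   # ...and revisits are consistent
                print(len(world.cache))      # 1 region, not 'the entire universe'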


          • It would not have to simulate the entire universe. It would only have to simulate one person’s experience (yours) — because there’s no way of saying that every human is actually a brain in a vat; the only people that we (you or I) can say even exist are you and I.

            But then every person that is part of my experience is part of the simulation. You are underestimating what is required.

            And if the entire universe of my experience is a simulation, then why consider it a simulation? Why not just say that’s the universe that I live in?

            The problem with speculative metaphysical thought experiments is that they are pointless.


          • “But then every person that is part of my experience is part of the simulation. You are underestimating what is required.”

            Not at all. You are overestimating what is required. You are assuming that every person in this potential simulation is another brain in a vat. That would be even less plausible and would require even more data to accomplish. You only have to assume that you are the only brain in a vat and every “person that is part of YOUR experience is part of the simulation”.

            “And if the entire universe of my experience is a simulation, then why consider it a simulation?”

            Because there would be an actual physical universe that exists in the scientist’s reality. Her world would be the actual universe, and yours would be fabricated by a human, computer program, etc.

            “Why not just say that’s the universe that I live in?”

            You could say that, but you would be using the term “universe” in a way outside its normal use. You would be using the term just as someone who is daydreaming or playing a video game is “in their own universe”. I think that it would be similar to a figure of speech rather than what we mean when we speak of our actual physical universe. If you want to say that any virtual reality that we currently have physical access to in our universe (video games, etc.) is also considered its own universe, then you’d be correct. Otherwise, you’d have to better define the term “universe”. Can a universe lie inside another universe? If not, then you’d be wrong. I would think that a universe lying inside another universe would mean that the former isn’t actually a universe. It would be more like a dimension, galaxy, or subspace within the real universe — since “universe” seems to imply “the totality of everything that exists”.

            “The problem with speculative metaphysical thought experiments is that they are pointless.”

            I don’t think they are. They are a good brain exercise and allow you to challenge certain ideas, just as I’m challenging your idea that my scenario is “absurdly implausible”. You hadn’t thought about it as thoroughly as I have, which means new ideas have been brought under discussion. It’s far from pointless. It may not give answers you like or think have any practical use, but such thought experiments still have some use nevertheless — even if nothing more than a brain exercise. Exercising the brain is of some use, I believe. Perhaps you agree with that as well?


          • You are assuming that every person in this potential simulation is another brain in a vat.

            No, I am not assuming that.

            Since my experience only comes via the simulation, everything in my experience would have to be part of the simulation. That includes the people of my experience. But there’s no requirement that those people be brains in vats. The only requirement is that they be part of the simulation. They could be entirely fictional characters generated by the simulation.

            Because there would be an actual physical universe that exists in the scientist’s reality.

            But that is not relevant to the “brain in vat” thought experiment. That actual physical universe becomes part of the unknowable metaphysics of the world of the “brain in vat”.


            “No, I am not assuming that. Since my experience only comes via the simulation, everything in my experience would have to be part of the simulation. That includes the people of my experience. But there’s no requirement that those people be brains in vats. The only requirement is that they be part of the simulation. They could be entirely fictional characters generated by the simulation.”

            My main point is that it doesn’t matter (as much as you think it does) that there are people in your simulated experience. They aren’t going to require as much data as you do, as they could be fictional, as you mentioned. All they need to do is fulfill the illusion that they are separate people with their own thoughts, etc. They don’t need any thoughts of their own. They only need to give certain responses based on your interaction with “them”, and appear to fulfill certain tasks within the virtual world. This could be accomplished in a number of ways. Regardless, your claim that “The computer would have to simulate the entire universe. It is absurdly implausible.” is simply incorrect. It would require many orders of magnitude less data than you claim, and with certain constraints put into place, the amount of data needed is even less.


  4. I should stipulate that in the case of the BIV scenario, since it is in the realm of solipsism, there is one justified belief — and it is in fact 100% true — and that is the “cogito ergo sum”: “I think therefore I am”. If we consider this to be knowledge, and we consider any other example of what we also consider to be knowledge, then it is apparent that we have a mixture of knowledge which is absolutely certain and knowledge which isn’t. I know most do not define knowledge as being absolutely certain, but in our case we have one thing that is absolutely certain (perhaps we can call that “truth”) and everything else, which isn’t 100% certain (perhaps we can call that knowledge, after further clarification in defining it, which remains the task at hand).


  5. Three escapes from this potential difficulty suggest themselves. One is to endorse a version of “conceptual [or functional] role semantics” according to which the representational status and content of a mental state is reducible just to facts about what is apt to cause and to be caused by the mental state in question—that is, to deny the relevance of remote evolutionary or learning history to mental representation as not part of a proper functional characterization (e.g., Harman 1973, 1987). Another is to accept that causal role determines the representational status of a mental state (i.e., that it is a representation) but does not fully specify representational content (i.e. how that representation represents things as being); but this seems to involve abandoning full-blown functionalism. A third is to interpret more liberally what it is for a mental state to be “typically caused” (or perhaps “normally caused”) by some event or state of affairs: Perhaps it is enough that in the young organism, or its evolutionary ancestors, mental states of that sort were caused in a particular way, or the system was selected to be responsive to certain sorts of environmental factors. Such claims may be more easily reconcilable with certain canonical statements of functionalism (such as Lewis 1980) than with others (such as Putnam 1975). Although one might suspect that most functionalist representationalists would choose the last of these three options, the issue has not been as fully discussed as it should be.

