On truth (4): Truth and language

by Neil Rickert

In the previous post in this series, I suggested that we don’t need a concept of truth for representations that we ourselves form.  Somebody living alone as a hermit would not need to be concerned about truth.  Any method of forming a useful representation will be based on some correspondence between reality and the representation so formed.  And as long as we interpret a representation using the same correspondence, we will be fine.  That consistency between forming the representation and interpreting that representation breaks down as soon as we start to use a public language.

Within a language community, one person forms a representation of reality in linguistic form perhaps by describing some event.  Then other persons have to interpret that representation.

There is no possibility of perfect consistency between describer and interpreter.  The description is made using one person’s perceptual system, and then listeners have to interpret that description based on how they connect that description to their own perception of reality.

In order for language to work at all, there needs to be at least an approximate consistency between describer and listener.  When there is insufficient consistency between describer and listener, this shows up as a disagreement.  And we use our concept of truth to express our disagreement.  In a language community we need truth for two main purposes.  We need it to express disagreement, when there is a failure of consistency between describer and listener.  And we need it, typically by arguing over truth, to help us align our correspondences with one another.

We can think of a language community as somehow adopting a system of conventions that express the correspondence to be used between reality and language descriptions of reality.  This system of conventions becomes, in effect, the system of meaning conventions of the community.  The conventions used are mostly informal, so cannot be written down.  Reaching complete agreement on the meaning conventions is likely to be impossible, because of the difficulties that Quine explains in his argument on the inscrutability of reference.  Our arguments over the truth of description are part of how we do come to at least an approximate agreement.

21 Responses to “On truth (4): Truth and language”

  1. I’m inclined to think that even the hermit will need a notion of truth. Perhaps this is easiest to see if we look at it from the flip-side: She’s going to be in trouble if she doesn’t have a notion of falsity. She’d better be able to say, “Ooops! I was wrong about that!” and revise her beliefs. So if she’s got an account of false beliefs, presumably she’s going to have an account of true beliefs as well.

    I’ve long been inclined to view truth as nothing but successful reference. Falsity is an attempt at reference that fails. (Going down this road quickly lands one in debates about whether concepts or propositional attitudes are more fundamental, but that’s a price I’m willing to pay.)

    I’m now wondering whether that’s enough to capture the iterative aspect of truth that you’re pointing to here (and that comes out in the disquotational account). I’m inclined to think it might; after all, an animal won’t generally be thinking about whether its neural states refer to the world; it will just act on those representations. But once it considers the belief itself, it makes sense to ask whether it succeeds or fails to refer (i.e., whether it is true or false).


    • If you put that in terms of beliefs, then you are still connecting it to language. Likewise, you need a notion of truth for logic, which also connects it with language.

      You do make a good point that my example of a hermit was flawed, since a hermit has usually been in contact with society and has a language. Perhaps I should have used a feral child as the example.


  2. It’s a mistake (and a fairly common one) to think that all beliefs are linguistic.

    At their core, beliefs are mental states that can be true or false and that have aspectual shape.

    It seems obvious to me that dogs (for example) have beliefs, but they have no language. Dogs can be right or wrong about how the world is. (My favorite example is the time I came home early and my dog was barking at me as I came up to the door because he thought I was the mail carrier. You should have seen the look on his face when he saw it was me opening the door. He definitely had a false belief.)

    I claim that the same point will hold for the feral child. She might not develop the concept of truth and falsity, but she would certainly find it a useful concept even in the absence of a linguistic community.


    • It’s a mistake (and a fairly common one) to think that all beliefs are linguistic.

      It is hard for me to come up with anything that is not a representation of some kind, that I could consider to be a belief.

      At their core, beliefs are mental states that can be true or false and that have aspectual shape.

      I’m not into reading minds. I am inclined to think that the “belief story” of cognition is mostly nonsense. At my core, I have very few beliefs. I do frequently acquire temporary beliefs for particular purposes, but those quickly disappear when the need has passed. My knowledge is mostly in the form of abilities, and I cannot find a way of applying true/false criteria to those abilities.

      Obviously, we look at this very differently. My way of looking at it probably reflects that I am a mathematician.


      • It is hard for me to come up with anything that is not a representation of some kind, that I could consider to be a belief.

        Of course beliefs are representations, but that doesn’t mean they have to be formulated in language. Close your eyes and imagine the contents of the room you’re in. You typically won’t do this by having an inner dialogue saying “There’s a chair next to the wall; the desk is under the window . . . ” and so on. Instead you’ll have non-linguistic (quasi-visual, in some cases) representations.

        It seems obvious to me that representations like these can be true or false. If you try to walk across the room and you run into a glass wall that you never suspected was there, then you were wrong about the contents of the room (even though you never said to yourself — in so many words — “There are no solid objects between me and that door over there”).

        But here I’m in the philosophical minority, so perhaps it’s best to set this to one side.

        However, your next claim does run against pretty basic philosophical orthodoxy:

        I have very few beliefs. . . . My knowledge is mostly in the form of abilities, and I cannot find a way of applying true/false criteria to those abilities.

        I’m inclined to think that you’re using the word “belief” in a rather unusual way here.

        Most epistemologists are going to say that knowledge is a subset of beliefs. So anything you know is also something you believe. Thus you believe that your key fits the lock, you know what you had for dinner last night, you know your best friend’s name, and so on, and so on — indefinitely.

        Your appeal to non-representational abilities reminds me of John Searle’s account of the “background” of intentionality. He argues that a background of abilities is required for a belief to be intentional (i.e., to be about something). He discusses this in _The Rediscovery of the Mind_ if I recall.

        But Searle is still going to insist that you have lots and lots of beliefs, and that every instance of knowledge is itself an instance of belief. Indeed only a very hard-core eliminative materialist would deny this, it seems to me (and I suspect that even Dennett and the Churchlands wouldn’t go that far).

        Would you deny that creationists have lots of false beliefs about evolution, but lots of true beliefs about the contents of their houses? It seems to me that if you deny this you’re not using the words the same way that I (and most others) do.


      • It seems obvious to me that representations like these can be true or false.

        I agree with that, but I think it misses the point. When I do that sort of visualization, I am not normally asking myself whether it is true. So it seems to me that my ability to use that visualization does not depend on my having a notion of truth.

        I’m inclined to think that you’re using the word “belief” in a rather unusual way here.

        By contrast, I tend to think that philosophers use it in an unusual way. I’ve been involved in these kinds of discussion before. The scientists and mathematicians generally deny that knowledge is in the form of beliefs. Perhaps this disagreement is part of the “Two Cultures” division.

        But Searle is still going to insist that you have lots and lots of beliefs, and that every instance of knowledge is itself an instance of belief.

        I am inclined to doubt that. Searle discusses learning to ski on around page 150 of “Intentionality.” On his account we might begin with beliefs. But as we learn, we develop causal connections that make those beliefs superfluous. He is explicitly arguing against the view that the beliefs remain but become unconscious beliefs.

        Would you deny that creationists have lots of false beliefs about evolution, but lots of true beliefs about the contents of their houses?

        I agree that they have lots of false beliefs about evolution. I deny that they have lots of true beliefs about the contents of their houses. They might have a few, such as “the sugar is low; I must remember to buy some.” But most of their knowledge about their houses is not in the form of beliefs.


  3. “So it seems to me that my ability to use that visualization does not depend on my having a notion of truth.” – Maybe you don’t have the language notion of truth, but for animals I’d still say there is some representation of truth – a correspondence. Physicalist’s dog example seems a good one.

    Following on from my comment on part (3) on religions: they seem to be able to maintain multiple ‘truths’, across individuals in the same language community, and in the same individual over short periods, to the extent that they can be observed expressing multiple ‘truths’ concurrently.

    Could you expand on “But most of their knowledge about their houses is not in the form of beliefs.”


    • Physicalist’s dog example seems a good one.

      One problem here is that we don’t really have a good account of what a belief is. Some people say that a belief is a disposition to behave. And, sure, dogs are disposed to behave in various ways.

      When we get to “truth” it seems the picture is a bit different. As normally understood, truth is a property or condition of a proposition. So unless the dog is consciously posing propositions and assessing the truth of those propositions, I don’t see that it is using truth. It might be describable as if it is using truth. But that’s a different matter entirely, for that only depends on our having a notion of truth as we form descriptions. For that, there is no requirement that the dog actually have a notion of truth.


    • Could you expand on “But most of their knowledge about their houses is not in the form of beliefs.”

      I see knowledge as mostly being part of what Searle refers to as “the background”, our abilities to have beliefs and perceptions. I’m also reasonably comfortable with how CI Lewis viewed knowledge, as shown here. For that matter, on my reading of John Locke, he viewed knowledge as conceptual rather than as representational.

      As an educator (mathematics and computer science), it is my experience that the students who try to master the subject by acquiring beliefs will do poorly, while the students who attempt to master concepts and methods will do well.


  4. “When we get to “truth” it seems the picture is a bit different. As normally understood, truth is a property or condition of a proposition.”

    Only in the context of propositions, which are language constructs. At a more fundamental level of information we have true/false, 0/1. But ‘truth’ in the context of propositions is still a logical value, a piece of information – the proposition is true, or not.

    “I don’t see that it is using truth.”

    The dog isn’t using propositional language, but the dog’s brain is busy using logic all the way down. It is we who classify complex use of logic in more vague terms and use its basic principles at higher levels, such as in propositional logic. It’s still logic. Logical syllogisms can be reduced to combinatorial logic terms. It’s the messiness of the language that causes the confusion.
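    The reduction claim can be sketched concretely. Here is a minimal check (a Python sketch of my own, not from the discussion) that the classic “Barbara” syllogism – all P are Q, all Q are R, therefore all P are R – is a combinatorial tautology once inclusion is read as material implication:

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

# "Barbara": ((p -> q) and (q -> r)) -> (p -> r).
# Check every row of the truth table -- pure combinatorial logic.
barbara = all(
    implies(implies(p, q) and implies(q, r), implies(p, r))
    for p, q, r in product([False, True], repeat=3)
)
print(barbara)  # True
```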

    This comes down to the messiness of classification, though set theory gives a pretty logical account of classification. But then try this…

    We are humans, Homo sapiens. Other extinct hominids are not classified as Homo sapiens. But, using the idea proposed by Richard Dawkins, imagine a chain of human females holding hands, daughter, to mother, to grandmother, …, all the way back through evolution to one of our non-human ancestors. Every one of them was able to give birth to their own daughter, so at no time was one mother of a different species than her own daughter. And yet, according to our classification, our set theory, our logic, we are human, but some distant ancestor is not. Every change in DNA involved some minute distinction – either the genes were identical or they were not. At a finer scale, this base changed or it did not. Logic all the way down; distinction, accumulating into measurable difference.

    “For that, there is no requirement that the dog actually have a notion of truth.”

    Didn’t say it did. But its brain performs logical operations, whether the dog knows it or not. Yes, this notion of logic too is a human construct. But so what? If we’re not careful we become as paranoid as Wittgenstein and fear every use of language. It seems enough to say that our abstract notion of logic is merely how our brain patterns align themselves in some rather vague way with patterns it sees in nature.

    Remove language, even of logic, and just think of it as biological soup that tends to take on a form that has some correspondence with the outside world. But it’s doing this continuously throughout the life of the individual. And of course it’s a lot more structured than a soup, and yet not so structured that it’s rigid. Many neurons may use many of their synapses in constructing a single concept, and a single neuron may be co-opted in the formation of many concepts. It’s the complexity that makes it hard to discern how the minute details of physics, chemistry, biology (the reductionist work) build to form the conceptualising self-aware system that is the brain, that mirrors so much of its environment in internal patterns. It just gets on with it with or without our interest – as it does in what we consider non-self-aware animals. Our conscious self-aware bit, sat on top of all this activity, doesn’t really know what’s going on when self-reflecting. It’s the cumulative efforts of many humans over centuries that have started to accumulate an external body of knowledge that we can all dip into that is really exposing what is going on. Personally, left alone without any educational resources, we would know no more than the dog does about how we or it does things – though unlike the dog we might, as our ancestors did, start to wonder about it.


  5. Thanks for the link to Lewis. I’m in general agreement with the Pragmatists on some of their philosophy, but they lacked a lot of what we now know of the brain, so to be fair to them they couldn’t have put any of the Kandel (same link as on another comment) stuff to use to figure out what knowledge actually is when encoded in the brain.

    Searle’s point too is about the wider collection of bits of knowledge that we have access to. But note that it isn’t reliable. We see this in examinations – in the exam we fail to answer a question, only to come out of the exam to suddenly remember the answer. This is the messy complexity of the human brain. So, there’s this conflation of meanings of ‘knowledge’: the collection of information that we have, within us, or external to us, whether accessible or not, and the detailed aspects of individual bits of information and how they are transformed from patterns of experience to patterns of brain states.

    Here’s an example of how messy it is. Suppose this is some capacity of part of the brain, some simplistic representation of possible synapses:
    _ _ _ _ _ _ _ _ _ _ _ _

    I want to store some complex facts, of which these are only a part:

    _ 2 _ 2 _ 2 _ 2 _ 2 _ 2

    _ _ 3 _ _ 3 _ _ 3 _ _ 3

    Together in the brain, applied in order

    _ 2 3 2 _ 3 _ 2 3 2 _ 3

    Both bits of information are present to some extent. Another point of memory is that concepts are often re-constructed as they are remembered (it’s not necessary that the complete encoded memory exist in the brain), but let that go for now.

    I recall the first fact, as one of these:

    _ 2 _ 2 _ 2 _ 2 _ 2 _ 2 – perfect recall
    (with reconstruction from redundancy elsewhere)
    _ 2 _ 2 _ 3 _ 2 _ 2 _ 3 – false bits added to my memory
    _ 2 _ 2 _ _ _ 2 _ 2 _ _ – gaps in my memory
    _ 2 _ 2 _ 3 _ 2 _ 2 _ _ – gaps and additions in my memory

    OK, so I don’t have perfect recall. Who does? Do we really think our memory is that photographic?

    But note that any one component of it is digital at heart (here 4-bit binary would deal with each location).
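    The overlay and the lossy recall above can be run directly. A toy Python model follows (my own simplification of the diagrams, not a claim about real synapses):

```python
EMPTY = "_"

def store(capacity, pattern):
    """Overwrite occupied slots of the shared capacity with the pattern."""
    return [p if p != EMPTY else c for c, p in zip(capacity, pattern)]

def recall(combined, symbol):
    """Naive recall: keep only the slots still holding the wanted symbol."""
    return [c if c == symbol else EMPTY for c in combined]

capacity = [EMPTY] * 12
fact_2 = list("_2_2_2_2_2_2")
fact_3 = list("__3__3__3__3")

combined = store(store(capacity, fact_2), fact_3)
print("".join(combined))               # _232_3_232_3  (both facts overlaid)
print("".join(recall(combined, "2")))  # _2_2___2_2__  (gaps where '3' overwrote)
```

The recall of fact 2 comes back with gaps, exactly as in the third variant above.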

    So, for Searle to talk of knowledge in some vague high level sense doesn’t do justice to what is happening in a physical brain. It’s this disconnect between the higher level language and philosophical concepts that we use and the lower level activity of the brain that makes us make mistakes, like saying that the anthropomorphic “we” make decisions, as if there is some mysterious dualism or soul, or even some corporeal ‘self’ that exists somehow that is not a very physical implementation in the brain. Ultimately it boils down to brain bits (bits in the sense of part, and bits in the sense of binary bits, on whatever scale of implementation they exist).


  6. Take this from the Lewis article:

    “While this meaning is independent of whether or not you are opening a can of cat food her expectation will be confirmed if the can contains cat food and disconfirmed if it doesn’t.

    Meaning in this sense of empirical significance could only be available to a creature who can act in anticipation of events to be realized or avoided. Accordingly, the possible is epistemologically prior to the actual. Only an agent, for whom experience could have anticipatory significance, could have a concept of objective reality as that which is possible to verify or change.”

    First, ‘meaning’ is already too high a level term. Just think of various types of animal.

    1) Simple passive responsive. Detects only food. Doesn’t even acknowledge the significance of a tin being opened. You drop the food on the floor. It bumps into the food and consumes it. It need only have chemical sensors that respond to food and cause chemical-physical actions. Bacteria work like this. They don’t need a central nervous system.

    2) A central nervous system that can use various senses – e.g. light – to recognise food at a distance and move towards it. Rudimentary expectation and action. Assuming it retains earlier chemical triggers, when it touches food it eats it. Turn off the lights and this creature reduces, effectively, to (1).

    2b) Same as (2) but, lights on, drop fake food. It sees the food and moves towards it, but on touching doesn’t recognise it as food. Now this is where the nervous system gets interesting. Does it just move on? Or is there a deadlocked action – sight causes it to move to the fake food, taste makes it move on. The animal is trapped. What if one of its companions has a nervous system in which movement from hunger dominates movement from sight? This second animal has strong hunger pangs driving its motion that make it move as long as it is not consuming food. The sight mechanism is overridden and the animal lives to breed again. Natural selection acting to drive the evolution of very simple logical systems.

    All simplistic mechanisms so far.

    3) Anticipation of unseen food. The animal can recognise containers. But in nature not all containers contain the food one expects – they are often already empty, or contain bad fruit. Natural selection favours complex systems that can anticipate content and yet deal with and move on from the failure of an anticipation. All this is possible without any higher level cognition associated with language propositions or anything that sophisticated. The cat just anticipates and learns to cope with anticipation and failure.

    Anecdotally, my cats tell me when they are hungry. I’ve just been away over Christmas and a neighbour has fed my cats. First day back my neighbour came over to see me, to ask about the holiday and so on. My cats went straight to her. The hungry one, who always wants food and who usually sits beside me while I work, hadn’t seen me for two weeks. Did I get a greeting? No way. He went straight onto my neighbour’s lap. No complex human sentiment of family, or who’s his best friend or any of that crap. He’s a cat. He wants food. Which other creature fed him well most recently? That’s how sophisticated a cat is.

    We are fooled by the levels of extra abstraction that we have built on top of natural behaviour. It doesn’t mean that we don’t rely on this basic behaviour ourselves. Our higher levels of cognition help us to plan further ahead and deal with more complexity than all other animals we observe. But let’s not fool ourselves that this is not based on the most basic principles of the laws of nature.


  7. “That’s what you think the dog’s brain is doing. I happen to disagree.”

    So, what do you think is going on in a dog’s brain – or in any brain for that matter? What mechanisms do you think are employed, at any level, that cannot be reduced to the dynamic matter in action that is forming components like neurons and where those neurons have ongoing processes that are decision making sub-systems?

    “How is that even possible?” – I’m not sure what you are questioning. I’m defining some simple animal that can detect food but can’t anticipate it in the sense that a cat can – as described in the Lewis article.

    “Why is the dog detecting food, as distinct from detecting a 0/1 condition on a neuron?”

    Again, I’m not sure what you’re questioning. When a dog (or cat) detects food, say by sight, its brain is recognising a pattern. How does that not involve the operation of many neurons? How does each neuron fire or not (i.e. a logical detection) according to the weighted inputs? Where do you think the disconnect is between massively complex cognitive operations and logical decisions at the level of the neuron as a whole or any of its components? At what point does the basic mush of information processing low down change into something different at higher levels?
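    The weighted-input picture can be made concrete with the standard McCulloch–Pitts-style threshold unit. A sketch in Python; the particular weights and threshold are made up for illustration:

```python
def fires(inputs, weights, threshold):
    """A unit 'fires' (outputs 1) when the weighted sum of its inputs reaches
    the threshold -- a logical detection built out of arithmetic."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# An AND-like detector: both inputs must be active for the unit to fire.
assert fires([1, 1], [0.6, 0.6], 1.0) == 1
assert fires([1, 0], [0.6, 0.6], 1.0) == 0
assert fires([0, 0], [0.6, 0.6], 1.0) == 0
```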


  8. So, what do you think is going on in a dog’s brain – or in any brain for that matter?

    Measurement.

    Since you don’t understand the difference between logic and measurement, I guess you won’t see that as making any difference.


  9. Logic is a component of measurement. When you measure you compare. Comparison is a logical operation. In what way is that wrong?


  10. Even that’s a logical operation. They just don’t think of it as that. Give me a comparison measurement that doesn’t have a logical operation at its core.
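    One way to cash that out: a 4-bit “greater than” built purely from boolean operations on individual bits – the textbook magnitude-comparator construction – checked here (a Python sketch of my own) against ordinary numeric comparison:

```python
def bits(n, width=4):
    """Most-significant-bit-first list of the bits of n."""
    return [(n >> i) & 1 for i in reversed(range(width))]

def gt(a, b):
    """a > b using only NOT (1 - x), AND (&), OR (|), XOR (^) on single bits."""
    result, equal_so_far = 0, 1
    for x, y in zip(bits(a), bits(b)):
        result = result | (equal_so_far & x & (1 - y))  # first difference: x=1, y=0
        equal_so_far = equal_so_far & (1 - (x ^ y))
    return result

# Agrees with numeric comparison on all 4-bit pairs.
assert all(gt(a, b) == (1 if a > b else 0) for a in range(16) for b in range(16))
```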

