Symbols and categories

by Neil Rickert

In earlier posts, I have preferred the Shannon notion of information, according to which information is a sequence of symbols.  And I have emphasized that symbols are abstract objects.  The symbols are usually considered to be intentional objects, because it is only on account of our intentions that we consider them to be symbols.

In this post, I want to relate the idea of a symbol to that of a category.  I’ll start by assuming that readers have at least an informal idea of what we mean by a category.

Symbols

Mathematics is one of the disciplines that we think of as symbolic.  And the simplest symbols used in mathematics are the ones that we call numerals or numbers.  Physicalists and materialists sometimes like to say that numbers are ink marks on paper.  But that doesn’t actually work very well.  For example, a “3” printed in one typeface and a “3” printed in another will be seen as the same symbol, the number three, even though they are differently shaped marks (whether ink marks or pixels on your monitor).  And it is not just those two shapes.  There are many different typographic fonts.  And then there are handwritten numbers.  If we think of all of the different possible ink marks that we would recognize as the number three, then we can consider all of those possible marks to constitute a category.  When we look at one of those marks, our first step is to recognize which category it belongs to: the category of marks that we will consider to be a three, or the category of marks that we will consider to be some other symbol.

The basic idea here is that what we consider a symbol is not really a simple physical mark, but a category of such physical marks.  Our first step in recognizing a symbol involves categorization, or identification of the appropriate category.
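
As a small illustration of “one symbol, many marks”, here is a toy Python snippet.  Nothing in the argument depends on it; it just shows that even the computer’s own character tables map visually different glyphs to the same abstract digit:

```python
import unicodedata

# Visually different marks that we nonetheless categorize as "three":
# ASCII, fullwidth, Arabic-Indic, and Devanagari forms.
marks = ["3", "３", "٣", "३"]

for mark in marks:
    # unicodedata.digit() returns the abstract value that a glyph denotes.
    print(repr(mark), "->", unicodedata.digit(mark))
# Every line prints 3: one category, many physical shapes.
```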

These days we do a lot of our computation using our electronic computers.  And those computers are also engaged in categorization.  We might describe a logic gate as having input symbols (binary 0 or 1), and producing output symbols.  But physically, what the computers do is electrical.  If our logic chips are made with CMOS technology, using a 5-volt power supply, then the electronics is designed to treat an input between 0 and 2.5 V as a binary 0, and an input between 2.5 V and 5 V as a binary 1 (real logic families actually specify an undefined guard band in between, but the simplification will do here).  So there is a range of inputs that would be considered a 0, and another range that would be considered a 1.  The electronic device has to decide which of those ranges applies.  And that, in effect, is categorization.  The electronic device is deciding to which category, a binary 0 or a binary 1, the input belongs.  And the action of the logic gate then depends on how it has categorized its input.  As you might guess, there can be some ambiguity in whether a particular input is a binary 0 or a binary 1.  However, the electronic computer is cleverly designed so that important decisions are only made at moments when there is no ambiguity.
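
Here is a toy sketch of that thresholding in Python.  The cut-off at half the supply voltage follows the simplification above, and the guard band is an illustrative stand-in for the undefined region that real logic families specify; none of the numbers come from a datasheet:

```python
def categorize_voltage(volts, vdd=5.0, guard=0.5):
    """Categorize an analog input as a logic 0 or a logic 1.

    Illustrative thresholds only: real CMOS families specify a low
    band, a high band, and an undefined region in between.
    """
    if volts < vdd / 2 - guard:
        return 0      # firmly in the "binary 0" category
    if volts > vdd / 2 + guard:
        return 1      # firmly in the "binary 1" category
    return None       # too close to the boundary to categorize

print(categorize_voltage(0.7))  # -> 0
print(categorize_voltage(4.2))  # -> 1
print(categorize_voltage(2.5))  # -> None (ambiguous input)
```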

Much the same is true for the magnetic encodings on our disk drives.  One of the steps that the disk controller must take, when reading a disk record, is to categorize small segments of magnetic encoding, deciding whether to consider them to be 0 or 1 symbols.

Moving away from computers, we can consider the stop signs on our roads.  There is a variety of shapes and markings that we will accept as a stop sign.  Our first step, on seeing a road sign, is to categorize it as a STOP sign or perhaps as some other traffic sign.

The basic idea that I am arguing, then, is that symbols result from categorization, so that symbols are categories.  A symbol might be a category of ink marks on paper, but it could not be a simple (i.e. uncategorized) ink mark on paper.

Categories and categorization

We started by looking informally at how we use categorization to identify symbols.  Let’s now turn our attention to what categorization amounts to.  Unfortunately, the traditional views of categorization are somewhat confusing.  It is often said that we group things together because they are similar, and that doing such grouping is categorization.  However, it is also often said that categorization is cognitively important.  Those two views seem to contradict each other.  Determining similarity is already a complex cognitive operation.  So how can grouping based on similarity be an important part of cognition, if it presupposes that cognitive abilities are already present?

An alternative account of categorization is that it is carving up the world at its seams.  That’s rather better.  But it is still not quite correct.  For the way that we carve the world up into categories often does not depend on anything that could be considered a seam.  For example, the categorization of the input to a CMOS logic chip into a binary 0 or a binary 1 is really based on a rather arbitrary engineering choice.  Different logic chip technologies make different choices about how to carve up the input range.

The view that I want to take is that categorization is carving up the world (or the inputs).  But how we do that carving is arbitrary and capricious, based mainly on pragmatic considerations.  When we come up with a method of categorizing, we want our categorization to be reasonably reliable.  That is to say, if we repeat it many times, we should usually get the same result.  But we don’t expect it to be perfect.  We recognize that there can be ambiguities in practical circumstances.  For example, you might have difficulty deciphering some people’s handwritten “3”.

In addition to the categorization being reasonably reliable, we also want it to be useful.  There isn’t much point in categorization if the result is of no use to us.  The requirement of usefulness is why I say that our choice of how to categorize is pragmatic.

Computation

It is typical to define computation as operations on symbols.  We often define computation in terms of the Turing machine.  And a Turing machine is usually defined as an abstract symbolic machine.  Defined that way, computation does not require categorization.  However, the practical use of computation does.  As discussed above, our electronic computers use categorization in their internal operations.  We often describe their operation as if it were symbolic, but it is the categorization that allows us to have practical physical computers, rather than just abstract symbolic machines.
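
To make the contrast concrete, here is a minimal sketch of a symbolic machine in Python, a toy in the spirit of a Turing machine rather than a faithful formalization.  Notice that the symbols are simply handed to it; whatever categorization produced them happened before the computation began:

```python
# A purely symbolic machine: states, tape symbols, and a transition table.
# Nothing here categorizes; the symbols on the tape are simply given.

def run_tm(tape, rules, state="start", pos=0):
    tape = list(tape)
    while state != "halt":
        state, write, move = rules[(state, tape[pos])]
        tape[pos] = write
        pos += move
        if pos >= len(tape):   # this toy stops at the end of its input
            break
    return "".join(tape)

# Transition table: (state, symbol read) -> (next state, symbol written, move).
# This particular machine just flips 0s and 1s as it moves right.
rules = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_tm("0110", rules))  # -> 1001
```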

In typical use, we feed data into the computer’s inputs.  The data itself depends on categorization.  Measurement is a kind of categorization.  If I say that my desk is 30.5 inches high, then I am saying that it is between 30.45 and 30.55 inches, which places it in a category of idealized heights.  When we compute with real world data, categorization is prior to computation.
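
A toy sketch of measurement-as-categorization, using the numbers from the desk example:

```python
def measure_height(true_height):
    """Round a physical height to the nearest tenth of an inch.

    Reporting 30.5 really asserts "somewhere between 30.45 and 30.55":
    infinitely many physical heights collapse into one category.
    """
    reported = round(true_height, 1)   # the category label
    return reported, (reported - 0.05, reported + 0.05)

print(measure_height(30.5183))
# -> (30.5, (30.45, 30.55)): the measurement is a categorization
```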

Hanging chads

During the vote count following the 2000 U.S. presidential election, there was a lot of talk about “hanging chads.”  Some people were puzzled.  Mathematics is supposed to be perfect, so how could counting go wrong?  If we take counting to be a symbolic operation, then we can see that the problem was with the categorization, rather than with the counting.  The first step in examining a ballot was to categorize it according to which candidate should receive the vote.  And it was in that categorization step that the problem with hanging chads showed up.

As indicated above, categorization should be reasonably reliable.  Normally, the use of punch card ballots is fine.  The categorization is not perfect.  However, mistakes in categorizing a few votes won’t be important unless the election is very close.  The problem in the Florida election tallying was that the vote was very close, and the reliability of the categorization was not quite good enough to handle such a close vote.
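
To see how closeness interacts with reliability, here is a rough Python simulation.  All of the numbers are illustrative (the 537-vote margin echoes the commonly reported Florida figure, but the error rate and ballot counts are guesses), and the binomial miscount noise is approximated with a normal distribution:

```python
import random
from math import sqrt

def flip_probability(margin, per_side=2_900_000, error_rate=0.01,
                     trials=100_000):
    """Estimate how often ballot-misreads overturn a count.

    Toy numbers, not actual election data.  Each ballot is misread
    with probability error_rate; the net effect on the margin is
    approximated as normal noise around zero.
    """
    sd = sqrt(2 * per_side * error_rate * (1 - error_rate))
    flips = sum(1 for _ in range(trials) if margin + random.gauss(0, sd) < 0)
    return flips / trials

# The same categorization error rate is harmless for a comfortable
# margin and troublesome for a razor-thin one:
print(flip_probability(margin=50_000))  # essentially 0.0
print(flip_probability(margin=537))     # small but clearly nonzero
```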

3 Comments to “Symbols and categories”

  1. “The view that I want to take is that categorization is carving up the world (or the inputs). But how we do that carving is arbitrary and capricious, based mainly on pragmatic considerations. ”

    I share a similar point of view, related to “non-separability” in the universe.  Arbitrary categorization is just another illusory “necessity” we perform because it appears to be the most pragmatic approach in a world seemingly full of separateness.

    Whether people are claiming that solid atoms exist (by categorizing all constituents of matter as “solid atoms”), or that red apples exist, they only do so for pragmatic purposes (e.g. “solid atoms” fit into a particular model which describes the universe in one way).

    A person who has been blind since birth certainly doesn’t categorize apples by color, because color doesn’t even exist in their categorical repertoire (although the category “apples” does).  They “see” things in a different way (so to speak, since they can’t “see” anything at all).  The blind may categorize apples not only by calling them “apples” but by further categorizing each apple by differences in the tactile/haptic, olfactory, gustatory (and echoic) experience.  I think it’s interesting to see how categorization differs for those with fewer physical senses.  Helen Keller, for example, could neither hear nor see, so the only way she could categorize was through tactile, olfactory, and gustatory means.

    If we further remove senses so that only tactile perceptions exist, how would categorization of the apple differ?  My guess is that the tactile categorization would utilize an extremely high resolution, allowing this mostly “senseless” person to categorize things based on extremely slight differences in touch (differences that a person with all of their senses may not even be able to detect), because touch would be the only means for detecting differences from one object to another.  So one person may primarily categorize this object as different from others because it is a fruit, is red in color, tastes “this” sweet, and produces a loud crunch upon biting into it.  The other may primarily categorize the object in terms of how it differs with regard to texture, shape, hardness, elasticity (e.g. tensile strength, malleability, ductility, etc.), center of mass, weight, etc.

    Both of these types of categorization are based on pragmatic purposes (e.g. the desire to differentiate one apple from another, or an apple from another object).  They are based on the limitations in how each person can perceive the object, but also on what they value as being unique or “different enough” (i.e. whether new properties are defined).

    The level of grading within categorization is what I think is the most arbitrary (as opposed to the arbitrary nature of selecting physical representations for the symbols themselves, like an octagonal metal “stop sign”).  Some people may be able to see two roses, each with a different shade of red, but if asked “what’s the difference between the two?”, they may say “I see no difference”.  This may be because they don’t think the two shades are different enough to be called “different”, or it may be for some other reason.  The point is that “how we do that carving is arbitrary and capricious”.  You may think this is so for different reasons, but I agree nevertheless.

    “The problem in the Florida election tallying was that the vote was very close, and the reliability of the categorization was not quite good enough to handle such a close vote.”

    I’d go so far as to say that the validity of the election results was also poor.  Those in a lower socio-economic class may not have nearly the same number of opportunities to vote.  So the election becomes skewed in favor of those who have more opportunities to vote.

    Clearly we use categories so that we can extract a maximum amount of structured information while expending the least (or a relatively small) amount of cognitive effort.  We need to reduce the infinite amount of information out there (albeit not Shannon information) so that cognition and behavior can be mediated in a finite way.  Our brain seems to absolutely love doing this!

    “The basic idea that I am arguing, then, is that symbols result from categorization, so that symbols are categories.”

    I do agree with you that symbols are categories. Any symbol can be categorized based on the properties of its physical representation as well as its intended meaning.

    • I’d go so far as to say that the validity of the election results was also poor.

      I agree with that. However, I was only using the example to illustrate categorization.

      • I know. I was just commenting on the example. For that matter, the electoral college process also utilizes a categorization of citizen votes into groups of electors, which hinders true democracy. If a majority of citizens vote for a president, it shouldn’t matter if they are represented by 49% or less of electoral votes. You threw that example out there, so I’m just having a little fun with it. (:-D)
