In earlier posts, I have preferred the Shannon notion of information, according to which information is a sequence of symbols. And I have emphasized that symbols are abstract objects. The symbols are usually considered to be intentional objects, because it is only on account of our intentions that we consider them to be symbols.
In this post, I want to relate the idea of a symbol to that of a category. I’ll start by assuming that readers have at least an informal idea of what we mean by a category.
Symbols
Mathematics is one of the disciplines that we think of as symbolic. And the simplest symbols used in mathematics are the ones that we call numerals or numbers. Physicalists and materialists sometimes like to say that numbers are ink marks on paper. But that doesn’t actually work very well. For example, a “3” printed in one typeface and a “3” printed in another will be seen as the same symbol, the number three, even though they are differently shaped marks (whether ink marks or screen markings on your monitor). And it is not just those two shapes. There are many different typographic fonts. And then there are handwritten numbers. If we think of all of the different possible ink marks that we would recognize as the number three, then we can consider all of those possible marks to constitute a category. When we look at one of those marks, our first step is to recognize which category it belongs to: the category of marks that we will consider to be a three, or the category of marks that we will consider to be some other symbol.
The basic idea here is that what we consider a symbol is not really a simple physical mark, but a category of such physical marks. Our first step in recognizing a symbol involves categorization, the identification of the appropriate category.
These days we do a lot of our computation with electronic computers. And those computers are also engaged in categorization. We might describe a logic gate as having input symbols (binary 0 or 1) and producing output symbols. But physically, what the computer does is electrical. If our logic chips are made with CMOS technology, using a 5 V power supply, then the electronics is designed to treat an input between 0 V and 2.5 V as a binary 0, and an input between 2.5 V and 5 V as a binary 1. So there is a range of inputs that would be considered a 0, and another range that would be considered a 1. The electronic device has to decide which of those ranges applies. And that, in effect, is categorization. The electronic device is deciding to which category, binary 0 or binary 1, the input belongs. And the action of the logic gate then depends on how it has categorized its input. As you might guess, there can be some ambiguity about whether a particular input is a binary 0 or a binary 1. However, the electronic computer is cleverly designed so that important decisions are only made at moments when there is no ambiguity.
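To make the idea concrete, here is a minimal sketch of that kind of categorization, written in Python. The function name categorize_voltage and the single 2.5 V threshold are just my illustrative assumptions, taken from the simplified description above; real chips specify noise margins rather than a single cut-off.

    # A toy model of the categorization a logic gate performs: map a physical
    # voltage onto the abstract symbol 0 or 1. The single 2.5 V threshold is
    # the simplification used in the text, not a real chip specification.
    def categorize_voltage(volts: float, threshold: float = 2.5) -> int:
        return 1 if volts >= threshold else 0

    # Inputs near the threshold are the ambiguous cases: a tiny change in
    # voltage flips the category.
    for v in (0.3, 1.2, 2.49, 2.51, 4.8):
        print(v, "volts is read as binary", categorize_voltage(v))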
Much the same is true for the magnetic encodings on our disk drives. One of the steps that the disk controller must take, when reading a disk record, is to categorize small segments of the magnetic encoding, deciding whether to treat them as 0 symbols or 1 symbols.
Moving away from computers, we can consider the stop signs on our roads. There is a variety of different shapes and markings that we will consider to be a stop sign. Our first step, on seeing a road sign, is to categorize it as a STOP sign or as some other traffic sign.
The basic idea that I am arguing, then, is that symbols result from categorization, so that symbols are categories. A symbol might be a category of ink marks on paper, but it could not be a simple (i.e. uncategorized) ink mark on paper.
Categories and categorization
We started by looking informally at how we use categorization to identify symbols. Let’s now turn our attention to what categorization amounts to. Unfortunately, the traditional views of categorization are somewhat confusing. It is often said that we group things together because they are similar, and that doing such grouping is categorization. However, it is also often said that categorization is cognitively important. Those two views seem to contradict one another. Determining similarity is already a complex cognitive operation. So how can grouping based on similarity be an important part of cognition, if it assumes that cognitive abilities are already present?
An alternative account of categorization is that it is carving up the world at its seams. That’s rather better. But it is still not quite correct. For the way that we carve the world up into categories often does not depend on anything that could be considered a seam. For example, the categorization of the input to a CMOS logic chip as a binary 0 or a binary 1 is based on a rather arbitrary engineering choice. Different logic chip technologies make different choices about how to carve up the input range.
The view that I want to take is that categorization is carving up the world (or the inputs). But how we do that carving is arbitrary and capricious, based mainly on pragmatic considerations. When we come up with a method of categorizing, we want our categorization to be reasonably reliable. That is to say, if we repeat it many times, we should usually get the same result. But we don’t expect it to be perfect. We recognize that there can be ambiguities in practical circumstances. For example, you might have difficulty deciphering some people’s handwritten “3”.
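As a rough illustration of what “reasonably reliable” means here, the sketch below (my own example, reusing the voltage categorization from earlier) categorizes many noisy readings of the same underlying quantity and reports how often the answers agree. Far from the threshold the categorization is almost perfectly repeatable; near the threshold it becomes ambiguous.

    # Reliability under repetition: categorize many noisy readings of the
    # same underlying value and see how often they agree.
    import random

    def categorize(reading: float, threshold: float = 2.5) -> int:
        return 1 if reading >= threshold else 0

    def reliability(true_value: float, noise: float = 0.05, trials: int = 1000) -> float:
        """Fraction of trials that agree with the most common category."""
        results = [categorize(true_value + random.gauss(0, noise)) for _ in range(trials)]
        most_common = max(set(results), key=results.count)
        return results.count(most_common) / trials

    print(reliability(4.0))    # far from the threshold: essentially always the same answer
    print(reliability(2.51))   # near the threshold: noticeably less reliable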
In addition to being reasonably reliable, the categorization should also be useful. There isn’t much point in categorization if the result is of no use to us. The requirement of usefulness is why I say that our choice of how to categorize is pragmatic.
Computation
It is typical to define computation as operations on symbols. We often define computation in terms of the Turing machine. And a Turing machine is usually defined as an abstract symbolic machine. If thus defined, computation does not require categorization. However, the practical use of computation does. As discussed above, our electronic computers use categorization in their internal operations. We often describe those operations as if they were symbolic, but it is the categorization that allows us to have practical physical computers, rather than just abstract symbolic machines.
In typical use, we feed data into the computer’s inputs. The data itself depends on categorization. Measurement is a kind of categorization. If I say that my desk is 30.5 inches high, then I am saying that its height is somewhere between 30.45 and 30.55 inches, which places it in a category of idealized heights. When we compute with real-world data, categorization is prior to computation.
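In code, that kind of measurement looks like a binning operation. The small sketch below is only an illustration of the point (the function name is mine): reporting a height to the nearest tenth of an inch assigns every actual height within a 0.1-inch-wide bin to the same reported value, which is its category.

    # Measurement as categorization: reporting to the nearest 0.1 inch puts
    # every actual height between 30.45 and 30.55 into the category "30.5".
    def measure_to_tenths(actual_height: float) -> float:
        return round(actual_height, 1)

    for h in (30.46, 30.502, 30.549, 30.58):
        print(h, "inches is reported as", measure_to_tenths(h))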
Hanging chads
During the vote count following the 2000 USA presidential election, there was a lot of talk about “hanging chads.” Some people were puzzled. Mathematics is supposed to be perfect, so how could counting go wrong? If we take counting to be a symbolic operation, then we can see that the problem was with the categorization, rather than with the counting. The first step in examining a ballot was to categorize it according to which candidate should receive the vote. And it was in that categorization step that the problem with hanging chads showed up.
As indicated above, categorization should be reasonably reliable. Normally, the use of punch-card ballots is fine. The categorization is not perfect, but mistakes in categorizing a few votes won’t matter unless the election is very close. The problem in the Florida election tallying was that the vote was very close, and the reliability of the categorization was not quite good enough to handle such a close vote easily.