A long time ago, as a teenager, I spent time reading about science. One of the things I noticed was that the Newtonians put some effort into giving us a unified system of weights and measures. That is to say, they established standards for the measurements that scientists make, and they attempted to unify those standards internationally.
That always seemed important to me. I took it to be part of how science works, especially since measurement is so central to what scientists do.
When I look at books in the philosophy of science, I do not recall ever seeing the authors mention this standardization of measurement. Perhaps philosophers of science do not see it as important.
Artificial intelligence
One of the things AI researchers have been concerned with is learning. And one of their theories of learning has been based on the physical symbol system hypothesis. The idea seems to be that the world is full of naturally occurring physical symbols, and that an AI system can pick them up and compute with them. I don’t think there are any naturally occurring physical symbols. It seems to me that symbols are human constructs, and that we depend on our own standards for how to use those symbols. So there’s that word “standards” again.
The idea, under that hypothesis, was that these symbols constitute information, and that an information processing system can use them as the basis for artificial intelligence. But I say: “no information without standards.”
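To make that slogan concrete, here is a minimal sketch in Python (my illustration, not anything from the AI literature): the very same bytes yield different symbols depending on which encoding standard we read them under, so the symbols are not sitting in the physical signal itself.

```python
# A minimal sketch of "no information without standards":
# the same physical bytes, read under two different standards.
data = bytes([0xC3, 0xA9])

print(data.decode("utf-8"))    # 'é'  under the UTF-8 standard
print(data.decode("latin-1"))  # 'Ã©' under the Latin-1 standard
# The bytes do not carry a symbol by themselves;
# the standard we agree to apply is what makes them symbols.
```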
The problem
Imagine a young child who has learned the word “doggy”. As he walks away, a gust of wind rustles his hair. Oh, another doggy? If the child has no standard as to what constitutes a doggy, then he cannot tell that the gust of wind isn’t a doggy. In order to make sense of the world, we need to make distinctions, and we apply some kind of criteria when making those distinctions. The criteria we apply are, in effect, our standards for perceiving the world.
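To see what a standard buys the child, here is a toy sketch in Python (the features and the particular standard are invented for illustration): classification only gets off the ground once an explicit criterion is in place.

```python
# A toy sketch of the "doggy" problem. A standard is an explicit
# criterion; without one, nothing can be ruled in or out.
# (The features listed here are invented for illustration.)

def is_doggy(stimulus: dict, standard: list) -> bool:
    """Apply a standard: every required feature must be present."""
    return all(stimulus.get(feature) for feature in standard)

doggy_standard = ["furry", "four_legs", "barks"]

dog = {"furry": True, "four_legs": True, "barks": True}
gust_of_wind = {"rustles_hair": True}

print(is_doggy(dog, doggy_standard))           # True
print(is_doggy(gust_of_wind, doggy_standard))  # False: the standard rules it out
```

Without the explicit list of criteria, the function has no way to say that the gust of wind is not a doggy; the distinction lives entirely in the standard.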