In an earlier post, I hinted that I would discuss the two essays by Massimo Pigliucci on naturalized metaphysics. So that will be the goal of this post. For convenience, I shall refer to those two essays as NM1 and NM2.
- NM1: Surprise! Naturalistic metaphysics undermines naive determinism, part I
- NM2: Surprise! Naturalistic metaphysics undermines naive determinism, part II
It is not my aim here to argue that Pigliucci is wrong. Rather, the aim is to present how I look at the questions he is discussing. Partly, this is because I have rather non-typical views, and am sometimes asked to explain them. Partly, it is because I have indicated my dislike for metaphysics, and some have suggested that we cannot actually do without metaphysics. So perhaps the discussion here will help my readers better understand my viewpoint.
Realism and anti-realism
The first issue raised in NM1 that I want to discuss is that of realism vs. anti-realism.
To put it very briefly, a realist is someone who thinks that scientific theories aim at describing the world as it is (of course, within the limits of human epistemic access to reality), while an anti-realist is someone who takes scientific theories to aim at empirical adequacy, not truth.
With that as the explanation, I fail to see any important difference between realism and anti-realism. They seem to be two different names for the same thing.
Jason Rosenhouse makes a similar point in his post about NM1. So perhaps this is an example of where the way philosophers think about science is different from the way that scientists think about science.
Perhaps the important difference here is that I see scientific theories as neither true nor false. And that is because I do not see the theories as descriptions of the world. Rather, I see a theory as a kind of philosophical viewpoint that guides research. For example, I have always taken Newton's f = ma as a definition, rather than as a description. It defines how we shall go about determining (or measuring) force. When we use the theory to make measurements or other observations, those observations should be seen as descriptions. But we do not need the theory itself to be a description in order for the observations made under that theory to be descriptions.
A little later, we read:
The best argument in favor of scientific realism is known as the “no miracles” argument, according to which it would be nothing short of miraculous if scientific theories did not track the world as it actually is, however imperfectly, and still managed to return such impressive payoffs, like, you know, the ability to actually send a space probe to Mars.
Here, I wonder what is the intended meaning of “as it actually is”. On its face, “as it actually is” would appear to be referring to some description or specification. However, a specification requires a specification language, and a description requires a description language. I keep wondering what those languages could be. As best I can tell, there is no canonical specification language that we can discover. The language of science is the very best language that we have available to us for making specifications and descriptions. So how we describe the world with science should be as close as we can get to describing the world “as it actually is.”
Underdetermination of theories
Perhaps the two best arguments in favor of anti-realism are the underdetermination of theory by the data and the pessimistic meta-induction.
That’s a really puzzling thing to say. Very often, the data is theory-laden. And that means that the data does not even exist before there is a theory. Pigliucci even seems to acknowledge that theory-ladenness in a comment to NM1:
Yes, we do learn things from observation (and experiment), but observations themselves are theory-dependent, not just vice versa, which complicates the picture a bit…
Pigliucci goes on to explain underdetermination with:
Perhaps the best way to picture this is to plot some points on a standard X-Y axis and then fit a curve to them. If you think of the points as data and of the curve as the theory explaining them, you will immediately realize that there is literally an infinite number of curves that can equally well fit the data: the points under-determine the curve.
Very few scientific theories arise that way. One could perhaps say that this is a reasonable account of how we got Kepler’s laws. But you cannot explain Newton or Einstein that way.
I’ll grant that fitting curves to data is a common practice. But it is usually seen as a way of making empirical predictions, rather than as the basis for theory formation. I also grant that the resulting curve is underdetermined by the data in such cases. However, the quality of predictions made will usually be about the same, even with different choices of fitted curve. So I don’t see this as any basis for the realist/anti-realist distinction.
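The point about predictions can be illustrated with a small numerical sketch (the data points here are made up for illustration): two quite different polynomials fitted to the same five points yield nearly the same predictions within the range of the data, even though infinitely many curves fit those points.

```python
import numpy as np

# Hypothetical data: five (x, y) observations lying near a straight line.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.9, 4.1, 5.9, 8.1])

# Two different fitted "theories": a straight line (least squares)
# and a quartic, which passes exactly through all five points.
line = np.polyfit(x, y, 1)
quartic = np.polyfit(x, y, 4)

# Within the range of the data, the two curves predict almost the
# same value, despite being different polynomials.
x_new = 2.5
p_line = np.polyval(line, x_new)
p_quartic = np.polyval(quartic, x_new)
print(p_line, p_quartic)  # both close to 5.0
```

The curves diverge if we extrapolate far beyond the data, but for interpolation, as the sketch shows, the choice among empirically adequate curves makes little practical difference.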
The pessimistic meta-induction
The pessimistic meta-induction is the second of the “best” arguments mentioned above. Ptolemy’s astronomy was displaced by that of Copernicus. Newton’s mechanics was displaced by that of Einstein. We see, in many areas of science, an older theory being displaced by a newer one. The meta-induction is supposed to be of the form:
- All older scientific theories have turned out to be wrong.
- Therefore our current scientific theories will also turn out to be wrong.
Isaac Asimov argued against this view, as quoted in one of the comments:
Isaac Asimov had something to say about the pessimistic meta-induction: “When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.”
Pigliucci did not like that, and suggested that Asimov was not a good philosopher of science. I’ll side with Asimov on this one. This partly goes back to the view mentioned earlier, that I do not see a scientific theory as either true or false. Rather, I see a theory as a guide to empirical work. We value a theory for its usefulness as a guide. Although, in principle, Einstein’s theory has replaced Newton’s, we still find Newtonian mechanics very useful and a lot easier to use for solving many problems. We prefer Einstein for use with particle accelerators and for use with strong gravitational fields in cosmology. But for most earthbound uses, Newton’s theory is entirely adequate. Most scientists don’t go around saying that Newton’s theory is false.
Why would underdetermination matter?
Let’s assume underdetermination, for the sake of discussing the consequences, and ask why that would even matter.
A French tourist visits New York City, and writes a description of NYC in French. A German visitor similarly writes a description in German. These two descriptions will be vastly different, at least as syntactic expressions. Yet we would probably say that both are true. We don’t argue that at least one of those descriptions must be unreal, since they disagree. Rather, we accept that both can be real descriptions, but in different languages.
Perhaps it seems bizarre to analogize a scientific theory with a natural language. I may do a future post on why it is not bizarre at all. In any case, both a theory and a language bring a collection of concepts and a way of presenting descriptions in terms of those concepts. Just as we accept that a French description could be true, even though it is different from a German description, we should also accept that a Newtonian description could be true even though it is different from a relativistic description.
Moving on to NM2, Pigliucci argues for structural realism, specifically for strong ontic structural realism:
To put it in other words, realists are correct in the broad picture, but they are wrong about what carries from one successful theory to the other: new theories do not (necessarily) retain older theories’ description of unobservables (like ether), but rather their mathematical or “structural” content.
Apparently this depends on assuming mathematical platonism as the metaphysical or ontological basis for mathematics. To me, this is quite puzzling. As best I can tell, as a mathematician, platonism does nothing for me. When asked about my philosophy of mathematics, I usually say that I am a fictionalist. That is to say, I consider mathematical entities (such as numbers) to be useful fictions. In truth, I find it hard to distinguish between fictionalism and platonism.
I remember, some years ago, that I was puzzled about why philosophers concern themselves with ontology of mathematical objects. So I asked a philosopher, and he gave me a reasonably clear explanation. But that left me thinking that philosophers must have really weird ways of looking at mathematics, if the ontology of mathematical objects makes any difference at all.
I’ll stop at this point. My aim, as I indicated at the beginning, was to describe my own views on what I see to be the major issues raised in NM1 and NM2. I think I have done that.