Generalization in science

by Neil Rickert

According to most treatments of philosophy of science, or at least most of those that I have looked at, science advances by means of inductive generalizations. Inductive generalizations are often assumed to be the basis for scientific laws (such as laws of physics).

To me, that seems wrong.  I do not see the evidence that science is using induction.

I can agree that there are generalizations in science.  But it does not seem to me that they are inductive generalizations.

Induction

First an example of induction, to illustrate what is meant by the term.

All the many crows that I have seen are black.  Therefore all crows are black.

This example is similar to typical examples from philosophy of science.  Doubtless, it is intentionally oversimplified and exaggerated, so as to more readily communicate the idea.  So we should not assume that this is intended as a literal example of how science is said to work.

I should mention that there is a method called “Baconian induction”, which is different from philosophical induction, and perhaps closer to what science actually uses.  But Baconian induction is rarely mentioned by philosophers of science.

Induction has long been subject to skeptical criticism.  The main skeptical objection is that induction is not logically valid.  However, others have suggested that induction is still rational, even if not logically valid.

It seems to me that it would require miracles for induction to be the basis of scientific laws.  Put crudely, induction would require that we make many observations (say, thousands).  And then we compute an average (perhaps an average trend line).  We then throw away all of the observations, and keep only that trend line, which we call a law.  And, somehow, this is supposed to be far more useful than the original observations.  It doesn’t make sense.

Predictions

Here’s an alternative that seems more plausible than induction.

All the many crows that I have seen are black.  Therefore, I predict that the next crow that I see will be black.

This seems more plausible for several reasons.  For one thing, it does not make a general claim (“all crows are black”).  Instead, it limits itself to a specific claim (“the next crow that I see will be black”).  Moreover, we don’t expect predictions to be perfect.  If our predictions are better than pure guesses, then they already have some use to us.  So a prediction is weaker than a claim of truth.  And it doesn’t surprise us if we get some of our predictions wrong.

Perhaps if we worded induction in terms of predictions, that would make more sense.  I’ll look at examples of that later in this post.

Newton’s laws

How did we get to Newton’s laws of motion?  Some would have us believe that they result from induction.  I can agree that they are some kind of generalization.  But, to me, they look more like the kind of generalization that is used in mathematics, as discussed in my previous post.  And if that’s the kind of generalization that was used, then it isn’t at all like induction.

At the time of Newton, the concept of force would mainly have been associated with the force required to raise heavy objects, or the force from using levers.  Newton’s f=ma was pretty much a description of how weighing machines work, with the acceleration a being the acceleration due to gravity and the force f being the lifting force needed to oppose that acceleration.  What Newton’s laws achieved was to generalize the concept of force, so that we could then talk of, for example, the force of friction and the force of air resistance.  And not only could we talk of them, but f=ma gave us a way to measure those forces, as long as we could measure the acceleration.  It seems to me that Newton’s laws were effective, mainly because they increased what we could measure and thus increased the amount of data available to use in making predictions.
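
As a rough illustration of that last point (a sketch only, with invented numbers, not anything from Newton): once f=ma is accepted, timing how quickly a sliding block slows down is enough to measure the friction force acting on it.

# Hypothetical measurement: a 2 kg block slides to rest on a level floor,
# slowing from 3 m/s to 0 m/s over 1.5 seconds (all numbers invented).
mass = 2.0            # kg
delta_v = 0.0 - 3.0   # m/s
delta_t = 1.5         # s

acceleration = delta_v / delta_t      # -2.0 m/s^2, obtained from timing alone
friction_force = mass * acceleration  # f = m*a gives -4.0 N, opposing the motion

print(f"measured acceleration: {acceleration} m/s^2")
print(f"inferred friction force: {friction_force} N")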

Conductivity of copper

Here’s an example that looks closer to the traditional description of induction.  Scientists measure the electrical conductivity of a few samples of copper.  And then they publish the result in a reference table that can be widely consulted.  So it looks as if they took a few measurements, and generalized from that.

That seems fair enough.  And you can call that “induction” if you like.  But why does it work?  The reason is that copper is known to be a very homogeneous material, so that the conductivity should be the same for all samples.
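
A minimal sketch of that kind of generalization (the numbers below are invented, though roughly in the range of real copper at room temperature): take a few measurements, check that the spread is small, and publish the mean as the reference value.

import statistics

# Hypothetical conductivity measurements, in MS/m, from a few copper samples.
samples = [58.2, 58.6, 58.4, 58.5, 58.3]

mean = statistics.mean(samples)
spread = statistics.stdev(samples)

# Because purified copper is highly homogeneous, the spread is tiny, so
# reporting the mean as "the" conductivity of copper is a safe convention.
print(f"reference value: {mean:.1f} MS/m (sample std dev {spread:.2f} MS/m)")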

The real work here isn’t in the induction.  Rather, it is in the purification processes that give us a very homogeneous copper.  And part of that is in the way that the scientists carve up the world into different objects and different materials.  The highly systematic way in which scientists carve up and organize the world is a very important part of why science works as well as it does.

For that matter, the way we organize the world has a lot to do with other proposed inductions.  Suppose somebody argued:

All the many birds that I have seen are black.  Therefore all birds are black.

Such an argument would be seen as absurd.  But when specified for crows, it seems more plausible.  And, again, the difference is that crows are a fairly homogeneous group.  Note, though, that a similar argument about parrots would seem absurd, because the varied coloring is a characteristic of parrots.

Kepler’s laws

Kepler did use data on observations of planets.  And he apparently plotted those and looked for a trend line.  You do not get an exact ellipse by joining plotted points in a graph.  It is likely that Kepler picked the ellipse for its mathematical simplicity and because its shape seemed roughly similar to the planetary plots.  And then he fitted ellipses to the plotted data.  As I have posted previously, Kepler’s laws are false.  And it must have been obvious to Kepler that they are false.  That is to say, it must have been obvious that his plotted points did not exactly fit on the ellipses.  To me, it seems likely that Kepler was mainly concerned about the ability to predict.  If he could come up with curves that fitted well, if imperfectly, and that were mathematically tractable, then he could use those curves to make reasonably accurate predictions of planetary motion.
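
As a sketch of the kind of curve-fitting involved (this is not Kepler’s actual procedure, and the data below are invented): an ellipse with the Sun at a focus can be written r = p / (1 + e*cos(theta - w)), so 1/r = A + B*cos(theta) + C*sin(theta) is linear in A, B and C, and can be fitted to noisy observations by ordinary least squares.

import numpy as np

rng = np.random.default_rng(0)

# Invented "observations": angles theta (radians) and distances r (AU) for an
# orbit with semi-latus rectum p = 1.0 and eccentricity e = 0.2, plus noise.
theta = np.linspace(0.0, 2.0 * np.pi, 40)
r_true = 1.0 / (1.0 + 0.2 * np.cos(theta))
r_obs = r_true + rng.normal(scale=0.01, size=theta.size)

# Fit 1/r = A + B*cos(theta) + C*sin(theta) by linear least squares.
X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
A, B, C = np.linalg.lstsq(X, 1.0 / r_obs, rcond=None)[0]

p = 1.0 / A             # semi-latus rectum of the fitted ellipse
e = np.hypot(B, C) / A  # eccentricity of the fitted ellipse
print(f"fitted p ~ {p:.3f}, fitted e ~ {e:.3f}")  # close to 1.0 and 0.2

The observed points do not lie exactly on the fitted ellipse, but the fitted curve is mathematically tractable and good enough for prediction, which is the point being made above.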

As it happened, Newton was able to derive Kepler’s laws in his mathematical solution for the two-body problem.  So Kepler’s choice of the ellipse was particularly fortuitous.

Boyle’s law, Ohm’s law

It seems almost certain that Boyle knew his law was false, though a very useful approximation.  And I am inclined to think that Ohm probably knew that his law was false, though a good approximation.  In both cases, we have laws that are useful for making good approximations, though not strictly true.  At least, as typically described by philosophers, induction is supposed to yield truth.  I suggest it would be better to describe these laws as pragmatic conventions which have proved of value in making predictions.
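
To make the prediction role concrete, here is a minimal sketch of Boyle’s law used purely as a prediction rule (invented numbers; the law only holds approximately for any real gas): for a fixed amount of gas at constant temperature, P1*V1 = P2*V2, so one measured state lets us predict the volume at a new pressure.

# Boyle's law treated as a pragmatic prediction rule: P1*V1 = P2*V2
# for a fixed amount of gas at constant temperature (numbers invented).
p1, v1 = 100.0, 2.0   # kPa, litres: a measured state
p2 = 250.0            # kPa: the new pressure we care about

v2_predicted = p1 * v1 / p2
print(f"predicted volume at {p2} kPa: {v2_predicted:.2f} L")  # 0.80 L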

Summary

Scientific theories and scientific laws are varied.  They cannot all be described in the same way.  It seems to me that induction is never a good description of what science does.  I’ve given some examples of laws that I would describe as pragmatic conventions that have been shown to make useful predictions.  And some laws and theories appear to be more like the kind of generalization that we see in mathematics.  I’ve mentioned Newton’s laws as an example of this.  Einstein’s general theory of relativity is another example.

23 Comments to “Generalization in science”

  1. I would say that Bayesian reasoning is the basis for science which uses a combination of induction and prediction in comparing two (or more) hypotheses. Basically, Bayesian reasoning is implicit in any valid reasoning, so whenever scientific reasoning is logically valid, it can be represented in Bayesian terms. Then we can put our confidence levels for any particular background or evidentiary claim in explicit statistical/probabilistic form. I would say this is the best way to describe the basis of scientific laws and so forth, because it maintains some of the inductive character that people refer to when they are forming generalizations through accumulated data, but it does so based on its conjunction with background information as well as specific evidence for (or against) the claim in question. Good stuff!

    • I would say that Bayesian reasoning is the basis for science

      This is almost certainly wrong.

      Basically, Bayesian reasoning is implicit in any valid reasoning,

      So much the worse for valid reasoning.

      I expect that most reasoning is neither valid nor invalid.

      Inductive reasoning is usually considered to be logically invalid. Yet you seem to be saying that it is implicit in valid reasoning.

      • “This is almost certainly wrong.”

        I’d be curious to hear your argument to support this claim. Bayesian reasoning is used implicitly in all sound reasoning (it is simply a mathematical model of such) and therefore it’s employed in any logically valid reasoning that provides successful predictions. Successful/unsuccessful predictions and hypothesis reformulation are the backbone of science and therefore Bayesian reasoning is a basis for it.

        “I expect that most reasoning is neither valid nor invalid.”

        Now THAT’S almost certainly wrong. Reasoning is either logically valid or invalid. Unless you are not referring to logical validity, but some other kind of validity, in which case I’d have to reconsider given that other type of validity.

        “Inductive reasoning is usually considered to be logically invalid. Yet you seem to be saying that it is implicit in valid reasoning.”

        All that means though is that you can’t arrive at a logically valid conclusion from inductive reasoning alone, however any deductive argument requires premises which are only arrived at through induction (unless the premises are true by definition which is often not the case), and deductive arguments ARE considered to be logically valid. Ergo, induction is needed as well as deduction in order to arrive at a deductive argument and the conclusion that follows (at least, those that don’t have tautological premises). This is one reason why I said that induction was implicit in valid reasoning, because it is implicit in (most) deductive premises, with deductive conclusions being arrived at through logically valid reasoning. The other reason why I mentioned that induction was implicit in valid reasoning is because Bayesian reasoning requires prior probabilities for background information and consequent probabilities for specific evidence for a hypothesis/claim (h) and for its negation (-h), and these probabilities are going to be arrived at through frequency statistics (e.g. “how often is this hypothesis true in other cases?”, “how likely is it that the evidence would look in any way different if ‘h’ is true?”, etc.). This latter task involves calculating the probability of a claim being true and induction seems to do the same thing (arriving at a generalization claim based on probability).

        As an example: All men are mortal. Socrates is a man. Therefore, Socrates is mortal.

        This is a deductive argument but the premises are only known through induction. Unless we are defining “men” as necessarily being “mortal” and making this premise a tautology, we are determining its soundness through experience, having never seen a man that is IMmortal before. In this case we would use Laplace’s Law of Succession ((s+1)/(n+2)), where s is the number of confirmatory observations and n the total number of observations, to determine the actual prior probability of the next man we encounter being a mortal as well. Likewise for premise two, where we can only know what a “man” is by having observed many instances of men and noticing that Socrates’ attributes do in fact fit the bill. Bayesian reasoning is basically required for sound probabilistic reasoning, and since probabilistic reasoning is used in induction (and therefore in all non-tautologically premised deductions), then Bayesian reasoning is needed for all sound deductions as well, and only those that are sound are likely to provide predictive benefits from their conclusions. Let me know if you want to delve into the details a bit more or if you are more interested in a different aspect of this topic.
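
        A minimal sketch of the rule cited above, written out with explicit parentheses and hypothetical numbers (the function is purely illustrative):

        def rule_of_succession(s: int, n: int) -> float:
            """Laplace's Rule of Succession: estimated probability that the
            next observation is confirming, after s confirming out of n."""
            return (s + 1) / (n + 2)

        # Hypothetical example: 100 men observed, all of them mortal.
        print(rule_of_succession(100, 100))  # roughly 0.990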

      • I’d be curious to hear your argument to support this claim.

        You have not presented any argument in support of your claim. You have only made bare and unsupported assertions.

        Bayesian reasoning is used implicitly in all sound reasoning (it is simply a mathematical model of such) and therefore it’s employed in any logically valid reasoning that provides successful predictions.

        So you say. But you provided no support for that assertion. As best I can tell, it is either a trivial truism of no practical importance, or it is false — depending on what you mean by “used implicitly”.

        I’ll add that there is a huge difference between “Bayesian reasoning is a mathematical model of sound reasoning” and “Bayesian reasoning is used implicitly in all sound reasoning.”

        Reasoning is either logically valid or invalid.

        Most reasoning is not logical reasoning.

        “Shall I reply to this blog post now, or shall I wait until after dinner” isn’t the kind of question that involves logic. Yet people do reason about such questions.

        … however any deductive argument requires premises which are only arrived at through induction

        That’s absurd.

        • “You have not presented any argument in support of your claim. You have only made bare and unsupported assertions.”

          I kind of thought that this would be your response to “support your claim”. Try again.

          “So you say. But you provided no support for that assertion.”

          Here are a couple of links for support, to get you started:
          https://plato.stanford.edu/entries/bayes-theorem/
          https://www.richardcarrier.info/archives/12742

          “I’ll add that there is a huge difference between “Bayesian reasoning is a mathematical model of sound reasoning” and “Bayesian reasoning is used implicitly in all sound reasoning.” ”

          What I mean by “used implicitly” is that any sound reasoning can be shown to be so by being successfully represented in Bayesian form, which means that people are using Bayesian reasoning whenever they are employing sound reasoning, even if they are not using the formulae explicitly. It’s all a matter of priors, consequents, and likelihood ratios — that’s all there is to it.

          “Most reasoning is not logical reasoning.”

          Well most of it involves some amount of logic (including an implicit use of the logical absolutes), even if the reasoning is most certainly not EXCLUSIVELY logical. A lot of it is as you say not logical. However, a lot of it is logical abduction or “inference to the best explanation” (I think this is quite common), along with induction and Bayesian reasoning which provides the logical/mathematical structure to the assigned probabilities of claims one is reasoning about. When people are knee-deep in cognitive biases, they are not employing Bayesian reasoning but when they are reasoning soundly, the epistemic probabilities will conform to a Bayesian form. A good heuristic for determining the presence of logical reasoning is one’s ability to use their conclusions to derive successful predictions. If this happens often, it is likely that logical reasoning is being employed.

          ” ‘Shall I reply to this blog post now, or shall I wait until after dinner’ isn’t the kind of question that involves logic. Yet people do reason about such questions.”

          I would argue that decision-making (as per your example) still involves logic, if only implicitly. To see this, think about what information is being used in the reasoning process to make this decision. In this example, one has to determine why they ought to choose one over the other and that is going to depend on hypothetical imperatives (“If, then” statements) which will take on a logical form, e.g., “If I want to avoid being hungry before replying to this blog post, then I ought to wait until after dinner to reply to it,” along with a number of other supporting assumptions that break down to a logical structure. For example, “Eating causes one to no longer be hungry. Having dinner is a form of eating. Therefore, having dinner will cause me to no longer be hungry.” Or, “Hunger diminishes cognitive capacity for replying to blog posts”, “I want my replies to not be limited by diminished cognitive capacity”, “therefore, I ought to eat before replying to blog posts.”

          I believe that you are simply taking for granted all the logical operations that are occurring in most reasoning because most of the time it is happening rather automatically with “logic” not explicit in one’s consciousness during said reasoning. But this doesn’t negate that logical reasoning is being employed a lot of the time; it only means that one may not be employing it explicitly. On the other hand, if one is trying to make an argument or proof in a transparent form with more rigor, then one will explicitly use logical reasoning and perhaps write it down for others to see. The latter is explicit logical reasoning and the more common forms of reasoning involve (though not exclusively/exhaustively) an implicit use of logical reasoning.

          • https://plato.stanford.edu/entries/bayes-theorem/

            Bayes’ theorem is an important and useful theorem in probability theory. Bayesian philosophy is bullshit.

            From that SEP article: “Subjectivists think of learning as a process of belief revision in which a “prior” subjective probability P is replaced by a “posterior” probability Q that incorporates newly acquired information.”

            It ought to be obvious that learning is not belief revision. A newborn child starts with no beliefs, or very few beliefs. What beliefs is that child revising, as he learns stuff?

            https://www.richardcarrier.info/archives/12742

            Given that I’m a mathematician, I think I know more about Bayes’ theorem than Richard Carrier. People who work in the same area as Carrier are very critical of his work. His own use of Bayes’ theorem does not seem to have made for a persuasive argument.

  2. As for the claim you responded to, “… however any deductive argument requires premises which are only arrived at through induction”, you said “That’s absurd.”

    Can you explain, very specifically, why you think that’s absurd?

    • Can you explain, very specifically, why you think that’s absurd?

      Many premises can be derived from simple observation, without needing anything that looks at all like induction.

      In any case, consider your own example “All men are mortal.”

      That sort of argument might be introduced to a teenager. So let’s look at the possible induction by a teenager:

      All the many men that I have known are still alive.
      Therefore (by induction), all men are immortal.

      A teenager does not get “all men are mortal” by induction from his own experience. He probably learns it as part of the conventional wisdom that he picks up from his culture.

      • “Many premises can be derived from simple observation, without needing anything that looks at all like induction.”

        Such as..?

        “All the many men that I have known are still alive. Therefore (by induction), all men are immortal.”

        This is incorrect. The concept of immortality doesn’t follow from being “still alive”. It is defined as “being unable to die”. Therefore to use the same example for the teenager, “All the many men that I have known are able to die (as best as I can tell from experience of what “can kill a person”). Therefore (by induction), all men are mortal.”

        “A teenager does not get “all men are mortal” by induction from his own experience. He probably learns it as part of the conventional wisdom that he picks up from his culture.”

        I disagree with this, but only in part. It is true that we learn many facts by simply being told that they are true, but in order for them to be plausible to us, that plausibility and trustworthiness of educational authority comes from induction. For example, if he’s not learning that “all men are mortal” directly from induction, then he’s likely confirming the plausibility of “all men are mortal” from it, by noting that none of the men he’s met so far seem to be “unable to die”, which further supports the claim that he has been taught. Likewise, he’s learning to trust conventional wisdom and teachings based on their enhancing his ability to successfully achieve goals. For example, “all the things I’ve tried to verify from my teacher’s claims turn out to be true, therefore (by induction), all the teachings that will come from my teacher(s) are true.”

        This goes for any culturally transmitted information. In order to believe it, I have to induce that whatever I’m being taught is in fact true for some reason or other. In reality, we don’t do this blindly or without any criteria whatsoever but rather we tend to apply the “extraordinary claims require extraordinary evidence” and/or “extraordinary claims require extraordinary expertise/authority to back them up” and other less stringent criteria for more mundane claims. If I consider my teacher to be a good authority based on their being correct a lot whenever their claims are fact-checked, then I can begin to use induction in thinking that “any claim from my teacher is true” (as a generalization). Then when they tell me that “all men are mortal”, I’m still using induction to believe its veracity by extension of the induction I used to trust the source of the information in the first place. I think you overlooked (or took for granted) the “induction by extension” that takes place here, and that may explain your disagreement.

        • This is incorrect. The concept of immortality doesn’t follow from being “still alive”.

          That doesn’t matter. The important point is that “all men are mortal” does not come from induction. It comes from the culture.

          It is true that we learn many facts by simply being told that they are true, but in order for them to be plausible to us, that plausibility and trustworthiness of educational authority comes from induction.

          That’s surely wrong. You are wanting to use induction for justification of belief. But I’ve never seen that proposed. Induction is supposed to be the source of new beliefs. Maybe the word you are looking for is “corroboration” rather than “induction.”

  3. I did not mean to imply that “induction by extension” is a technical term currently used in philosophy. I’m using it because it’s useful to describe how induction is used even in your example, where it is used “by extension” (indirectly), with the other reasoning that is needed to arrive at the belief that “all men are mortal”. You seem to be implying that no reasoning is used to adopt beliefs that are transmitted culturally. That is mistaken. Culturally transmitted knowledge or beliefs are generally only accepted by the person receiving such information after reasoning about either the claim itself (direct induction, abduction, etc.), or the source of the claim (indirect induction or what I called “induction by extension”). Is my position more clarified now?

    • I’m using it because it’s useful to describe how induction is used even in your example, where it is used “by extension” (indirectly), with the other reasoning that is needed to arrive at the belief that “all men are mortal”.

      But it isn’t really induction. It would be better to call it “corroboration”.

      You seem to be implying that no reasoning is used to adopt beliefs that are transmitted culturally.

      No, I haven’t suggested that at all. However, whatever reasoning is used, it isn’t logic and it isn’t induction. I would call it “pragmatic judgment”.

  4. ” You are wanting to use induction for justification of belief. But I’ve never seen that proposed. Induction is supposed to be the source of new beliefs.”

    Sort of. If the source of the new belief is based on induction pertaining to the source itself (i.e. to resolve the question: Why should I believe what is being culturally transmitted to me as factual?), then one can ALSO use induction about the claim itself to further corroborate the likely truth status of the claim. Then one is simply using induction twice, though one need not do so. If they don’t perform direct induction to arrive at “all men are mortal”, then I’m arguing that they are using (or have already used, long before) indirect induction (induction by extension, as I’ve phrased it) by forming a belief about the accuracy of the source of such claims. Reasoning must be going on in one way or another and I think that “induction by extension” is a common means of doing so, and it allows one to avoid having to use induction directly to arrive at all sorts of beliefs about the world that a person eventually holds. Instead we can just trust the source (encyclopedia, teachers, scientists, experts, other sources in our culture especially for more mundane claims, etc.) because we’ve become confident in its veracity based on induction [e.g. “This source has been true time and time again, therefore (I can assume) that this source is always correct.”]. The “I can assume” part of the previous statement is often maintained for pragmatic purposes, so even if I know that I can’t KNOW that the source is always correct, I can assume so. This is why I think it’s most accurate to describe our reasoning in Bayesian terms because then the actual probabilistic nature of the epistemic status of a claim is brought to light.

    • Why should I believe what is being culturally transmitted to me as factual?), then one can ALSO use induction about the claim itself to further corroborate the likely truth status of the claim.

      But it isn’t really induction. You are not making a generalization. The generalization has already been made, and you are only deciding to what degree you will accept it.

      This is why I think it’s most accurate to describe our reasoning in Bayesian terms because then the actual probabilistic nature of the epistemic status of a claim is brought to light.

      Bayes’ rule can be a basis for changing your degree of belief in an already existing proposition. But science comes up with new propositions that have never before been expressed. Bayes’ rule won’t help with that.

  5. “But it isn’t really induction. You are not making a generalization. ”

    Ah, so you don’t think that the process of observing a number of factual claims coming from a source and then reasoning that therefore all of their claims are factual (and should be trusted) is a generalization, a form of inductive reasoning? I think so, but if you don’t then you and I will disagree on what a generalization is.

    “Bayes’ rule can be a basis for changing your degree of belief in an already existing proposition. But science comes up with new propositions that have never before been expressed. Bayes’ rule won’t help with that.”

    I disagree. While Bayes’ Theorem can be used to update your belief about an already existing proposition (given new evidence), new propositions can also be analyzed in the same way by calculating their prior likelihood based on background information and the specific evidence on offer to support the new claim. In order for new propositions in science to enter one’s set of beliefs as true (or likely true), sound reasoning entails that one has to weigh the priors and the consequent probabilities just as with any other proposition. These probabilities will ultimately be based on observational data, predictions, etc. to support the new scientific claim and their compatibility/conjunction with already well-established facts in physics, chemistry, biology, etc. It could be that based on background information, one’s priors for a new scientific claim/proposition are 50/50 (H/-H), in which case weighing the evidence specific to the claim is all that determines the final posterior result, but it’s still Bayesian reasoning that’s determining the likelihood ratio.

    This isn’t to say that there won’t also be some difficulty in expressing the new scientific proposition clearly and effectively, especially if it involves new concepts that are hard to map onto existing concepts (though most scientific claims don’t fall into this paradigm-shift/incommensurable category), but it has to be done eventually in order to evaluate its truth status (in particular, to evaluate its likelihood ratio, or probability of being true). Now I will say that while Bayesian inference is used to update our beliefs pertaining to old or new claims, priors are often overlooked in a lot of scientific inquiry (or they’re simply set to 50/50, for and against), where new evidence is the key element guiding the likelihood determination.
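
    A minimal numerical sketch of the weighing of priors and consequent probabilities described above (the numbers are invented, purely for illustration):

    def bayes_posterior(prior_h, p_e_given_h, p_e_given_not_h):
        """Posterior probability of H given evidence E, by Bayes' theorem."""
        numerator = prior_h * p_e_given_h
        denominator = numerator + (1.0 - prior_h) * p_e_given_not_h
        return numerator / denominator

    # Invented example: a 50/50 prior, with the evidence four times more
    # likely if H is true than if it is false.
    print(bayes_posterior(0.5, 0.8, 0.2))  # 0.8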

    • Ah, so you don’t think that the process of observing a number of factual claims coming from a source and then reasoning that therefore all of their claims are factual (and should be trusted) is a generalization, a form of inductive reasoning?

      That seems more like an act of extreme folly.

      While Bayes’ Theorem can be used to update your belief about an already existing proposition (given new evidence), new propositions can also be analyzed in the same way by calculating their prior likelihood based on background information and the specific evidence on offer to support the new claim.

      Then you are not really using Bayes’ theorem, except as a feeble excuse for conclusions arrived at in some other way.

      • “That seems more like an act of extreme folly.”

        You may be arguing that making that generalization is an act of extreme folly, but that doesn’t negate its being an act of induction.

        “Then you are not really using Bayes’ theorem, except as a feeble excuse for conclusions arrived at in some other way.”

        Actually one is using it in that case. This is because priors and consequent probabilities are applied to arrive at a posterior probability in a Bayesian form. That’s all that is needed to “use Bayes’ Theorem”.

  6. “Bayes’ theorem is an important and useful theorem in probability theory. Bayesian philosophy is bullshit.”

    I’m not sure what you mean by this claim. Can you explain and provide supporting evidence for your claim here?

    “It ought to be obvious that learning is not belief revision. A newborn child starts with no beliefs, or very few beliefs. What beliefs is that child revising, as he learns stuff?”

    Of course it is. You start out with evolutionarily ingrained priors (otherwise randomly arrived at priors if there are no innate priors) about incoming sensory information in order to build some kind of cognitive model of the world. As more data come in, prediction error revises beliefs/models about said world as much as is possible given neurological constraints and selection mechanisms. Then over time, this model enhancement/change is ultimately belief enhancement/change. It no doubt starts out with very fundamental beliefs (few of them as you say, which I agree with you on), with these beliefs concerning things like object permanence, space, time, causality, etc., and then more complex beliefs begin to build as prediction error is further reduced. Eventually when a large foundation of beliefs has been built up, one can begin to refine their beliefs in very specific ways based on new evidence, whereas before this foundation is built, fundamental beliefs themselves are changing/forming.

    “Given that I’m a mathematician, I think I know more about Bayes’ theorem than Richard Carrier. People who work in the same area as Carrier are very critical of his work. His own use of Bayes’ theorem does not seem to have made for a persuasive argument.”

    Prove it. Rather than ad hominem the man, find an actual flaw in his reasoning/arguments and show me what’s wrong with it. His book “Proving History”, which contains his “Bayesian core” (so to speak), was peer-reviewed by mathematicians among others, so your critique is not very persuasive and your credentials are irrelevant given that peer-review status (and since credentials can never replace an actual rebuttal to an argument). If you can rebut an argument he’s made, I’ll gladly hear you out, but to use “Hitchens’s razor”, “what can be asserted without evidence can be dismissed without evidence.” Credentials are not evidence for or against the validity or soundness of an argument.

    • You start out with evolutionarily ingrained priors (otherwise randomly arrived at priors if there are no innate priors) about incoming sensory information in order to build some kind of cognitive model of the world. As more data come in, prediction error revises beliefs/models about said world as much as is possible given neurological constraints and selection mechanisms.

      A nice piece of creative fiction.

      I guess you are unable to see why it is absurd. So we should probably just agree to disagree.

  7. You are confusing algorithms with reality. Reality is not an algorithm, but algorithms work reasonably well to describe reality as long as you limit the parameters of the description. You must apply an algorithm across the appropriate scale for it to be consistent. Newtonian gravity works just fine on one scale but is totally inadequate at much smaller and much larger contexts.
