On natural abiogenesis – give that man a PhD

by Neil Rickert

We all know what BS stands for.  MS means “more of the same”, and PhD means “piled higher and deeper.”  (A joke that used to circulate around college campuses.)

A post at Uncommon Descent, titled “On the Impossibility of Abiogenesis” purports to prove that natural abiogenesis is impossible.  I shall detail why I see it as piled higher and deeper with nonsense.  The post is by niwrad, and I shall be quoting parts of that post and then commenting on them.

Modern science takes for granted that the naturalistic origin of life, called “abiogenesis” or “chemical evolution” or “pre-biotic evolution” is extremely improbable but not impossible. “Life” here means a single self-reproducing and self-sustaining biological cell.

That’s the very beginning of the post.  And it is already wrong.  No, science does not take that for granted.  Many individual scientists take it for granted, but one should be careful to distinguish between science (as an institution) and the opinions and beliefs of individual scientists.

Science claims that life can arise from inorganic matter through natural processes.

No, science as an institution does not make such a claim.  For that matter, most individual scientists don’t make that claim.  Most will say that the question of abiogenesis is unsolved and not likely to be solved soon.

Principle 01: Nothing comes from nothing or “ex nihilo nihil”.

WTF?  Is this about the origins of life, or is it about the origins of the universe?  Can you at least try to remember which nonsense argument you are using?

Principle 02: “Of causality”, if an effect E entirely comes from a cause C any thing x belonging or referenced to E has a causative counterpart in C.

Back here in the real world, everything affects everything.  Nothing entirely comes from a single cause.  There might be dominant causes, but never single isolated causes.  Biological systems exist in this real world of complex interrelated causes.  I’m inclined to doubt that there would be any biological creature in a world of isolated causes and simplistic causality.

Definition 01: “Symbol”, a thing referencing something else. Examples: (1) a circle drawn on a piece of paper may symbolize the sun; (2) the chemical symbol CGU (the molecular sequence cytosine / guanine / uracil) references arginine amino acid in the genomic language; (3) the word “horse” symbolizes the “Equus ferus caballus”.

Sigh!  There are no such things as symbols.  We can say that X symbolizes Y to agent Z.  But we cannot say simply that X symbolizes Y.  Symbolic representation is a matter of the intentions of an agent, and is not a purely physical matter.  For sure, we often talk of symbols without mentioning the agent.  But there is always an underlying implicit agreement among the agent or agents for whom the symbol represents something.
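The agent-relative nature of the codon “symbol” is easy to make concrete.  The snippet below is a sketch, not biochemistry: the mapping from CGU to arginine lives in a lookup table that we construct (only a few entries of the standard RNA codon table are shown).  The chemistry runs without consulting any table; the “symbolizing” is in our description of it.

```python
# A fragment of the standard RNA codon table, as a table WE build.
# The cell's chemistry does not look anything up; the "reference"
# relation exists in our bookkeeping, not in the molecules.

CODON_TABLE = {
    "CGU": "Arg",   # arginine -- the example from niwrad's post
    "GCU": "Ala",   # alanine
    "UGG": "Trp",   # tryptophan
    "UAA": "Stop",  # a stop codon
}

def translate(codon):
    """Map an RNA codon to its amino-acid abbreviation, per our table."""
    return CODON_TABLE[codon]

print(translate("CGU"))  # -> "Arg"
```

The dictionary is the agent’s artifact: change the table and the “symbol” changes, while the chemistry stays exactly the same.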

Definition 02: “Symbolic processing” is process implying choices of symbols and operations on them. The basic rules of symbolic processing are contingent and arbitrary and as such are not constrained by natural laws.

This should already refute niwrad’s argument.  If symbolic processing is contingent and arbitrary, and not constrained by natural laws, then it exists only in the mind of an agent who uses that kind of description.  Whether or not abiogenesis is possible should be independent of what agents happen to be thinking.

Definition 05: Turing Machine (TM), abstract formalism composed of a finite state machine (FSM) (containing a table of instructions) + devices able to read / write symbols of an alphabet on one or more tapes (memories).

Fair enough.  But let’s remember that Turing machines do not actually exist in the real world, and could not exist.  They are theoretical models for computation.  They are not physical devices.
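The point that a TM is a formalism rather than a device can be made concrete.  Here is a minimal sketch of a TM simulator; the machine it runs (the state names and transition table) is a toy invented for illustration, a unary “append one mark” machine.

```python
# Minimal Turing machine simulator (a sketch, not a physical device).
# A table maps (state, symbol) -> (symbol to write, move L/R, next state).

def run_tm(table, tape, state="start", blank="_", max_steps=1000):
    """Run a TM and return the non-blank portion of the tape."""
    tape = dict(enumerate(tape))  # sparse, unbounded tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Toy machine: scan right past the marks, append one more mark, halt.
table = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}

print(run_tm(table, "111"))  # -> "1111"
```

Note that the tape here is an unbounded dictionary.  That unboundedness is exactly what no physical device can supply, which is one reason a TM remains a theoretical model rather than a buildable machine.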

A Turing Machine is the archetype of computation based on instructions.

No, that’s wrong.  A TM is a theoretical model, not an archetype.  Let’s remember that computation existed long before the TM was defined.  If there can be said to be an archetype for computation, we find it in the practices of humans doing computation, whether with an abacus or with pencil and paper or with some similar means.

Definition 06: “Physical computer”, a physical implementation of an abstract formalism of computation. It can be mechanical, electronic, chemical… It is an arrangement of atoms (hardware) that works out a computation.

We need to understand that a computer is something that is seen by cognitive agents as working out a computation.  What the computer does is entirely physical, typically involving electrical and other forms of activity.  There are no symbols in the computer, other than on the manufacturer’s label.  We, as agents, see the computer as doing computation.  But it is computation only by virtue of how we choose to interpret the physical activity of the computer.

Principle 03: Formalism > Physicality (F > P) [2], formalism overarches physicality, has existence in reality and determines its physical implementations.

This is surely a high level of nonsense.  Before abiogenesis occurred (whether naturally or otherwise), there was only physicality.  The formalism comes from agents.

A consequence is that implementation has limits directly related to and implied by formalism.

And here we see the author using that high level of nonsense as the basis for his bogus argument.  But we only have to look at history to see the evident mistake.  In particular, there are many instances in science, where the observed behavior of the physical world was not constrained by the formalism.  So scientists invented new formalisms, leading to important advances in science.

The key point is that the impossibility of certain formalisms implies the impossibility of the related physical implementations. Abstractness matters. It drives matter.

Weird!  Totally weird!  If niwrad thinks that, then why is he bothering to present his argument?  Maybe he should just assume that he is a figment of his own imagination.

According to modern science the universe can be considered a system that computes events according to the physical laws. According to Gregory Chaitin “the world is a giant computer”, “a scientific theory is a computer program that calculates the observations”.

Once again, “according to modern science” is just wrong.  Some individual scientists use those kinds of models.  Science, as an institution, does not dictate that we should use particular models.  Moreover, Chaitin is a mathematician rather than a physicist.  He uses an idealized theoretical model because, as a mathematician, he is interested in such models.

Later in the post, niwrad makes some arguments based on Shannon’s notion of information.  Shannon information and Chaitin information are not the same thing.  You cannot legitimately start your argument with one, then switch to the other.
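The difference is easy to state: Shannon information is defined relative to a probability distribution and is straightforward to compute, while Chaitin (Kolmogorov) information is the length of the shortest program that produces a string, and is uncomputable in general.  A short sketch of how the two can pull apart:

```python
# Shannon entropy is computable from a distribution; Chaitin/Kolmogorov
# complexity (shortest-program length) is not computable in general.
# The two measures can disagree badly about the same string.

from collections import Counter
from math import log2

def empirical_entropy(s):
    """Per-symbol Shannon entropy of the string's empirical distribution."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# "ab" repeated is trivially compressible (a tiny program prints it, so its
# Chaitin complexity is low), yet its empirical per-symbol entropy is a full
# 1 bit -- the same as a fair-coin random string over the same alphabet.
print(empirical_entropy("ab" * 20))  # -> 1.0
```

So an argument that starts from one measure and quietly switches to the other is not measuring one consistent thing.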

In summary

I think I have made my point.  The argument is full of specious assumptions.  Perhaps this form of argumentation is common in religious apologetics.  However, if the ID proponents want to persuade us that theirs is a scientific program, and not just “creationism in a cheap tuxedo”, then they need to avoid this kind of nonsense.

18 Comments to “On natural abiogenesis – give that man a PhD”

  1. What is your definition of the word “agent”? It seems you are avoiding most of the crucial details by referring to “agent” and not elaborating on what “agent” means or how we tell what an agent is. Can a computer be an agent?

    Like

    • The point is that niwrad is taking what is observer relative, and treating it as if it were absolute. And he is doing this while discussing an historical time when there were no observers. Presumably, as a theist, he is assuming a big observer in the sky. But even a theist should not build that assumption into an argument that purports to demonstrate evidence for that big observer in the sky. It makes the argument circular.

      Can a computer be an agent?

      That’s what AI is supposed to be investigating. So let’s call that an unsettled issue.

      Like

  2. Three YouTube videos EVERY Atheist, with a sense of humor, should watch

    Like

    • I guess I don’t have that kind of sense of humor. They just seemed silly to me.

      I allowed your comment through moderation, anyway. However, it really doesn’t contribute to a discussion of the topic.

      Like

  3. A computer requires an agency so any and all actions trace back to him/her/them. Meyer says that in several of his writings, including “Signature in the Cell”.

    Like

  4. “Science claims that life can arise from inorganic matter through natural processes.”

    If anything, many scientists claim that life could POSSIBLY have arisen from ORGANIC matter through natural processes.
    I don’t think I’ve seen any theories stating that life evolved from inorganic matter, as all life is organic (carbon based).
    There is evidence that amino acids can be formed with nothing but several gases (including water, methane, nitrogen, etc.) and a form of electric spark or radiation. These things would have been present on earth many years ago (those gases, along with lightning (spark) or radiation (sunlight)). Once simple amino acids formed, many theorize that others could have developed as well through similar processes until the first proteins and finally the first protocell came to be. DNA would have evolved through similar means, with evolution through natural selection as the guiding process allowing the complex refinement to occur — perhaps with low probabilities, BUT over several billion years, increasing the likelihood of such interactions taking place. As I’ve heard before, if you give monkeys with typewriters enough time, despite probability, they will write coherent plays like those of Shakespeare. We are asking for a lot less when it comes to the gradual development of DNA and amino acids (in my opinion).

    Like

    • If anything, many scientists claim that life could POSSIBLY have arisen from ORGANIC matter through natural processes.

      You are disagreeing with niwrad (the UD poster), on the meaning of “organic”. I’m not sure that’s worth arguing.

      My reading of your comment is that you are expressing your current opinion, but you are not actually making a truth claim. And that’s part of where I see niwrad as going wrong. He is not distinguishing between opinions and truth claims.

      Like

      • “You are disagreeing with niwrad (the UD poster), on the meaning of “organic”. I’m not sure that’s worth arguing.”

        I think it’s important to correct any errors so we can better evaluate the rest of the UD poster’s comments. Organic is by definition “carbon-based”. If this person is incorrect in some of the words they are using, it demonstrates that they are unfamiliar with certain terminology and thus may be unfamiliar with other terms. Confusing terms as basic as inorganic vs. organic demonstrates a lack of credibility and a lack of being learned within the topics of chemistry and biology, which are requirements for discussing details of abiogenesis. I know it’s not the most important thing to focus on, but it still matters.

        It also depends on what they mean by their statement “Science claims that life can arise from inorganic matter through natural processes”. I highly doubt this is the case, but if they actually meant that organic matter (carbon-based) originally came from inorganic matter (non-carbon-based), and thus life came from inorganic matter (in the sense of precedence and causality), then I may at least agree that there isn’t a misuse of terminology. We could go further and say that all carbon atoms came from less dense elements due to the atomic condensation that occurred after the big bang — whereby every element originally came from hydrogen (inorganic) and thus all life came from inorganic matter through natural processes — that is, over time inorganic matter (hydrogen) condensed into higher mass elements and eventually carbon, as well as all other elements needed for organic life (nitrogen, oxygen, phosphorus, etc.). If the commenter meant this, then I’d say they were correct in some sense. I highly doubt this was their intention, and I don’t need to rationalize for them, but I thought it relevant to bring up this hypothetical inference (of the UD poster) which would make their statement correct.

        Hammering out simple definitions like organic vs. inorganic helps to keep a better hold on what they mean by their statement, so we can better analyze/criticize what they said. I don’t have any doubts that their intention was mistaken — that is, they probably failed to see their error in confusing inorganic with organic (along with many other concepts I’m sure).

        You are right however regarding the need for that UD poster to separate opinions from truth claims. That poster needs to be more cautious in that regard as well as regarding what you mentioned: the need to differentiate positions held by “science” (i.e. the scientific consensus based on evidence) versus the positions held by individual scientists which may be theoretical in nature, subjective, and thus far from what “science” or “the science” suggests.

        Like

        • I think it’s important to correct any errors so we can better evaluate the rest of the UD poster’s comments. Organic is by definition “carbon-based”.

          I think you are being a bit too fussy there. The post by niwrad was not a technical report, so should be expected to follow common usage rather than technical definitions. The Collins dictionary gives your definition as only its 4th meaning for “organic”. Other online dictionaries seem similar.

          Apart from that point, we pretty much agree.

          Like

          • “The Collins dictionary gives your definition as only its 4th meaning for “organic” ”

            Actually it gives it as the second definition which says a lot. When you look up “inorganic” in the same Collins dictionary, it mentions “…compounds that do not contain carbon” as its second definition as well. I think that understanding and agreeing on definitions is a necessary foundation for any argument. It may seem that I’m being a bit too fussy but I think that from time to time small details that are missed can mean the difference between an argument being sound or being completely ridiculous and ungrounded.
            Anyways as you said, apart from this, we are in agreement.

            ” Can a computer be an agent? ” (comment from Jeffrey)
            “That’s what AI is supposed to be investigating. So let’s call that an unsettled issue.”

            I suppose this depends on how we define “agent”. Does an agent have to be carbon-based, human, etc.? I don’t think so. How do you define agent? Does it just have to make decisions or produce outputs (interact with the environment around it) based on inputs (i.e. like a human does)? If this is the requirement, then I think that a computer could be considered an agent. It all depends on how we define things. Is an agent an entity that can perform an action?

            I’ll compare a hypothetical robot to a human (and I’m sure we have no problem labeling the human as an “agent”). If I program a robot to search for a battery (based on an imperfect pattern recognition program) when its battery gets below a certain level, and then, upon finding said battery, connect the battery to itself such that it can continue performing this search function among other functions, wouldn’t this robot be considered an entity with a desire from time to time (the program inclining it to search for batteries when below a certain level, analogous to a human seeking food when the brain signals hunger)? Wouldn’t the desire (when it arose) cause it to perform a certain behavior (seek batteries), and wouldn’t the robot also have beliefs (it will look for batteries and, depending on the pattern recognition result, will believe that it has found batteries or not found them)?

            I’m just trying to find a common definition for “agent”, as I think that a robot would qualify. I basically think of an agent as “a person or THING that acts or has the power to act” (Dictionary, Random House). I’m interested in hearing your opinion on what an “agent” should be. The AI community certainly may not have settled whether a computer can be considered an agent or not, but we may be able to answer the question ourselves.

            Like

          • How do you define agent?

            It probably doesn’t have an easy definition. It’s a curious word. In ordinary language, an agent is somebody who carries out an action for me on my behalf. But the usage in philosophy is different, and seems to require an agent to carry out actions on its own behalf.

            That’s the issue in AI right there. Does a computer act on its own behalf, or does it merely do what it is programmed to do? There’s a lot of talk about “autonomous agents” in the AI literature, but it seems that when we design robots, we want them to do what we program them to do. That is to say, we want our “autonomous agents” to be neither autonomous nor agents. Or, said differently, we want them to be agents in the ordinary language sense of “agent” but not in the philosophical sense of “agent”.

            For the present, I think it fair to say that whether a computer can be an agent (in the philosophical sense) is still an unsettled question. Many AI researchers think it just a matter of the right programming. The critics of AI believe that something is missing in the AI picture. I tend to agree with the critics.

            Like

          • I think that humans are programmed just as robots or computers are. The difference is how they are programmed, and how complex that programming is. If we say that a robot may not be an agent because we’ve programmed it to do what it does, by that rationale, we would also cease to be an agent because our actions are a result of different levels of programming as well (indoctrination, genetics, causal chain, etc.).

            “Does a computer act on its own behalf, or does it merely do what it is programmed to do.”

            Many would say that we don’t act on our own behalf, and are merely doing what we are programmed to do, so this sounds like a question that needs revision. Perhaps one could ask: “Does a computer act on its own behalf based on “this” type of programming or “our” type of programming?”. Or perhaps they need to better define what is required to be considered an agent. Based on what I’ve read (definitions and other texts), it seems that a robot can meet the requirement of an “agent”.

            Like

          • I think that humans are programmed just as robots or computers are.

            Okay, some people have that view. I disagree, but I cannot prove it wrong, for it is such a vague claim.

            There’s an alternative way of looking at things. We take humans to be cognitive agents. Perhaps we cannot precisely define what that means. But we presumably make that judgment about humans based on our observations of how they act (or behave). When we observe how computers and robots act and behave, we are inclined to deny that they are agents. We are likely to see more evidence of agency in the behavior of dogs than we see in the behavior of robots.

            The problem for AI researchers is to design robots that we will unhesitatingly accept as demonstrating agency. This goal has not yet been achieved. It is far from certain that it ever will be achieved.

            Like

          • “Okay, some people have that view. I disagree, but I cannot prove it wrong, for it is such a vague claim.”

            Then you may be able to expand on it if you share your definition of “programmed” or “programming”. Then we can see if they fit and also analyze your definition of those words.

            “There’s an alternative way of looking at things. We take humans to be cognitive agents. Perhaps we cannot precisely define what that means.”

            We need to be able to precisely define what that means if we are to use the concept successfully and analyze any claims using the term.

            “When we observe how computers and robots act and behave, we are inclined to deny that they are agents. We are likely to see more evidence of agency in the behavior of dogs than we see in the behavior of robots.”

            Why do you think this is? What if we compare a robot to an ant or some other “less complex” animal? I think we can very easily compare a robot to an ant or some insect. If you think that there are marked differences between them whereby you think one can be considered a cognitive agent, and not the other, then why? We all have reasons for why we claim something.

            “The problem for AI researchers is to design robots that we will unhesitatingly accept as demonstrating agency”

            I’m still unsure where this hesitancy comes from. If there’s any, it seems to be related to passing the “Turing Test”, which is a ridiculous requirement. An agent doesn’t need to pass as human, it only needs to do what an agent is defined to do. In philosophy we define agent as “an entity capable of action”. In the AI community, the broad definition describes an agent as an “autonomous entity which observes and acts upon an environment and directs its activity towards achieving goals”. This is the definition given by Russell and Norvig.

            A goal is a “desired result”. To say that a robot can’t be programmed to have a goal or objective of some kind, and also to observe (measure or detect in some way) and act (perform a physical output of some kind) on an environment is an exercise of pure denial. So it seems that the definition needs to be changed such that robots (that are easy to conjure) are no longer within the definition. Or people need to better define their goal as simply having a machine pass the Turing test, which is above and beyond what should be required for an “agent”. If this was the requirement, that would be quite an anthropocentric view in my opinion.

            Like

          • We need to be able to precisely define what that means if we are to use the concept successfully and analyze any claims using the term.

            If the successful use of language depended on having precise definitions, then there would be no such thing as language. Language, including meaning, is prior to any possibility of there being definitions.

            What if we compare a robot to an ant or some other “less complex” animal?

            Some people do think of ants (or at least of worker ants) as being somewhat robotic. I have not yet made up my mind on that.

            An agent doesn’t need to pass as human, it only needs to do what an agent is defined to do.

            But now we are back at the problem that we can’t actually define what we mean by “agent.”

            In philosophy we define agent as “an entity capable of action”.

            We don’t have a good definition of “entity” nor of “action.” So that doesn’t really define “agent”. It just leaves us in a world of circular definitions, where nothing is really defined.

            A goal is a “desired result”.

            As best I can tell, a robot does not actually have desires.

            Like

  5. “If the successful use of language depended on having precise definitions, then there would be no such thing as language. Language, including meaning, is prior to any possibility of there being definitions. ”

    You are talking about the origin of language which has nothing to do with the point I made. What I said was that “we need to be able to precisely define what that means if we are to use the concept successfully AND analyze any claims using the term”.

    If we don’t know how someone is defining a word, then we can’t make any judgments or analyze any claims that used the word in question. Complex concepts such as “cognitive agent” ABSOLUTELY require a definition in order to properly understand what someone means by the term. Simpler concepts like “human” or “apple” don’t require them (even though they can help), because they are simple enough concepts that we can associate just by pointing to the object and saying the word. Try doing that with “cognitive agent” to get a successful association that excludes irrelevant inferences, and it would be impossible in my opinion.

    That’s the beauty of language. Written definitions weren’t required to initially create language, but once it was created and a plethora of relatively easily associated concepts emerged, more complex concepts could emerge that DO require definitions to understand successfully. Take the term “cognitive agent” for example. This is a term that requires definitions because the meaning is too complex to figure out through non-verbal means. We are having an issue even defining the term, which tells me that trying to infer what it means without the use of definitions is even more futile.

    “Some people do think of ants (or at least of worker ants) as being somewhat robotic. I have not yet made up my mind on that.”

    Why haven’t you made up your mind on it? What’s holding you back from making a decision here? All you need to do is define what you’re trying to compare, and then compare. You need to first ask yourself what makes a robot “robotic”?

    “But now we are back at the problem that we can’t actually define what we mean by “agent.” ”

    I’m curious how you define it, not others.

    “We don’t have a good definition of “entity” nor of “action.” So that doesn’t really define “agent”. It just leaves us in a world of circular definitions, where nothing is really defined. ”

    We can define “entity” as a being, object, device, etc. We can define “action” as doing something to achieve an aim. If you disagree with these definitions, then what are YOUR definitions of these words. That’s all I’m interested in here.

    “As best I can tell, a robot does not actually have desires.”

    Why not? How do you define “desire”? A desire is basically a goal, objective, impulse to do something, etc. A robot is programmed to have an objective/impulse to do something. If you disagree then you have to have a definition of desire that differs from the common one. What might that be if yours differs? You said “as best I can tell”, so you must be using some definitions to make that claim. How do you define “desire”?

    Like

    • We can define “entity” as a being, object, device, etc.

      You would first need to define “being”, “object”, “device”. The circularity never ends. Language has to work without reliance on definitions.

      Like

      • “You would first need to define “being”, “object”, “device”. The circularity never ends.”

        I would say anything that can be programmed is a being, object or device. We could further refine this definition and say that it should have some kind of a data processor (brain, CPU, etc.).

        “Language has to work without reliance on definitions.”

        Simple language, including language as it first originated, does work without reliance on definitions, but complex language requires them for clarity and for simultaneously understanding a large number of relatively simple concepts.

        Like