Random confusion

by Neil Rickert

In a recent post “Coordinated Complexity — the key to refuting postdiction and single target objections” at the Uncommon Descent blog, scordova attempts to address some of the objections to the probabilistic arguments used by ID proponents.  He gives an example of the kind of objection that he will address:

The opponents of ID argue something along the lines: “take a deck of cards, randomly shuffle it, the probability of any given sequence occurring is 1 out of 52 factorial or about 8×10^67 — Improbable things happen all the time, it doesn’t imply intelligent design.”
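As an aside, the arithmetic in that quoted objection is easy to verify; here is a one-line check (Python, purely for illustration):

```python
import math

# Number of distinct orderings of a standard 52-card deck.
print(f"52! = {math.factorial(52):.3e}")  # prints 52! = 8.066e+67
```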

Unfortunately, that post at UD fails to answer the criticism, and only further illustrates the confusion that is so common in ID thinking.

The post starts by pointing to a real-life example of cheating at the card table:

Ah, but what if cards dealt from one random shuffle are repeated by another shuffle, would you suspect Intelligent Design? A case involving this is reported in the FBI website: House of Cards

This is presented as an example of “an intelligently designed shuffle”.  So, how did the FBI use probabilities to detect this “intelligently designed shuffle”?

The evidence of cheating was confirmed by videotape surveillance

Yes, that’s right.  The FBI found the actual “intelligent designer” and videotaped the design procedure in action.  The statistical evidence might have been enough to suggest that something was amiss.  But when it comes to proving intelligent design, something more than suspicion is required.

For the sake of discussion, let’s suppose that there was no videotaping, and that statistics was the only means available.  How might the FBI proceed?  Well, they could start by designing a statistical experiment.  They could begin with the assumption that the blackjack game and the dealing were all done fairly, in accordance with normal casino procedures.  That would give them a well defined sample space: the set of all possible card deals from a newly shuffled deck at the blackjack table.  And they could assume that all possible sequences of cards in the shuffled deck were equally probable, since that is what a fair shuffle is supposed to achieve.  Then, using knowledge of blackjack and of the expected probabilities, they could predict roughly how much the alleged cheaters should win.  If their actual winnings substantially exceeded expectations, they could compute the probability of that happening by chance alone.
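As a rough sketch of what such a test might look like, here is a small calculation.  Every specific number in it (the per-hand win probability under fair play, the number of hands, the observed win count) is hypothetical, invented purely for illustration:

```python
import math

# All numbers here are hypothetical, invented for illustration only.
p_win = 0.49        # assumed per-hand win probability under fair play
n_hands = 10_000    # assumed number of hands observed
observed = 5_300    # assumed number of hands the suspects actually won

# Under the fair-play (null) model the win count is Binomial(n_hands,
# p_win); use the normal approximation to get a one-sided tail area.
mean = n_hands * p_win
sd = math.sqrt(n_hands * p_win * (1 - p_win))
z = (observed - mean) / sd

# Probability of winning at least this often by chance alone.
p_value = 0.5 * math.erfc(z / math.sqrt(2))
print(f"z = {z:.1f}, P(at least {observed} wins | fair play) = {p_value:.2e}")
```

With these made-up numbers, z comes out near 8 and the tail probability is on the order of 10^-16.  That is the shape of a legitimate statistical argument: a well defined null model, a well defined sample space, and a computed probability of the observed outcome under that model.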

With that kind of evidence, they might be able to persuade a jury.  However, the defense attorney would surely argue that it could still have been chance.  And technically he would be correct, though if the chance is small enough he might not have a persuasive case.

To even make this kind of statistical case, the FBI had to know who the supposed intelligent designers (the accused cheaters) were, since it would be their odds of winning that would be part of the data in this statistical experiment.

The post at UD now tries to show how this kind of reasoning can be applied to ID:

For example a given password of 10 letters will have an improbability of 1 out of 26^10. If we found a random string of letters (like say scrabble pieces) lying in a box, it might be rather pointless to use probability to argue the pattern is designed merely because its improbability is 1 out of 26^10, however if we found a computer system protected by a login-password that consists of 10 letters, the improbability of that system existing in the first place is at least 1 out of 26^10 and actually far more remote since the system that would implement the password protection is substantially more complex than the password itself (and this is an understatement).
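Incidentally (this is my own arithmetic, not the UD post’s), the quoted figure is easy to put in perspective: 26^10 is about 1.4 × 10^14, which is minuscule next to the 52! ≈ 8 × 10^67 from the card-shuffle example:

```python
import math

# My arithmetic, not the UD post's: the two quoted figures side by side.
print(f"26^10 = {26 ** 10:.3e}")            # about 1.412e+14
print(f"52!   = {math.factorial(52):.3e}")  # about 8.066e+67
```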

I am having great difficulty trying to understand what the post is talking about when it comments on the improbability of a computer system.  What is the sample space?  What is the experimental model for this question?  Is the sample space all possible things that might be called computers by future generations of humans?  How could we possibly work out probabilities?  What are we testing?  In the example FBI case, it was known who the intelligent designers were, and that knowledge was part of the computation of expectations and probabilities.  But in this 26^10 argument, no actual intelligent designer is identified, and there is no obvious way that I can see to design a valid statistical experiment.

There are a number of comments to this UD post, some of them congratulating the post author for supposedly debunking a strawman argument made by critics of ID.

Sorry, UD, but that is not at all what happened.  A better assessment would be that the ID proponents at UD have put themselves on record as being quite clueless about statistical reasoning, and as being altogether too willing to jump to dubious conclusions.

2 Comments to “Random confusion”

  1. My favorite bit of creationist information silliness occurred when Dembski, not content with ordinary bogus probability calculations, made remarks that implied that the fitness function that might create the phrase “METHINKS IT IS LIKE A WEASEL” by an evolutionary algorithm would have more than 10^40 bits of information in it, despite the fact that the phrase itself has 28 characters.
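For context, a quick back-of-envelope check (my own, not the commenter’s): a 28-character phrase over a 27-symbol alphabet (26 letters plus the space) carries about 28 × log2(27) ≈ 133 bits, nowhere near 10^40:

```python
import math

# Back-of-envelope: information content of a 28-character phrase over a
# 27-symbol alphabet (26 letters plus space), symbols equally likely.
bits = 28 * math.log2(27)
print(f"about {bits:.0f} bits")  # about 133 bits, nowhere near 10^40
```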

