Previous Month | RSS/XML | Current | Next Month

May 29th, 2006 (Permalink)

That Old Black Magic

If you have any doubt that logical fallacies such as post hoc can lead to dangerous and even deadly mistakes, read the following story about cases of bird flu in a family in an Indonesian village:

[M]any people who were never scared of doctors before are now terrified of them. … Indonesian officials reported that at least one patient had fled the hospital to seek traditional medicine and was later caught and returned. In the event H5N1 should mutate into a form easily passed among humans, such behavior would likely spread the illness further―a serious worry for experts who fear the possibility of a bird flu pandemic. … [S]ome villagers began associating Tamiflu, the chief drug to treat bird flu, with death because members of the infected family―most of whom were given the medicine too late to help―were dying after taking the pills.

Source: Margie Mason, "Indonesian Villagers Blame Magic, Not Flu", Associated Press, 5/29/2006

May 28th, 2006 (Permalink)

Shermer on the SHAM Scam

Michael Shermer's "Skeptic" column in this month's issue of Scientific American concerns the "Self-Help and Actualization Movement", or "SHAM" for short. The column is partly based on a new book, Sham: How the Self-Help Movement Made America Helpless by Steve Salerno, and it raises an issue involving appeals to expert opinion that doesn't get enough attention, namely, that in some subjects there are no experts. So, anyone who appeals to an opinion in such a subject commits a fallacious appeal to authority. Most, if not all, "motivational speakers" fall into the non-expert category, so why do some people keep going back to them?

According to Salerno, no scientific evidence indicates that any of the countless SHAM techniques―from fire walking to 12-stepping―works better than doing something else or even doing nothing. The law of large numbers means that given the millions of people who have tried SHAMs, inevitably some will improve. As with alternative-medicine nostrums, the body naturally heals itself and whatever the patient was doing to help gets the credit.

Thus, our old friends post hoc and the regression fallacy help to keep the motivational gurus raking in the money.
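The regression effect that Shermer describes can be illustrated with a short simulation. This is a minimal sketch with made-up numbers, not anything drawn from Salerno or Shermer: each person has a stable baseline of well-being plus day-to-day noise, people sign up for a program during an unusually bad stretch, and the program itself does nothing.

```python
import random

random.seed(42)

N = 10_000  # hypothetical number of people who try a self-help program
improved = 0

def wellbeing(baseline):
    # A person's measured well-being: stable baseline plus daily noise.
    return baseline + random.gauss(0, 10)

for _ in range(N):
    baseline = random.gauss(50, 5)
    # People tend to seek help at a low point: take the worst of a week.
    before = min(wellbeing(baseline) for _ in range(7))
    # Measured again on an ordinary day weeks later; the program did nothing.
    after = wellbeing(baseline)
    if after > before:
        improved += 1

print(improved / N)
```

In this sketch the great majority of people "improve" even though the program is useless, simply because they started from an unrepresentatively bad measurement, which is regression to the mean doing the guru's work for him.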

Source: Michael Shermer, "SHAM Scam", Scientific American, 5/2006

May 21st, 2006 (Permalink)


Tim Chase, a student who is writing a paper on logical fallacies, sends in the following questions:

Q: Do you think that fallacies are harmful enough that they should be censored? What do you think might be the benefits/harm from censoring fallacious argument?

A: It would be very dangerous to give anyone the power to censor argumentation. While one might hope to improve the logical quality of arguments by forbidding fallacious ones, the temptation to use such power to suppress certain views would be difficult for the censor to resist. It is easy for people with strong feelings about an issue to convince themselves that all of the arguments on the other side are fallacious, whereas none on their own side are. This common psychological bias is a type of one-sidedness that would make the power to censor dangerous in the hands of anyone with a passionate commitment to any side of an issue.

Censorship can itself be a source of one-sidedness, because censors usually suppress evidence on all but their own side. Censorship and propaganda usually go together, because propaganda is more effective when it is unchallenged, and one-sidedness is a common propaganda technique―it is sometimes called "card stacking" in the context of propaganda.

Moreover, one-sidedness is one of the most difficult fallacies to guard against, because it involves suppressing contrary evidence. How can you know when an arguer simply leaves out an important piece of evidence because it would hurt his case? If you don't happen to already know it, then you will probably have to hear it from someone on the other side of the issue.

One remedy for one-sidedness is a free marketplace of argumentation, so that the suppressed evidence can be obtained elsewhere. But what if censors have the power to suppress all such evidence, making it unobtainable? By attempting to get rid of fallacious arguments through censorship, one may simply suppress some fallacies at the cost of unleashing another, more dangerous one. For this reason, I think that the remedy for fallacious argumentation is not censorship, but for people to educate themselves to recognize fallacies and develop a resistance to them.

May 19th, 2006 (Permalink)

Bush Contextomy of the Day

Here's Slate's latest "Bushism" together with its context:

Bushism:

That's George Washington, the first president, of course. The interesting thing about him is that I read three―three or four books about him last year. Isn't that interesting?

Context:

That's George Washington, the first President, of course. The interesting thing about him is that I read three―three or four books about him last year. Isn't that interesting? People say, so what? Well, here's the "so what." You never know what your history is going to be like until long after you're gone. If they're still analyzing the presidency of George Washington―(laughter.) So Presidents shouldn't worry about the history. You just can't. You do what you think is right, and if you're thinking big enough, that history will eventually prove you right or wrong. But you won't know in the short-term.

As usual, Eugene Volokh is on the case and he provides the commentary, so that The Fallacy Files doesn't have to:

Now it strikes me as a little odd that Slate, one of the pioneers of online journalism, doesn't take advantage of one of the great advantages of online journalism over offline journalism―the ability to link to the original sources (either ones that are already online or ones that are put up on the Web by the journal itself), so that readers can see the context for themselves.


Update (5/23/2006): Slate is at it again with two more "Bushisms", both taken from the same interview as the one above, and again with no links to the source! As usual, Eugene Volokh has analyzed them thoroughly, so read both in full.

Sources: Eugene Volokh, The Volokh Conspiracy

May 14th, 2006 (Permalink)

Mother's Pay Day

There's a lot of bunk in the study, released to coincide with Mother's Day, claiming that a working mother's housework is worth $85,876 a year, while a stay-at-home mother's housework is worth $134,121 a year, and both Carl "The Numbers Guy" Bialik and Mark "The Mystery Pollster" Blumenthal debunk parts of it. Bialik criticizes the ways in which these estimates were inflated; and Blumenthal, naturally, concentrates on the self-selected, and therefore probably biased, sample that was used.

It may be an overreaction to criticize this study further: it is just a Mother's Day publicity stunt, and the salary estimates are so obviously inflated that it's unlikely anyone would take them seriously. However, I will add that the figures are absurdly precise, given that they are at best estimates, and not very good ones at that.


May 12th, 2006 (Permalink)

Blurbwatch: Hoot

Blurb Context
"Hoot" may be warm and fuzzy with its adorable owls, triumphant kids and inviting Florida groves. But its forced, innocuous humor is unlikely to amuse anyone but the very young―and the extremely forgiving.


May 7th, 2006 (Permalink)


Gene Marker May Show Prostate Cancer Risk

Who's Gene Marker, and why do I need to know about his prostate?

'Hardest Rock' Slows Australia Mine Rescue

Why don't they just turn the volume down?

May 6th, 2006 (Permalink)

Publication Bias

Ben Goldacre's latest "Bad Science" column discusses an example of publication bias, which is the tendency for scientific research that shows positive results to get published, while negative or statistically insignificant results are filed away―which is why it is sometimes called "the file drawer effect". This might not seem like a bad thing, especially in the case of research that seems to get no significant results. Editors of scientific journals naturally want to publish attention-getting articles that will be read, as opposed to dull reports of no results.

However, the standard for statistical significance in most scientific research is the 95% confidence level, which means that about 1 in 20 studies of a treatment with no real effect can be expected to get "significant" results just by chance. For this reason, it's important to have some idea of the total number of studies done of a particular medical treatment or drug, including those which showed no significant result. Such studies may be boring, but they are part of the research context needed to judge whether 95% confidence is confident enough.
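The 1-in-20 figure is easy to check with a short simulation; the numbers here are hypothetical. Under a true null hypothesis, a well-calibrated p-value is uniformly distributed between 0 and 1, so testing at the conventional 5% level flags a "significant" result about 5% of the time even when the treatment does nothing:

```python
import random

random.seed(0)

STUDIES = 10_000  # hypothetical pool of studies of a useless treatment
ALPHA = 0.05      # the conventional 5% significance threshold

# Under the null hypothesis, p-values are uniform on [0, 1], so a study
# crosses the significance threshold with probability ALPHA by chance alone.
false_positives = sum(random.random() < ALPHA for _ in range(STUDIES))

print(false_positives / STUDIES)
```

If only those few hundred chance "successes" reach print while the thousands of null results go in the file drawer, the published record will badly overstate the treatment's effect.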

The specific example that Goldacre discusses involves publication bias in newspapers for the general reader as opposed to scientific journals, but the same principle applies, since newspaper editors have even stronger motives for publishing attention-getting stories than do journal editors:

You might remember the scare stories about mercury fillings from the past two decades: they come around every few years, usually accompanied by a personal anecdote, where fatigue, dizziness and headaches are all vanquished with the removal of the fillings by one visionary dentist. Traditionally these stories conclude with a suggestion that the dental establishment may well be covering up the truth about mercury, and a demand for more research into its safety.

This is post hoc reasoning, which may be strong enough evidence to justify research into the effects, if any, of mercury fillings, but not strong enough to justify any expensive changes in their use. However, the later research which shows no statistically significant evidence to support the alarm doesn't seem to rate column inches in the paper:

Well, the first large scale randomised control trials on the safety of mercury fillings were published just two weeks ago, and I've been waiting to see these hotly awaited results pop up in the newspapers, but nothing doing so far.

He must not have checked the Guardian!

