Thanks to everyone for supporting the Fallacy Files, whether by making purchases through Amazon, visiting advertisers, sending in questions and comments, or just reading it! You help to keep this website and weblog going strong and growing stronger.
The poor ad writer for the new movie "Eragon" was faced with a problem: its Metacritic score is only 38 out of 100, denoting "generally negative reviews", and its Tomatometer rating is a "rotten" 13%. How do you come up with some good blurbs? Quoting out of context to the rescue! The critics didn't like the movie much, but they were impressed by the computer-animated dragon. So, the ad writer quoted the favorable comments about the dragon out of context, as if they applied to the movie as a whole:
"A MAGNIFICENT CREATION!"
As flimsy as it is…"Eragon" has one saving grace: Its first-time director, Stefen Fangmeier, is an Oscar-nominated special-effects wiz…and the visuals are imaginative, especially its signature effect of a boy soaring through the sky on a dragon's back. The dragon itself…is a magnificent creation…
"A FANTASTICAL ADVENTURE."
…[T]he most noteworthy character in Eragon is Saphira [the dragon]. The creature…is the undisputed star of a film that is predictable in its plot, clichéd in its dialogue, but stunning in its cinematography and production design. … It's a pleasant enough fantastical adventure, but it does feel naggingly derivative.
- Ad for "Eragon", The New York Times, 12/29/2006, p. B16.
- William Arnold, "All that's missing are the hobbits", Seattle Post-Intelligencer, 12/15/2006
- Claudia Puig, "Cinematic Quest Falls a Little Short", USA Today, 12/15/2006
- (12/29/2006) The base rate fallacy! As promised, this is a subfallacy of the probabilistic fallacy, the entry for which I've also revised to include Bayes' Theorem. I've discussed the base rate fallacy more than once on this weblog, so I thought that it was about time that it received its own entry.
- I've added a new example to the "Stalking the Wild Fallacy" examples page, this one taken from an episode of comic magicians Penn and Teller's "B.S.!" show. I expected to hear fallacious arguments from the subjects of the shows―and I did!―but I was surprised to hear one from the host who talks! And a formal fallacy, no less! I enjoy watching most of these shows, but their approach seems to be one of fighting fallacies with fallacies. It can be entertaining, but I don't think that mockery and verbal abuse do much to advance the skeptical cause. It makes skeptics―at least as represented by P&T―look unreasonable.
Q: T. Edward Damer has a list of twelve guidelines for communication, and the team of Van Eemeren and Grootendorst has a similar one with ten "commandments". I also notice that the various authors who explain fallacies do not categorize them in the same way. It makes sense to me to categorize them in accordance with the rule of communication they violate. Since the two sets of guidelines overlap to a large degree, they should more or less agree on which fallacies violate them. Do you think this might lead to a more standardized method of handling the "taxonomy" of fallacies?
One thing I've noticed is that when I am engaged in a disagreement and recognize problems of clarity, or see the other side engaging in minimization, there usually turns out to be a fallacy lurking about. Associating the fallacy with the guideline violation really does help me to shut down faulty reasoning quickly, so it seems to me to be a valid method of organizing the hierarchy of fallacies.
A: Lee, you've raised a complicated issue and, as a former professional taxonomist, one that I'm intensely interested in! There's much to be said about it; in fact, too much to be said. So, here are some brief―perhaps too brief―responses:
I don't think that there is only one way to categorize fallacies, since there are different dimensions along which they may be categorized. Moreover, Van Eemeren and Grootendorst's ten rules and Damer's twelve principles are aimed primarily at informal fallacies rather than formal ones, and all of the fallacies discussed in Damer's textbook are informal. In contrast, the Fallacy Files Taxonomy is an attempt to classify all logical fallacies, formal as well as informal.
Formal fallacies are in fact good examples of fallacies categorized according to the rules they violate. This is most obvious in the case of syllogistic fallacies, and the entries in the files for these fallacies actually mention the rule violated. However, other formal fallacies, such as propositional ones, are plausibly classified as misapplications of rules; for instance, affirming the consequent can be seen as a failed attempt at modus ponens. For this reason, I include in the entries for these fallacies validating forms which are similar in structure. However, even in the case of formal fallacies there is no one set of rules for generating fallacies. So, if we take fallacies to be rule violations, what fallacies we get will depend upon what set of rules we start with, and there are multiple starting points.
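The point that affirming the consequent is a failed attempt at modus ponens can be made concrete with a brute-force truth table. The following sketch of mine (not part of the original entry) checks an argument form's validity by searching for a row that makes every premiss true and the conclusion false:

```python
from itertools import product

def valid(premises, conclusion):
    """A two-variable argument form is valid iff no assignment of
    truth-values makes every premise true and the conclusion false."""
    for p, q in product([True, False], repeat=2):
        if all(f(p, q) for f in premises) and not conclusion(p, q):
            return False  # found a counterexample row
    return True

# Material conditional: "if a then b" is false only when a is true and b false.
implies = lambda a, b: (not a) or b

# Modus ponens: from "if p then q" and "p", infer "q" -- valid.
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))  # True

# Affirming the consequent: from "if p then q" and "q", infer "p" -- invalid.
print(valid([lambda p, q: implies(p, q), lambda p, q: q],
            lambda p, q: p))  # False
```

The counterexample row for affirming the consequent is p false, q true: both premisses hold, yet the conclusion fails, which is what makes it a fallacy despite its resemblance to modus ponens.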
Even though V.E. & G.'s approach is narrower than the Taxonomy, they obviously overlap in the case of informal fallacies. An advantage of V.E. & G.'s rules is that some traditional fallacies that don't easily fit into the Taxonomy violate one of V.E. & G.'s rules. For instance, the appeal to force is a difficult fallacy to explain on narrowly logical grounds, since it seems to abandon reason for violence. Since resorting to violence is not an argument at all, it doesn't seem to be a type of fallacious argument. (See the "Exposure" section of the entry for this fallacy for further discussion.) In contrast, appeals to force violate V.E. & G.'s rule 1 against preventing the other side in a debate from having its say, which seems to be a reasonable restriction on the conduct of a discussion.
However, V.E. & G.'s approach is too coarse-grained an analysis for most traditional fallacies. For instance, all inductive fallacies appear to fall under rule 7. Because V.E. & G.'s rules don't consider the internal logical structure of arguments, they cannot discriminate between most types of fallacy. So, the Taxonomy casts a wider net than V.E. & G.'s rules, and its taxonomic mesh is much finer.
Of his twelve principles, Damer uses only four as criteria for categorizing fallacies: relevance, acceptability, sufficient grounds, and rebuttal. This is even coarser than V.E. & G.'s ten, though Damer does include a number of subcategories. However, the subcategories are not based on the criteria, but on the internal structures of the fallacies themselves. Moreover, I don't see much overlap between the two sets of rules, and I think that there are serious problems with two of Damer's criteria:
- Acceptability: I don't accept this as a criterion for logical fallacies. The truth or "acceptability" of premisses is seldom a logical question; instead, it is usually a question for some other science. There is one fallacy, namely, the black-or-white fallacy―which Damer calls "False Alternatives"―which is otherwise difficult to categorize, and fits naturally into this category. Most of the other fallacies that Damer includes in this category could just as well fall elsewhere. It seems like theoretical overkill to have such a broad criterion just to rope in one stray fallacy.
- Rebuttal: I would rebut this criterion in the following way: all of the fallacies listed as violations, except for the Fallacies of Counterevidence, are better understood as violating the relevance criterion. Moreover, the Counterevidence Fallacies seem to be violations of the inductive rule of total evidence. Once again, it seems that a cannon is being used to kill a canary.
The logical and dialectical approaches may currently disagree on some fallacies, but they should eventually supplement and reinforce one another. Other ways of classifying fallacies also need to be developed, in particular the psychological approach. Psychologists have only begun to study how people reason, and how they make mistakes in doing so, and it's likely that a classification of fallacies in terms of their psychological mechanisms will be quite different than either the logical or dialectical categories. In the long run, I expect that there won't be some one way of classifying fallacies, but multiple ways which connect and support each other.
- T. Edward Damer, Attacking Faulty Reasoning: A Practical Guide to Fallacy-Free Arguments (Third Edition) (1995).
- Frans H. Van Eemeren & Rob Grootendorst, "The Pragma-Dialectical Approach to Fallacies", in Fallacies: Classical and Contemporary Readings, Hans V. Hansen & Robert C. Pinto, editors (1995).
Longtime readers of the Fallacy Files may recall that around this time of year I used to post a mocking critique of the famous "Yes, Virginia, there is a Santa Claus" editorial. They may also have noticed that it has yet to make an appearance this year. You won't be seeing it! I used to think that the evidence for old St. Nick was weak, and even dared to deny his existence. However, I have now discovered a proof of Santa Claus' existence. Consider the following sentence, which I will call "sentence S":
S: Either Santa Claus exists or S is false.
Now, either S is true or it's false. But it cannot be false, for the following reasons: S is a disjunction and its second disjunct says that S is false. So, if S is false then its second disjunct is true. But a disjunction with a true disjunct is itself true. So, if S were false then it would be true, which means that S must be true.
Since S is true, its second disjunct, which says that S is false, must itself be false. Thus, S's first disjunct must be true, by disjunctive syllogism. But its first disjunct says that Santa Claus exists. Therefore, Santa Claus does exist! Put out the milk and cookies!
Is this a proof of Santa's existence? Of course, the conclusion goes against common sense, as well as common opinion. But so did Copernicus, Galileo, Einstein, Francis P. Church, and other great thinkers! What, if anything, is wrong with the proof?
Check 'Em Out
- (12/13/2006) The current issue of Skeptic magazine has an article by Charles Lambdin on the supposed "Fallacy of the Golden Mean". Unfortunately, it doesn't appear to be available online, so you'll have to find a paper copy if you want to read it.
According to the article, this is the fallacy of arguing that the truth lies somewhere between two extreme positions, as a sort of "mean", average, or compromise of the available views. Lambdin is in good company in identifying such a fallacy: David Hackett Fischer called it the fallacy of "argument ad temperantiam" (translated: "argument to temperance, or moderation"), as did Madsen Pirie, and Robert Gula discusses it under the names "false mean" and "fallacy of compromise".
However, I'm skeptical myself as to whether this is a logical fallacy, though it may be a mistake of some other type. It seems plausible that―at least in certain areas, such as politics―the truth is often found somewhere between two extremes. No doubt this isn't invariably the case, and it may not be at all true in other areas―such as science. In politics, some people seem to be attracted to simplistic views, which are invariably extreme; we even have a word for them: "extremists". Whenever we have a scale―such as that from complete pacifism to the belief that war is a good thing in itself, or from anarchism to dictatorship―given that the ends of the scale are likely to attract people with extremist tendencies, the truth will probably lie somewhere in between.
To argue that a certain view is right simply because it's somewhere between extremes on some scale is likely to be a very weak form of argument, but that doesn't necessarily mean that it's fallacious. As long as such an argument isn't completely lacking in force, and we don't overestimate its strength, then it may be better than nothing. So, as a type of argument, the appeal to moderation may best be conceived as a rule of thumb. As with other rules of thumb, it would only be useful as a first approximation to the truth, and may be limited in application to specific topics. Thus, to use the argument as if it were conclusive, or to apply it to fields in which it doesn't belong would be a mistake.
Another defect of the article―as of so much writing about fallacies―is the dearth of examples, especially real-life ones as opposed to the concocted kind. Only two possible examples come to my mind, one a story by Raymond Smullyan:
Once upon a time two boys found a cake. One of them said: "Splendid! I will eat the cake." The other one said: "No, that is not fair! We found the cake together, and we should share and share alike; half for you and half for me." The first boy said, "No, I should have the whole cake!" The second said, "No, we should share and share alike; half for you and half for me." The first said, "No, I want the whole cake." The second said, "No, let us share it half and half." Along came an adult who said: "Gentlemen, you shouldn't fight about this; you should compromise. Give him three-quarters of the cake."
This tale shows that it can be unfair or unjust to compromise with extremists such as the first boy: compromise as a way of settling disagreements may in fact encourage extremism by rewarding those who stake out immoderate positions at the expense of those who are more reasonable to start with. In this way, compromise may actually undermine consensus in the long run by driving people farther apart.
Another possible example is Barry Goldwater, who was criticized for being an extremist when he ran for president, and famously replied: "Extremism in the defense of liberty is no vice. And…moderation in the pursuit of justice is no virtue." However, I think what worried people about Goldwater was that he might start a nuclear war with the Russians "in defense of liberty", Lyndon Johnson having stoked such fears with his infamous "daisy" television campaign ad. This was probably an unfair criticism of Goldwater, but surely there are certain extremes, such as nuclear war, that we should not be willing to go to even in the defense of liberty or the pursuit of justice. So, while I think that Goldwater's defense of extremism is eloquent, it doesn't stand scrutiny.
Lambdin's article is listed under the heading "Anatomy of a Fallacy", so this may be a continuing feature. If so, it bears watching.
- David Hackett Fischer, Historians' Fallacies: Toward a Logic of Historical Thought (1970), pp. 296-297.
- Robert J. Gula, Nonsense: A Handbook of Logical Fallacies (2002), p. 102.
- Charles Lambdin, "Anatomy of a Fallacy: Fallacy of the Golden Mean", Skeptic, V. 12, No. 4 (2006), p. 10.
- Madsen Pirie, The Book of the Fallacy: A Training Manual for Intellectual Subversives (1985), pp. 162-164.
- Raymond Smullyan, This Book Needs No Title (1980), p. 56.
- Ben Goldacre's latest "Bad Science" column is about the base rate fallacy, though he doesn't call it by name. Discussing medical tests, he writes:
[I]f you look at it from the perspective of the person being tested, the maths gets slightly counterintuitive. Because weirdly, the meaning, the predictive value, of a positive or negative test that an individual gets, is changed in different situations, depending on the background rarity of the event that the test is trying to detect. The rarer the event in your population, the worse the very same test becomes.
The "background rarity" of an event is the "base rate". It's often counterintuitive that you need to take base rates into consideration when judging the probability of events, and the failure to do so results in the fallacy.
Coincidentally, I'm now finishing up an entry for the base rate fallacy and will soon add it to the files.
Source: Ben Goldacre, "It's not so easy to predict murder―do the maths", Bad Science, 12/9/2006
Technical Appendix (Warning: "Viciously complicated maths" ahead!): If you're curious as to how Goldacre arrived at the results discussed in the last paragraph, just plug the numbers he gives into the Bayes' Theorem calculator linked to below as a Resource and apply the instructions for the use of the calculator to Goldacre's examples. The tricky part is figuring out what the probability of a "false alarm" is, but this is simply 1 - the specificity of the test, so in this case it is .25. Goldacre gives the results in terms of false positives, that is, cases in which the hypothesis is not true even though the test was positive, which is 1 - P(H|D).
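For readers who would rather compute than click, here is a minimal sketch of the Bayes' Theorem calculation. Only the 0.25 false-alarm rate comes from the discussion above; the 75% sensitivity figure and the base rates in the loop are made-up numbers for illustration, not Goldacre's exact figures:

```python
def positive_predictive_value(base_rate, sensitivity, specificity):
    """P(H|D): probability the condition is present given a positive
    test, by Bayes' Theorem. False-alarm rate = 1 - specificity."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Hypothetical test: 75% sensitivity, 75% specificity,
# hence a false-alarm rate of 1 - 0.75 = 0.25.
for base_rate in (0.5, 0.05, 0.001):
    ppv = positive_predictive_value(base_rate, sensitivity=0.75,
                                    specificity=0.75)
    print(f"base rate {base_rate:>6}: P(H|D) = {ppv:.3f}, "
          f"false positives = {1 - ppv:.3f}")
```

Running this shows the point of the column: the very same test that is informative at a 50% base rate yields mostly false positives when the event has a base rate of one in a thousand.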
Resource: Bayesian Calculator
The bottom story of the day:
Missing Man's Pants Found
The mystery deepens:
Guitar Found in Man's Pants
The Fallacy in the "Proof" of Santa Claus' Existence (12/26/2006): No one was able to explain exactly what fallacy was committed, which is not surprising since the "proof" is based on a variant of a paradox which has confounded philosophers for thousands of years. According to reader Don Rider, it even confuses computers:
The problem comes when we extend the scope of "S is false" to include itself. Then, we end up with a "this statement is false", where a statement declares itself to be false. As we know from watching old Star Trek episodes, the mere mention of a statement like this causes even future supercomputers to implode in a shower of self-negating logic. Truth be told, we're taking a huge chance even uttering such a thing, as this could cause our very civilization to crumble should some overly curious computer get a look at it! I mean, if 24th century computers cannot withstand this logical assault, what chance do mere 21st century machines have?
Now you tell me! Though nobody spotted the fallacy, a couple were able to show that the supposed proof was in fact fallacious. For example, Erik Edlund wrote:
Proof to the contrary:
S: Either Santa Claus doesn't exist or S is false.
Now, either S is true or it's false. But it cannot be false, for the following reasons: S is a disjunction and its second disjunct says that S is false. So, if S is false then its second disjunct is true. But a disjunction with a true disjunct is itself true. So, if S were false then it would be true, which means that S must be true. So, S's second disjunct is in fact false, since it says that S is false. Thus, S's first disjunct must be true, by disjunctive syllogism. But its first disjunct says that Santa Claus doesn't exist. Therefore, Santa Claus doesn't exist! Can I have the milk and cookies?
Yes, indeed, Erik deserves milk and cookies for this answer! So, too, does Daniel Cadenas, who made much the same point. This counterargument shows that there must be something wrong with the original argument, though it doesn't show exactly what. It is an example of a technique known as "refutation by logical analogy", a form of argument by analogy: to show that an argument is not formally valid, it suffices to find an argument with exactly the same form that is obviously invalid―say, one with true premisses and a false conclusion.
So, by constructing a counterargument, we can see that there is something wrong with the alleged proof, but what is it? As I previously suggested, this type of paradox is controversial, but the following is my understanding of where it goes wrong.
The fallacy is committed by the claim that either S is true or it's false. This is a black-or-white fallacy as there is a third possibility, which is correct in this case: S is neither true nor false. The assumption that it is either true or false leads to the false conclusion that Santa Claus exists, so this is a reductio ad absurdum of that assumption.
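The effect of the black-or-white assumption can be made vivid by a small enumeration of my own devising (not from the original entry): treat S classically, so that a consistent assignment must make S's truth-value agree with what S asserts, namely "Santa exists or S is false", and check all four assignments:

```python
from itertools import product

# Under the (mistaken) assumption that S must be true or false, a
# consistent assignment satisfies: S == (santa or not S).
models = [(santa, s)
          for santa, s in product([True, False], repeat=2)
          if s == (santa or not s)]
print(models)  # [(True, True)]
```

The only assignment that survives is the one in which Santa exists and S is true: insisting that S has a classical truth-value forces the absurd conclusion, which is precisely the reductio of that assumption described above.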
We tend to assume that every sentence in the indicative mood must have a truth-value, but examples such as S show that this is incorrect. Why, exactly, sentences such as S do not have truth-values is a long story, which I may tell elsewhere―see Smullyan's book listed below for an explanation if you're in a hurry.
John Congdon deserves an honorable mention for spotting a second fallacy:
The invocation of Copernicus, Galileo, Einstein, Church, et al. is a nice bit of Undistributed Middle: "These thinkers went against common sense and opinion, but they were right; this argument goes against common sense and opinion, therefore it is right." Steven Dutch calls this "the Galileo Fallacy" and notes a variation common in education: "Einstein didn't do well in school, therefore any kid who does poorly in school is like Einstein."
Or, as I've heard it put: they laughed at the Wright brothers, they laughed at Edison, they laughed at Marconi, and they also laughed at Bozo the Clown. Congratulations to Don, Erik, Daniel, and John! Santa will bring you your prizes!
Source: Raymond Smullyan, What is the name of this book? The Riddle of Dracula and Other Logical Puzzles (1978). The Santa "proof" is a variant of ones given on pages 202-204. Also, see this book for further discussion of the paradox and what's wrong with it.