Logic Checking the State of the Union
Here's an example, taken from the President's State of the Union speech, of how the "average"―meaning the mean―can be atypical when applied to income, courtesy of the folks at Annenberg:
The president, using an accurate but misleading figure, exaggerated the effect on the typical taxpayer of allowing his tax cuts to expire.
Bush: Unless the Congress acts, most of the tax relief we have delivered over the past seven years will be taken away. Some in Washington argue that letting tax relief expire is not a tax increase. Try explaining that to 116 million American taxpayers who would see their taxes rise by an average of $1,800.
… But the average increase would not be typical. The increase would be far smaller, $828, for those in the middle 20 percent of the income scale, with earnings between $27,465 and $48,165 a year in today's dollars. And of course, the increase would be lower still for those with lower incomes. Even for the next-highest 20 percent, with incomes between $48,165 and $85,706, the increase would be $1,309, still well below Bush's "average" figure. But for the top 1 percent, with incomes over $434,766, the tax increase would be $64,154. That's what pulls up the average to well above what ordinary taxpayers would experience.
Source: Brooks Jackson, et al., "Facts of the Union 2008", Annenberg Political Fact Check, 1/29/2008
Resource: "'Average' Ambiguity", 11/4/2002
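The way a few very large values drag the mean above the typical case is easy to demonstrate with a toy calculation. The figures below are hypothetical, chosen only to mimic the skew described above: most taxpayers see a modest increase while one high-income taxpayer sees a huge one.

```python
from statistics import mean, median

# Hypothetical tax increases for 100 taxpayers, skewed like the
# distribution described above: many small increases, one huge one.
increases = [828] * 60 + [1309] * 39 + [64154] * 1

print(mean(increases))    # 1648.85 -- pulled up by the one outlier
print(median(increases))  # 828 -- the "typical" taxpayer's increase
```

The mean is roughly double the median, even though 99 of the 100 hypothetical taxpayers pay less than the mean.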
Logical Literacy: Induction and Deduction
Logical literacy is the minimum knowledge of the basic concepts of logic which an educated person should have. Logical illiteracy―or "illogicality", as I call it―is the lack of this minimum knowledge, and a condition that is all too common. You don't need to be a logician to be logically literate, any more than you need to be a linguist to be literate in a natural language, but what do you need to know?
I have written previously that a logically literate person should understand the phrase "begs the question", which does not mean "raises the question" but refers to a logical fallacy. Another thing that an educated person should know is the distinction between deductive and inductive reasoning:
- Deduction: Successful deduction is reasoning in which the truth of the premisses necessitates the truth of the conclusion, that is, if the premisses are true then the conclusion must be true. Deduction is aimed at the strongest possible logical connection between premisses and conclusion, which is called "validity".
- Induction: Successful induction is reasoning in which the truth of the premisses makes the truth of the conclusion more likely than not, that is, if the premisses are true then the conclusion is more likely to be true than to be false. So, induction aims at a weaker connection between the premisses and conclusion than deduction does. The terminology for induction is less standardized than that for deduction, but I call a strong induction "cogent".
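For simple propositional argument forms, the deductive standard of validity can even be checked mechanically: a form is valid just in case no assignment of truth-values makes all the premisses true and the conclusion false. Here is a minimal sketch in Python; the two argument forms are my own illustrations, not examples from the entry itself.

```python
from itertools import product

def valid(premisses, conclusion, n_vars):
    """A propositional argument form is valid iff no truth-value
    assignment makes every premiss true and the conclusion false."""
    return all(
        conclusion(*v)
        for v in product([True, False], repeat=n_vars)
        if all(p(*v) for p in premisses)
    )

# Modus ponens: "If P then Q; P; therefore, Q" -- valid.
print(valid([lambda p, q: (not p) or q, lambda p, q: p],
            lambda p, q: q, 2))   # True

# Affirming the consequent: "If P then Q; Q; therefore, P" -- invalid.
print(valid([lambda p, q: (not p) or q, lambda p, q: q],
            lambda p, q: p, 2))   # False
```

No such mechanical test exists for cogency, which is one symptom of the weaker connection that induction aims at.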
These definitions are standard among contemporary logicians, and have been so for at least half a century and probably longer. Nonetheless, it is still common to come across the following definitions:
- Deduction: Reasoning from the general to the specific.
- Induction: Reasoning from the specific to the general.
Logicians have alternative vocabulary for the distinctions made by the older definitions:
- Specification or instantiation: "going from the general to the specific"
- Generalization: "going from the specific to the general"
What's wrong with the old definitions, and why did logicians adopt the new ones? I'm not sure about the history, but I suspect that logicians of the past may have mistakenly thought that the deductive/inductive distinction was co-extensive with the generalization/specification distinction. For instance, an old-fashioned textbook example of a deductive argument is:
All men are mortal.
Socrates is a man.
Therefore, Socrates is mortal.
This argument is deductive and it moves from the general to the specific, albeit with the help of a specific second premiss. Similarly, the usual examples of inductive arguments tend to be generalizations, for instance:
This swan is white.
That swan is white.
That other swan is white.
Therefore, all swans are white.
However, many deductive arguments do not go from the general to the specific. Some go from the general to the general, for instance:
All men are people.
All people are mortal.
Therefore, all men are mortal.
Some go from the specific to the specific, for example:
Socrates is a Greek man.
Therefore, Socrates is a man.
Some even go from the specific to the general; consider the following admittedly artificial example:
Socrates is a man.
Therefore, everything identical to Socrates is a man.
Similarly, not all inductions go from the specific to the general. Some go from the specific to the specific, for example:
This swan is white.
That swan is white.
That other swan is white.
Therefore, the next swan I see will be white.
Some even go from the general to the general, for instance:
All swans can fly.
All geese can fly.
All ducks can fly.
Therefore, all birds can fly.
The conclusion is, of course, false, but that doesn't mean that it's not an induction.
The distinction between deduction and induction in their current senses is extremely important because the logical principles governing them are different. In fact, it is one of the most important distinctions in logic, which is why it is a component of logical literacy.
Unfortunately, illogicality on this matter is spread and kept alive by many sources. The current Wikipedia articles on inductive and deductive reasoning are particularly horrid examples. In addition to many other mistakes and passages that make no sense, each article defines its subject in the outdated way. The illogicality is increased by the fact that the article on deduction defines it as "the type of reasoning that proceeds from general principles or premises to derive particular information", but then gives the following example:
All apples are fruit.
All fruits grow on trees.
Therefore all apples grow on trees.
This is, indeed, a deductive argument, but not one that goes from general to specific; rather, it deduces a general principle from two other general principles.
Similarly, the article on inductive reasoning treats both definitions of "induction" as if they mean the same thing:
Induction…is the process of reasoning in which the premises of an argument are believed to support the conclusion but do not ensure it. Induction is a form of reasoning that makes generalizations based on individual instances.
Unfortunately, many other sources of information are nearly as bad as Wikipedia. Here are some better ones:
- "Deductive and Inductive Arguments", The Internet Encyclopedia of Philosophy
- Robert Todd Carroll, "Induction and Deduction: Critical Thinking Lesson 1", Skeptic's Dictionary Newsletter 25, 5/3/2003
Reader Response (1/30/2008):
Shouldn't your definition of "deduction" be "it must be that, if the premises are true, then the conclusion is true" rather than "if the premises are true, then the conclusion must be true"? (Likewise, mutatis mutandis, for the definition of "induction".) The second phrasing doesn't seem correct to me because it implies that the conclusion is necessarily true, when what's actually a matter of necessity is the relationship between the premises and the conclusion.―Vance Ricks
The definition can be read in two ways, depending on the scope of the modality:
- Wide Scope: Necessarily: if the premisses are true, then the conclusion is true.
- Narrow Scope: If the premisses are true, then the conclusion is necessarily true.
You're right that the wide scope is what I intended. That the scope is ambiguous can be seen by comparing it to the following:
If Buchanan is a bachelor, then he must be unmarried.
This appears to be a true conditional statement, but it also appears that the scope of "must" is the statement's consequent: "Buchanan is unmarried". However, it certainly isn't true that Buchanan is necessarily unmarried, for he could have married. That being the case, it would follow by Modus Tollens that Buchanan isn't a bachelor! So, the scope of "must" is the entire statement, rather than just the consequent.
It's not so obvious that the scope of the "must" in the definition of "deduction" I gave is wide, since in some cases the conclusion of a deductive argument may indeed be necessarily true. However, there are many deductions whose conclusions are contingent.
So, why did I phrase the definition the way I did? Because it's awkward to write the definition with unambiguously wide scope, I opted for a graceful sentence at the risk of possible misunderstanding. Thanks to your raising the issue, it should now be clear that the modalities in both definitions have wide scope.
Check 'Em Out, Too
- (1/23/2008) Apropos of mistakes in evaluating risk, John Tierney has a couple of entries on excessive fear of terrorism. This is a manifestation of the anecdotal fallacy, in which a few vivid incidents cause people to overestimate the likelihood of such events.
- (1/19/2008) Ben Goldacre channels Darrell Huff in his latest Bad Science entry. It's sad that everything in How to Lie With Statistics (HTLWS) holds up, except the dated examples, and those can easily be replaced by new ones. It's sad because, as popular as Huff's book is, it doesn't seem to have put a dent in lying with statistics.
I recently read for the first time Huff's companion book How to Take a Chance, also illustrated by Irving Geis, who did the clever cartoons in HTLWS. How to Take a Chance is a non-technical introduction to probability theory, and a good antidote to the ten ways of misunderstanding risk.
- Ben Goldacre, "The Huff", Bad Science, 1/19/2008
- Darrell Huff & Irving Geis, How to Take a Chance (1959)
- Maia Szalavitz has an excellent article in the current Psychology Today on ten ways that people make errors about risks. Unfortunately, the full article is not available on the web unless you're a subscriber, but you can read a teaser of the first one-and-a-half ways. One of the ways is the anecdotal fallacy, and here's another familiar one:
The word radiation stirs thoughts of nuclear power, X-rays, and danger, so we shudder at the thought of erecting nuclear power plants in our neighborhoods. But every day we're bathed in radiation that has killed many more people than nuclear reactors: sunlight. …
Our built-in bias for the natural led a California town to choose a toxic poison made from chrysanthemums over a milder artificial chemical to fight mosquitoes: People felt more comfortable with a plant-based product. We see what's "natural" as safe….
…When a case report suggested that lavender and tea-tree oil products caused abnormal breast development in boys, the media shrugged and activists were silent. If these had been artificial chemicals, there likely would have been calls for a ban, but because they are natural plant products, no outrage resulted. "Nature has a good reputation," says [Paul] Slovic. "We think of natural as benign and safe. But malaria's natural and so are deadly mushrooms."
Update (9/29/2008): The entire article is now available:
Source: Maia Szalavitz, "10 Ways We Get the Odds Wrong", Psychology Today, January/February 2008. (Added: 10/26/2015) Now you see it, now you don't!
Silly Celebrity Scientologist
Here's Tom Cruise speaking to his fellow scientologists:
Being a Scientologist, when you drive past an accident, it's not like anyone else, it's, you drive past, you know you have to do something about it. You know you are the only one who can really help. … We are the authorities on getting people off drugs. We are the authorities on the mind. We are the authorities on improving conditions. … We can rehabilitate criminals. We can bring peace and unite cultures.
Here's just one example of Cruise's expertise on drugs:
Look at the experimentation the Nazis did with electric shock and drugging. Look at the drug methadone. That was originally called Adolophine. It was named after Adolf Hitler…
Methadone was never named "Adolophine". Rather, it was named "Dolophine", from the Latin word "dolor" for "pain" and "phine" from "morphine", since methadone is similar to morphine. Even if methadone had been named after Adolf Hitler, that wouldn't mean that it's a useless drug. Cruise was trying to discredit it by playing the Hitler card in its guilt by association form.
- "Transcript of Tom Cruise on Scientology video", Times Online, 1/16/2008
- Benjamin Svetkey, "Tom Responds", Entertainment Weekly, 6/8/2005
- James Verini, "Missionary Man", Salon, 6/27/2005
Shmuel Ruppo sends the following comments:
If someone says during a debate "Atheists are evil people", many would classify this as an "ad hominem". But when you look at the sentence, it doesn't say "Atheists are evil people, and therefore there is no god". If that implies something, it is not the former, but "Atheists are evil people, therefore one should not be an atheist". There is no logical fallacy in that at all; given that one should not be an evil person, that is completely valid.
In a similar fashion, when one says "Belief in god causes good results", this is not an "appeal to consequences", because it doesn't say "and therefore there is a god". One might say that it is somehow an un-intellectual attempt to cause us to believe in something because of its good consequences, but there's no logic problem in that.
By definition, to be an atheist is to believe something or, more accurately, not believe something, namely, that there is a god. So, to conclude an argument "you shouldn't be an atheist" is equivalent to concluding that you should believe in a god. However, the only reason given by the argument for believing in a god is a moral one, namely, that atheists are supposedly evil people. This is irrelevant to the ontological question of whether there is a god, just as whether children who don't believe in Santa Claus are worse behaved than those who do is irrelevant to whether there is a Santa. So, the fallacy committed is not ad hominem, but a fallacious appeal to consequences.
Nonetheless, the argument that "all atheists are evil, therefore one should not be an atheist" seems to pack some moral force. Let's call someone who doesn't believe in Santa Claus an "aclausist". Then, consider the argument "all aclausists are evil, therefore one should not be an aclausist", in other words, you should believe in Santa. Isn't there something wrong with this argument? Supposing that it is true that people who don't believe in St. Nick are generally less kind and generous than those who do, is this really a good argument for believing in him? Can one be morally obliged to believe something false?
What is the basis for the claim that all atheists are bad people? Isn't it that they are bad because they don't believe in a god? But what's wrong with not believing in a god? Isn't it generally better to believe truths than falsehoods? So, if there is no god, isn't it better not to believe in one?
The premiss that atheists are evil smuggles in the existence of a god through the back door, that is, it is wrong to be an atheist if and only if there actually is a god. Thus, the argument is really: "There is a god, so all atheists are bad people, hence you should not be an atheist, therefore you should believe in a god." Eliminating the middle men, we get: "There is a god, therefore you should believe in a god." This is a perfectly fine argument, and so is: "there is a Santa Claus, therefore you should believe in Santa." If the premiss is true then, yes, you should believe in Santa. But that's no evidence for the existence of Santa Claus, nor that you should believe in him.
Update (1/15/2008): Coincidentally, mathematician John Allen Paulos' latest book is Irreligion: A Mathematician Explains Why the Arguments for God Just Don't Add Up. I wish someone would send me a review copy. All of the traditional philosophical arguments for the existence of a god are fallacious, though some are complex and subtle enough that it is difficult to figure out exactly what's wrong with them. I've previously posted puzzles based on a couple of these arguments:
Source: John Allen Paulos, "God, Science and an Unbeliever's Utopia", Who's Counting, 1/6/2008
There's no accounting for taste.
A Debate Puzzle
Five politicians participated in a debate for the Republicrat Party's nomination for President. Each politician made exactly three statements that were clear and specific enough to be checked. An independent fact checking organization determined that each candidate made exactly two true statements and one falsehood. Here are those statements:
- Vice President Anderson:
- I never voted to raise taxes.
- Senator Drummond did vote for a tax raise.
- Representative Elgar and I have the same position on abortion.
- Governor Boyce:
- I didn't vote to raise taxes.
- I support comprehensive entitlements reform.
- My opponents all oppose reforming entitlements.
- Mayor Calhoun:
- I did not vote for a tax raise.
- I've never supported a tax increase in my life.
- Senator Drummond voted to raise taxes.
- Senator Drummond:
- I did not vote to raise taxes.
- Representative Elgar did vote to raise taxes.
- Mayor Calhoun lied when he said that I voted for a tax raise.
- Representative Elgar:
- I voted against raising taxes.
- Governor Boyce hiked taxes.
- The Vice President and I have different positions on abortion.
Who voted to raise taxes?
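If you'd rather not solve the puzzle by hand, it's small enough to solve by brute force: enumerate the possible truth-value assignments and keep those in which each candidate makes exactly two true statements. The sketch below is one way of formalizing the statements in Python; the modeling choices―treating "hiked taxes" as equivalent to voting to raise them, and "never supported a tax increase" as a separate fact that must be false if Calhoun voted for one―are my own.

```python
from itertools import product

candidates = "Anderson Boyce Calhoun Drummond Elgar".split()

solutions = []
for bits in product([True, False], repeat=9):
    tax = dict(zip(candidates, bits[:5]))  # who voted to raise taxes
    same_abortion, boyce_reform, opponents_oppose, calhoun_never = bits[5:]

    # Voting for a tax raise counts as having supported one.
    if tax["Calhoun"] and calhoun_never:
        continue

    statements = {
        "Anderson": [not tax["Anderson"], tax["Drummond"], same_abortion],
        "Boyce":    [not tax["Boyce"], boyce_reform, opponents_oppose],
        "Calhoun":  [not tax["Calhoun"], calhoun_never, tax["Drummond"]],
        "Drummond": [not tax["Drummond"], tax["Elgar"], not tax["Drummond"]],
        "Elgar":    [not tax["Elgar"], tax["Boyce"], not same_abortion],
    }
    # Keep assignments where every candidate makes exactly two true claims.
    if all(sum(s) == 2 for s in statements.values()):
        solutions.append([c for c in candidates if tax[c]])

print(solutions)
```

Note that Drummond's first and third statements say the same thing, which is the key to the whole puzzle: they must both be true, since otherwise he would have made two falsehoods.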
Check 'Em Out
- (1/7/2008) In a related matter, the Numbers Guy's latest entry deals with a change that Iraq Body Count (IBC) is making in how it counts civilian deaths. Previously, IBC required each death counted to be reported by two independent sources; from now on they are going to include deaths reported by only a single source. They seem to be adopting a sort of presumption of innocence for reports of civilian deaths by not requiring any corroboration, and by removing reported deaths from the database only if they are later discredited.
Source: Carl Bialik, "New Approach to Count of Iraqi Civilian Deaths", The Numbers Guy, 1/7/2008
- The National Journal has a lengthy and worthwhile article about the Lancet Iraq mortality surveys that I've criticized here before: see the Resources below. Most of the information in the article I already knew about, but there are a couple of items that are news to me. It also does a good job of making the case for skepticism about the studies, but it doesn't show them to be fraudulent, as some have suggested. Here are two new issues raised by the article:
- If I had been aware that the Lancet researchers have failed to share their data with others, I would have criticized that failure previously. The excuse for not doing so is security, that is, that the data might contain clues to the identities of respondents that would endanger them. This is certainly plausible, but it's like the proverbial "the dog ate my homework" excuse: even if true, the homework was not handed in and so cannot be graded. Whatever the reason, independent researchers cannot check the data themselves and the studies' credibility must, therefore, be undermined to some degree.
- I knew about and criticized the political bias of the researchers which led to their attempts to influence the elections, but the article adds some further details. This bias is now of greater concern because of the previously-noted problem of withheld data, since the researchers are asking us to accept their data on trust. If we can't trust the researchers, then we can't trust the data; and if we can't trust the data, then we can't trust the studies. Garbage in, garbage out.
Source: Neil Munro & Carl M. Cannon, "Data Bomb", National Journal, 1/4/2008
- "October Surprise?", 10/30/2004
- "Update on the Lancet 100,000", 5/14/2005
- "October Surprise II: The Lancet Strikes Back", 10/12/2006
Update (1/9/2008): A new household survey of Iraq, known as the "Iraq Family Health Survey" (IFHS), has just been released. It is a much larger survey than either of the Lancet surveys, with five times as many households interviewed as Lancet II (L2), which was a larger survey than Lancet I. The IFHS concludes that violent deaths in Iraq from the beginning of the war to mid-2006 fall in the interval 104,000-223,000 with 95% confidence. Compare this to the L2 confidence interval of 426,369-793,663 violent deaths for the same time period.
I haven't had time yet to read the report carefully, as it was just released today. However, assuming that it holds up, the weight of evidence has now accumulated to the tipping point that should take us from skepticism to the conclusion that something went wrong with the Lancet surveys. Hopefully, the Lancet researchers can be persuaded to release their data―perhaps when security in Iraq has sufficiently improved―and a study of it may reveal what that something was.
- Iraq Family Health Survey Study Group, "Violence-Related Mortality in Iraq from 2002 to 2006", New England Journal of Medicine, 1/9/2008
- "Iraq Family Health Survey―Mortality Study Q & A", World Health Organization, 1/9/2008 (PDF)
Here's Bonnie Erbe on the polls out of Iowa:
So much for belief in polls: Just two days before tomorrow's caucuses, two major political polls taken in Iowa were released showing very different results for the two Democratic front-runners. … The first poll, by CNN, revealed the following results: "Among Democrats, Sen. Hillary Clinton of New York wins the most support, with 33 percent of likely Democratic caucus-goers backing Clinton and 31 percent supporting Sen. Barack Obama of Illinois. But taking into account the survey's sampling error of 4.5 percentage points in the Democratic race, the race is virtually tied." Much to CNN's credit, the difference in this poll is duly noted as being within the statistic[al] margin of error. But then compare those results with this poll and the much larger gap, which beat the margin of error in the opposite direction: "A new poll by the Des Moines Register newspaper shows Democratic presidential hopeful Senator Barack Obama ahead of Senator Hillary Clinton in Thursday's Iowa caucuses. The poll indicates Obama is supported by 32 percent of likely Democratic caucus-goers, while Clinton has 25 percent support and former North Carolina Senator John Edwards 24 percent. The newspaper says its telephone survey involved 800 likely Democratic caucus-goers, with a sampling error of plus or minus 3.5 percentage points."
One danger with misunderstanding the margin of error (MoE) of polls is that people take seriously poll results that are insignificant, but another is that they will become cynical and reject all poll results when they seem to conflict with the results of an election, or with each other.
The two polls that Erbe thinks conflict really don't when you take the MoE into consideration. Erbe points out the insignificance of Clinton's lead in the CNN poll, but fails to realize that Obama's lead in the Register poll is also within the MoE, because the MoE applies to both candidates' results. So, with a MoE of 3.5 percentage points, a lead needs to be over 7 points to be significant. Obama's lead in the Register poll is 7 points and, therefore, not significant. However, it's on the borderline of significance and may well represent a real lead.
So, what should the polls lead us to expect tonight? (I'm writing this before the Iowa caucus results are in.) The poll results have been quite consistent when you take the MoE into consideration: they show that for both parties, the candidates can be divided up into two groups, let's call them "the contenders" and "the also rans". On the Democratic side, the contenders are Clinton, Obama, and Edwards; and the also-rans are everyone else. In most polls, the contenders are within the MoE of each other, but the also-rans are significantly far behind. In some polls Clinton leads, in others Obama, and Edwards in one, but seldom if ever significantly. So, the polls show that any of the contenders could win, but what would be surprising is if any of the also-rans did so. That should really call the polls into question and make us wonder what went wrong.
- "2008 Democratic Presidential Primary Polling Data", Pollster.com (PDF)
- Bonnie Erbe, "Conflicting Polls", To the Contrary, 1/2/2008
Resource: How to Read a Poll, Fallacy Watch
Update (1/4/2008): It appears that the contenders won: for the Democrats, Obama beat out Clinton and Edwards, who are virtually tied for second. Huckabee and Romney were the contenders for the Republicans, and Huckabee seems to have triumphed. So, the results are consistent with what the polls showed, if not with much of the reporting of the polls.
Source: James Rowley & Catherine Dodge, "Obama and Huckabee Are Winners in Iowa Caucuses (Update2)", Bloomberg, 1/3/2008
Correction (10/26/2015): The analysis of the example, above, used the method of comparing the difference between candidates' levels of support with twice the MoE. This is not a bad rule of thumb, and in the case of the first poll examined it gives the correct result, namely, that the difference between Clinton and Obama's support was not significant at the 95% confidence level. However, in the case of the second poll, the difference between Obama and the other candidates was in fact significant at the 95% level. This is, of course, consistent with the actual results of the caucus, as indicated in the Update, above. The main point of the entry is still correct, namely, that the seeming contradiction between the two polls was an illusion because Clinton's apparent lead in the first poll was not significant. To see how to calculate the MoE in cases such as these, check the Source, below.
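For the curious, here is a sketch of one standard way to compute the margin of error for the difference between two candidates' shares in the same poll. It is tighter than twice the MoE because the two shares are negatively correlated: a respondent who picks one candidate cannot also pick the other. The numbers are those quoted from the Register poll above.

```python
from math import sqrt

def moe_of_lead(p1, p2, n, z=1.96):
    """95% margin of error for the difference p1 - p2 between two
    candidates' shares of the same sample of size n."""
    return z * sqrt((p1 + p2 - (p1 - p2) ** 2) / n)

# Des Moines Register poll: Obama 32%, Clinton 25%, n = 800.
lead = 0.32 - 0.25
print(round(moe_of_lead(0.32, 0.25, 800), 3))  # 0.052
print(lead > moe_of_lead(0.32, 0.25, 800))     # True: a significant lead
```

The 7-point lead exceeds the roughly 5.2-point margin for the difference, which is why Obama's lead was significant even though it fell short of twice the poll's stated 3.5-point MoE.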
Source: Rebecca Goldin, "Presidential Polling’s Margin for Error", STATS, 10/14/2015
Solution to A Debate Puzzle: Governor Boyce