On this day of thanksgiving, thanks to all of those who have supported The Fallacy Files since I last thanked you―which isn't often enough! Special thanks to those who have donated directly via the PayPal button to the right! Thanks also to everyone who clicked on a Google ad!
It's not too late. With the holidays approaching, please consider doing any shopping at Amazon by way of one of the links from this website. It won't cost a penny extra and will benefit the site. Thank you all for helping to keep The Fallacy Files strong and free!
A Thanksgiving Family Feast Puzzle
This Thanksgiving, instead of sitting down in front of a television after the big meal and watching a football game, why not cozy up to a holiday-themed logic puzzle?
To celebrate Thanksgiving, Gretchen invited various family members to her house for dinner. Some invitees had prior commitments and sent their regrets, but six of her relatives attended, including Frieda. Each guest, including Eric, brought one item for the family meal, while Gretchen provided drinks. The following are eight facts about the dinner:
- Allie is an only child who did not bring the roast turkey.
- Bill is neither Gretchen's paternal grandfather nor her only brother, and he did not bring either the stovetop stuffing or the green bean casserole.
- Gretchen's visiting cousin did not provide the turkey.
- Carla brought either the candied yams or the mashed potatoes.
- Gretchen's favorite niece didn't bring the stuffing.
- David is Gretchen's only son but he didn't bring the turkey or the stuffing.
- A female family member was responsible for bringing the mashed potatoes.
- Gretchen's favorite aunt brought homemade cranberry sauce.
Can you determine the familial relationships to Gretchen of each family member who attended, as well as the dishes each brought to the Thanksgiving dinner?
Sanity Check it Out
It's time once again to check the "sanity" of a number that, in this case, is found on some websites. Here is a quote from one such site:
…[M]ore than four million women are battered to death by their husbands or boyfriends each year [in the United States].
Is this a plausible number? How would you go about checking it by using what you already know as opposed to doing research? In other words, check the plausibility of the claim rather than simply accepting it. To get the most benefit from this exercise, don't just evaluate the number for plausibility, but make a case for your evaluation. When you're done, click on the link below to see one such check:
Previous Entries in this Series:
- Caveat Lector, 6/25/2007
- The Back of the Envelope, 5/29/2008
- Who is Homeless?, 3/30/2009
- The Back of the Envelope, 7/2/2009
- The Back of the Envelope, 8/15/2009
- BOTEC, 11/14/2010
- How Not to Do a "Back of the Envelope" Calculation, 11/18/2010
- BOTEC, 2/6/2011
- Go Figure!, 3/6/2011
- Be your own fact checker!, 2/15/2012
- Check Your Sanity!, 10/22/2014
Poll Watch: A "new numerical low"
Gallup: 'New numerical low' for Obamacare
The above is a recent headline from a Politico article―see Source 1, below, and read the whole thing: it's short! What is a "new numerical low"? According to the article:
Support for Obamacare continues to decline, with the law hitting a new low in approval, and a new high in disapproval, as the second enrollment period has opened for Americans, according to Gallup. Just 37 percent approve of the Affordable Care Act, 1 percentage point less than the previous low recorded in January, Gallup found in a new survey released Monday. The pollster notes the approval results are a “new numerical low” for Obamacare. … A majority of Americans disapprove of Obamacare, at 56 percent―a new high, Gallup said.
If you're a savvy poll-watcher, the fact that the drop in the approval rating is only one percentage point should set off your internal alarm. A single percentage point is almost never a significant result in a public opinion survey: most national polls have a margin of error (MoE) of plus-or-minus three percentage points, and even the largest have a MoE of around plus-or-minus two, so a one-point change would not be significant even in them. The article ends:
The Gallup poll was conducted Nov. 6-9 and surveyed 828 adults. It has a margin of error of plus or minus 4 percentage points.
So, a one-percentage-point drop is well within the MoE and not a statistically significant result. This is presumably why Gallup, in its own article on the poll―see Source 2, below―called it a "new numerical low": a low in the raw numbers only, as distinct from a statistically significant one. Such a small change is not just statistically insignificant; it's not practically significant, or "significant" in any other sense of the word. It's possible that this is the start of a downward trend in the approval rating, but it's just as possible that it's statistical noise. The only way to tell will be to check future polls. As Gallup's article goes on to say: "…with approval holding in a fairly narrow range since last fall, it may be that Americans have fairly well made up their minds about the law…".
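To see why a one-point change is lost in the noise, you can compute the usual 95% margin of error for a sample proportion directly. This is the textbook simple-random-sample formula; Gallup's reported plus-or-minus four points is somewhat larger, presumably because its published figure accounts for weighting and survey design, though that is my assumption:

```python
import math

def moe(p, n, z=1.96):
    """95% margin of error for a sample proportion,
    using the simple-random-sample formula."""
    return z * math.sqrt(p * (1 - p) / n)

n = 828  # sample size reported by Gallup
print(f"MoE at 37% approval:     +/-{100 * moe(0.37, n):.1f} points")  # ~3.3
print(f"Worst-case MoE (p=0.5):  +/-{100 * moe(0.5, n):.1f} points")   # ~3.4
```

Note, too, that judging a change between two polls calls for an even wider band: with equal sample sizes, the margin of error on the difference between two polls is about 1.4 times the single-poll figure, making a one-point drop even less meaningful.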
I've never heard the phrase "new numerical low" before, and a web search produces results either reporting on this poll or unrelated to polling. The word "numerical" in the phrase appears to play a similar role to the word "nominal" in discussing prices. A "nominal" price is one that hasn't been adjusted for the effects of inflation. Similarly, a "numerical low" appears to be one that doesn't take into account the MoE, thus treating an insignificant result as if it means something. A price that has been adjusted for inflation is called a "real" price, in contrast to a "nominal" one. We should make a similar distinction between "numerical" lows and "real" ones in polling results.
- Lucy McCalmont, "Gallup: 'New numerical low' for Obamacare", Politico, 11/17/2014
- Justin McCarthy, "As New Enrollment Period Starts, ACA Approval at 37%", Gallup, 11/17/2014
Resource: How to Read a Poll
Check 'Em Out
Psychologist Barbara Drescher has two interesting articles on the Skeptics Society's new "blog" Insight that discuss the difference between intelligence and rationality―see the Sources, below. I have drawn a similar distinction between the intelligence spectrum, from stupid to smart, and the "wisdom" continuum, from foolish to wise. Smart people, such as the physicist whose story Drescher tells, can be foolish. This is not a contradiction, though it might seem so, because intelligence is not the same thing as wisdom. As a result, we shouldn't assume that people who hold foolish, irrational beliefs are therefore stupid.
My experience with Mensa was similar to Drescher's, except that I never actually joined. When I wrote to the organization and expressed interest in joining it sent me an envelope like that received by Drescher, which dampened my enthusiasm.
The second of Drescher's articles is the more important one, because there she discusses what rationality is and how it can be improved. You probably can't do much to raise your native intelligence, but you can learn to be more rational by changing or improving your dispositions. Much foolishness results from lazy thinking, and while you can't make yourself smarter, you can become a less lazy thinker. Sometimes the slow but determined turtle beats the fast but lazy rabbit.
The problem that Drescher gives in the second article, about whether a married person is looking at an unmarried one, is one I've dealt with before, including in two puzzles based on it; see the Resources, below.
Sources: Barbara Drescher,
- "Why Smart People Are Not Always Rational", Insight, 10/21/2014
- "More On Why Smart People Are Not Always Rational", Insight, 10/24/2014
Resources:
- Are you intelligent but irrational?, 11/11/2009
- The Puzzle of the Terrorist Acquaintance, 11/15/2009
- The Second Puzzle of the Terrorist Acquaintance, 12/1/2009
There's a new Skeptoid podcast by Craig Good about how to read, watch, or listen to the news with appropriate skepticism―there's also a transcript in case you'd rather read it. Go read or listen to the whole thing―it's short!―then return here as I have a few additional comments and amplifications. See the Source, below. I'll wait.
Oh, you're back! What took so long? Anyway, here are my comments, keyed to some of Good's section headings:
- Exaggerated Frequency: This point relates to what psychologists call the "availability" heuristic: we tend to judge the probability of a type of event by how easily we can call an example to mind. As a result, frequent news coverage of a kind of event makes examples of it easier to remember, giving us an exaggerated sense of how often it occurs.
There's an old saying in the news business: "Dog bites man isn't news; man bites dog is news." In other words, the news media tend to report the extraordinary rather than the ordinary. So, the news may give us a sense that certain types of rare but highly-reported event―the "man bites dog" events such as those Good mentions―are more common than they are. This can skew our sense of risk, making it appear that commercial air travel, for instance, is more dangerous than traveling by automobile, because every commercial air crash is widely reported while most car wrecks are only local news. Similarly, parents may have excessive fear of their children being kidnapped by strangers, which is a rare type of event that is given great media attention when it happens.
- Emotions: Good points out that the news media like to report on things that arouse emotional reactions, including the appeal to fear, which Good calls "argumentum in terrorem". Of course, this is another reason why plane crashes and kidnappings get so much attention. The only thing that I would add is that in Latin the appeal to fear is usually called "argumentum ad metum".
- False Balance: If you examine the alphabetical list of fallacies to your left, you'll see that there is no fallacy of "false balance" or "argument to moderation" included. I've had my say about this alleged fallacy elsewhere―see the Resource, below, if you're interested. That said, many reporters seem to have problems with being objective―see the next section, for more on objectivity―and attempt to be "fair" or "balanced", instead. This seems to act as an excuse for lazy reporting since, instead of trying to find out what happened, the reporter can simply interview representatives of "both sides". This may sound like "fairness", but not every issue has two sides.
Some issues have only one side, for instance, the fact that the earth is spherical. Giving any time at all to a flat-earther is the type of false balance that Good seems to have in mind, but there are other kinds that are also common. Some issues have more than two sides, and forcing the issue into a debate between "pro and con" over-simplifies it. Another type of false balance occurs when reporters seek out a skeptic or scientist to present a single quote in a lengthy story, which happens frequently in reporting on pseudo-scientific topics. A brief appearance by a token skeptic does not make such a story "balanced".
- Bias: The notion of objectivity in the news may indeed be a "fairly recent marketing ploy", to quote Good―I think that he is here referring to the historical fact that non-partisan newspapers were a 19th-century invention. However, this doesn't mean that they were merely a marketing ploy that had no effect on how the news was gathered or disseminated. Good is here in danger of encouraging a genetic fallacy, that is, concluding that the historical origin of objectivity in the news reveals what it really is in its essence, or that it must be the same now.
While it may be true that every news source has some degree of bias, that degree can range from minor to major. As Good remarks later, there are reporters and news outlets that manage to be more skeptical and unbiased than others. So, we should indeed seek out the least biased sources for news that we can find. However, because of the ever-present possibility of bias, we should always "trust but verify" even the best news outlets.
- Editing: Good here alludes to a point that I think deserves greater emphasis, namely, that bias in the news is often more a matter of what is not reported than of what is. It's very difficult to know whether important facts have simply been left out of a news report, and all that you can usually do is to consult multiple sources, especially those with different biases.
Source: Craig Good, "A Skeptical Look at the News", Skeptoid, 11/4/2014
Resource: Check 'Em Out, 12/9/2006
Sanity Check: This example comes from Joel Best's book Stat-Spotting, in a section where he discusses the importance of "statistical benchmarks", that is, simple statistical facts that can be used to check claims. One such benchmark is the population of the United States: it isn't necessary, or even possible, to know precisely, but knowing that it's somewhat more than 300 million comes in handy.
This means that four million is over 1% of the total population of the country, about 1.3% in fact. Also, given that about half the population is female, four million is about 2.7% of the female population. If more than two out of every hundred females were dying each year at the hands of their husbands or boyfriends, don't you think that this would be a national scandal? Not only that, but some of those females are little girls too young to marry or have boyfriends, so the percentage of adult women being killed would probably be at least 3%.
The above considerations show that the claimed number is implausibly high, but they don't prove that it is false. It's still possible, if only barely, that four million women are battered to death by the men in their lives every year in the United States. However, another statistical benchmark is that about two-and-a-half million Americans die every year, of all causes, which alone is sufficient to show that the four-million claim must be false.
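The arithmetic above is simple enough to do in your head, but here it is spelled out using the two benchmarks, the population figure of a bit over 300 million and the annual death total of about two-and-a-half million:

```python
us_population = 300_000_000   # benchmark: a bit over 300 million
annual_deaths = 2_500_000     # benchmark: total U.S. deaths per year
claimed = 4_000_000           # the claimed number of battering deaths

print(f"Share of all Americans:    {claimed / us_population:.1%}")
print(f"Share of American females: {claimed / (us_population / 2):.1%}")
# The decisive check: the claim exceeds ALL deaths from every cause.
print("Claim exceeds total annual deaths:", claimed > annual_deaths)
```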
The actual number of women killed by male partners each year in the U.S. is rather difficult to determine, but it appears to be in the thousands rather than the millions. In other words, the four-million figure is exaggerated by about three orders of magnitude. This raises the question: how did an impossible number end up on webpages, even less-than-reputable ones?
I can't be certain, but I think this is an example of what Best has called a "mutant" statistic, that is, a statistic that has become garbled through repetition, as in the game of "telephone". In researching this issue, I've discovered that many websites cite four million as the number of American women who suffer some form of domestic abuse each year. Now, I don't know whether this is correct, but it's certainly a far more plausible claim. So, I suspect that what happened is that someone somehow mistook that statistic as the number of women murdered, rather than abused.
This numerical claim can still be found on a few webpages, but I won't link to any of them. If you really want to check it out for yourself, which is something that I encourage, you can do so by searching on the quote itself.
Sources: Joel Best,
- Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists (2001), Chapter 3: "Mutant Statistics".
- Stat-Spotting: A Field Guide to Identifying Dubious Data (Updated & Expanded, 2013), p. 10.
Solution to the Thanksgiving Family Feast Puzzle:
- Allie is Gretchen's grandfather, and he brought the stuffing.
- Bill is Gretchen's cousin, and he brought the yams.
- Carla is Gretchen's niece, and she brought the mashed potatoes.
- David is Gretchen's son, and he brought the green beans.
- Eric is Gretchen's brother, and he brought the turkey.
- Frieda is Gretchen's aunt, and she brought the cranberry sauce.
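The answer above can be confirmed, and shown to be unique, with a short brute-force search over all assignments of relations and dishes. Two interpretive assumptions are baked in: clue 2 is read as ruling Bill out as the grandfather entirely (i.e., the attending grandfather is the paternal one), and clue 1's "only child" is read as ruling Allie out as Gretchen's brother or aunt, since both of those relations require having a sibling:

```python
from itertools import permutations

people = ["Allie", "Bill", "Carla", "David", "Eric", "Frieda"]
relations = ["grandfather", "cousin", "niece", "son", "brother", "aunt"]
dishes = ["turkey", "stuffing", "green beans", "yams",
          "mashed potatoes", "cranberry sauce"]

MALE_REL, FEMALE_REL = {"grandfather", "son", "brother"}, {"niece", "aunt"}
MALE_NAME, FEMALE_NAME = {"Bill", "David", "Eric"}, {"Carla", "Frieda"}
# Allie's gender is deliberately left open; the clues settle it.

solutions = []
for perm in permutations(relations):
    rel = dict(zip(people, perm))
    # A name's apparent gender must not clash with the relation's gender.
    if any(p in MALE_NAME and rel[p] in FEMALE_REL for p in people):
        continue
    if any(p in FEMALE_NAME and rel[p] in MALE_REL for p in people):
        continue
    if rel["Allie"] in {"brother", "aunt"}:        continue  # clue 1
    if rel["Bill"] in {"grandfather", "brother"}:  continue  # clue 2
    if rel["David"] != "son":                      continue  # clue 6
    holder = {v: k for k, v in rel.items()}  # relation -> person
    for d in permutations(dishes):
        dish = dict(zip(people, d))
        if dish["Allie"] == "turkey":                         continue  # 1
        if dish["Bill"] in {"stuffing", "green beans"}:       continue  # 2
        if dish[holder["cousin"]] == "turkey":                continue  # 3
        if dish["Carla"] not in {"yams", "mashed potatoes"}:  continue  # 4
        if dish[holder["niece"]] == "stuffing":               continue  # 5
        if dish["David"] in {"turkey", "stuffing"}:           continue  # 6
        # Clue 7: the mashed potatoes came from a (known) female.
        cook = next(p for p in people if dish[p] == "mashed potatoes")
        if not (rel[cook] in FEMALE_REL or cook in FEMALE_NAME):
            continue
        if dish[holder["aunt"]] != "cranberry sauce":         continue  # 8
        solutions.append({p: (rel[p], dish[p]) for p in people})

for s in solutions:
    for person, (relation, item) in sorted(s.items()):
        print(f"{person}: Gretchen's {relation}, brought the {item}")
```

Under those two assumptions the search yields exactly one solution, matching the answer above; drop the assumption about clue 2 and the puzzle admits more than one, which is some evidence that the stricter reading is the intended one.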