Previous Month | RSS/XML | Current | Next Month

March 30th, 2009 (Permalink)

Who is Homeless?

How do you count the homeless? Obviously, you can't do the usual social science phone surveys that call landlines. Moreover, even if you send people out on the streets to count the homeless, some homeless people may avoid them.

However, even if you can overcome these practical problems, there's a more difficult theoretical problem: you can't count something without defining it. It might seem easy to define the homeless―a homeless person is someone lacking a home―but what constitutes a "home"? If you rent an apartment, you're not homeless, even though you don't own the building that you live in. What about people staying with friends or relatives? Is an elderly person who lives with a relative homeless? What about people living in temporary shelters while their flooded homes are repaired or rebuilt?

So, "homeless" is a vague and ambiguous term. It's also a politically charged word: some politicians are motivated to play down the extent of homelessness, while others wish to exaggerate it. As a result, studies of homelessness are often done by interest groups motivated to redefine "homeless" in order to either diminish or inflate the numbers. Here's an example, taken from a reporter's question during the President's news conference last week:

Kevin Chappell, Ebony: A recent report found that, as a result of the economic downturn, 1 in 50 children are now homeless in America. With shelters at full capacity, tent cities are sprouting up across the country. In passing your stimulus package, you said that help was on the way. But what would you say to these families, especially children, who are sleeping under bridges and in tents across the country?

Here's Annenberg Political Fact Check's analysis of this study:

…NCFH [National Center on Family Homelessness] has a broader definition for homelessness than Chappell implies. Its definition isn't restricted to children who are "sleeping under bridges and in tents." It also includes children whose families are staying with friends or family members, in hotels and motels, in trailer parks or in "substandard" housing.

No doubt substandard housing is a serious problem, but it's probably not what most people think of as homelessness. There's a long history of similar exaggerations of the number of the homeless; for instance, Mitch Snyder in the 1980s estimated that there were as many as three million homeless in the U.S., but later admitted that the number was invented to satisfy importunate journalists.

That Snyder was not exactly trustworthy on the subject of statistics could be seen in another of his claims, namely, that 45 homeless people died each second. This is a good candidate for a "back of the envelope" calculation to test its plausibility:

  1. If this were true, how many homeless would die in a year?
  2. Suppose that Snyder misspoke and meant to say that 45 died every minute: how many would die in a year?
  3. Suppose that he meant that one homeless person died every 45 seconds: how many would die in a year?

See the Back of the Envelope


March 29th, 2009 (Permalink)

A Legacy of Toxic Doublespeak

One type of doublespeak is the euphemism, which replaces an old word that has undesirable associations with a new one that lacks them. Here are some more examples of current economic euphemisms for uncollectable debt and unaffordable obligations.

Oldspeak      | Newspeak
Toxic asset   | Legacy asset
Debt          | Legacy costs
Subprime loan | Nonprime loan

Source: Daniel Gross, "Bubblespeak", Slate, 3/28/2009

March 26th, 2009 (Permalink)

Check it Out

Nicholas Kristof's latest column is on the limits of expertise in economics and politics. It should be kept in mind that there is less room for expertise in the areas he's discussing than in other areas, such as medicine or logic, say. Moreover, what expertise there is in economics or foreign policy does not seem to translate into the ability to predict recessions or the outcomes of wars. Some types of expertise―such as that in astronomy―do allow for prediction of relevant types of event―such as eclipses. But human conduct is inherently more unpredictable than the movements of inanimate objects.

The book that he mentions by Tetlock specifically concerns political expertise, and it seems to show that knowledge of politics does not produce an ability to predict the political future. Moreover:

The more famous experts did worse than unknown ones. That had to do with a fault in the media. Talent bookers for television shows and reporters tended to call up experts who provided strong, coherent points of view, who saw things in blacks and whites.

What seems to be missing among the experts themselves, as well as among consumers of their expertise, is an appreciation of the limits of their knowledge. In other words, they're overconfident. Contrast this with weather forecasting: meteorologists can predict tomorrow's weather fairly precisely, but next week's only in general terms. Most importantly, the forecasters themselves do not pretend to be able to predict what the weather's going to be a week, a month, or a year in the future. I suspect that much the same is true for economic and political prognostication, but the pundits―at least the shouting heads on television―don't realize that beyond the near future they're just guessing, like the rest of us.

Source: Nicholas D. Kristof, "Learning How to Think", The New York Times, 3/26/2009


March 24th, 2009 (Permalink)

What's New and Improved?

I've finally finished the renovation of the Taxonomy. Hopefully, the new version is easier to understand and use than the old one. Given the size and complexity of the changes involved, some problems are almost unavoidable. If you notice any mistakes in the new Taxonomy, please let me know.

Source: The Taxonomy of Logical Fallacies

March 22nd, 2009 (Permalink)

Wikipedia Watch

Here's one of the examples from Wikipedia's entry on the quantifier shift fallacy:

Every person has a woman that is their mother. Therefore, there is a woman that is the mother of every person.

∀x∃y(Px → (Wy & M(yx))
therefore ∃y∀x(Wy → (Px & M(yx))

It is fallacious to conclude that there is a single woman who is the mother of all people. However, if the major premise ("every person has a woman that is their mother") is assumed to be true, then it is valid to conclude that there is some woman who is any given person's mother.

There are three mistakes in the formulas given in the example:

  1. There are too few parentheses in the formulas. Parentheses always come in pairs, so there should be an even number in each formula. However, these formulas have five apiece: three left parentheses and two right ones. An additional closing parenthesis is needed in each formula.
  2. More importantly, the second formula is an incorrect translation of "there is a woman that is the mother of every person". The formula given is logically very weak and is true rather than false. In English, it says: "Either something is not a woman, or something is such that everything is a person and that something is its mother"! The first disjunct―"something is not a woman"―is of course true―I'm not a woman―so the entire disjunction is true, even though the second disjunct is wildly false. A correct translation is:
    ∃y∀x(Wy & (Px → M(yx))).
  3. The formulas don't illustrate the fallacy of quantifier shift well since, even when corrected, the parts after the shifted quantifiers are not the same. The easiest way to fix this is to take advantage of the fact that a mother is necessarily a woman and eliminate the explicit reference to a woman. So, the corrected example would look like this:
    Every person has a mother. Therefore, there is a mother of every person.

    ∀x∃y(Px → M(yx))
    therefore ∃y∀x(Px → M(yx))

    This, of course, has the advantage of simplicity over the current version.

These mistakes are either beginner's errors or the result of ignorant editing. It's another example of why it's important to have actual experts write encyclopedia articles on technical subjects.
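Claims like these can be checked mechanically by evaluating the formulas over a small finite domain. Here's a minimal Python sketch; the domain and the membership of the predicates (persons, women, the mother-of relation) are made up purely for illustration:

```python
# A tiny made-up domain: two people (a, b), their mothers (ma, mb), and a rock.
domain = {"a", "b", "ma", "mb", "rock"}
P = {"a", "b"}                  # persons
W = {"ma", "mb"}                # women
M = {("ma", "a"), ("mb", "b")}  # M(y, x): y is the mother of x

# The premiss, forall-x exists-y (Px -> (Wy & M(yx))): every person has a mother.
premiss = all(any((x not in P) or (y in W and (y, x) in M) for y in domain)
              for x in domain)

# Wikipedia's mistranslated conclusion, exists-y forall-x (Wy -> (Px & M(yx))):
# true, since picking y = "rock" makes the antecedent Wy false for every x.
wiki = any(all((y not in W) or (x in P and (y, x) in M) for x in domain)
           for y in domain)

# The correct translation, exists-y forall-x (Wy & (Px -> M(yx))):
# false, since no single woman is the mother of both a and b.
correct = any(all((y in W) and ((x not in P) or ((y, x) in M)) for x in domain)
              for y in domain)

print(premiss, wiki, correct)  # True True False
```

This confirms the point above: the formula Wikipedia gives for the conclusion is so weak that it comes out true, while the intended conclusion is false.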


March 21st, 2009 (Permalink)


Q: I like to consider myself a fallacy hound, but I am not sure I know the proper term for this fallacy: "If X is good then more X is better." It would seem common―if a fire hose is good for a fire, then a tsunami is even better! If one aspirin is good for a headache, then 100 aspirins are even better. If one million in bail out funds will help the economy, then ten million will help it even more―perhaps ten times as much! I am wondering if this is a blend of fallacies since it depends on a scalar distribution of "good" or "useful" or whatever the positive quality is. At first, I thought it could be an a fortiori fallacy. Most definitions such as yours point to a comparison such as the example on your page or noting that since steroids built muscles that helped Barry Bonds hit a home run, then Hulk Hogan should be able to hit even more home runs. Is there a formal designation?
―J. D.

A: I can't recall ever coming across this sort of reasoning as a named logical fallacy; if it has a name, perhaps a reader will enlighten us.

Suppose that the following is a rule of thumb: if something is good (bad), more of it is better (worse). There are obvious and well-known exceptions to the rule, such as the fact that increasing the dosage of a medicine may make you sicker rather than make you better faster. Many people have the childhood experience of eating so much candy that they get tummyaches. Nevertheless, the rule may hold true often enough to be useful, and that, together with the exceptions, makes it a rule of thumb.

If this is a good rule of thumb, then it's not fallacious to apply it to normal cases. Rather, a fallacy is committed only when applying the rule to exceptions, thus treating it as if it were a universal generalization rather than a heuristic. For this reason, it would be a subfallacy of sweeping generalization.

It occurs to me that the converse rule also plays a role in people's thinking: if something is good (bad), then less of it is less good (bad), but still good (bad). For instance, there's a tendency for people to think that if something is a carcinogen in large doses, it must continue to be a carcinogen in smaller doses―even trace amounts. Perhaps the rule holds true in this case, but there are exceptions: some substances are poisons in high doses but nutrients in smaller ones.

These two rules can be combined into a single one: a certain amount of something is good (bad) if and only if a greater amount is better (worse). I don't know whether this is a good heuristic, but it seems plausible. It certainly seems doubtful that doubling or tripling the amount of something generally doubles or triples its value. Nonetheless, at least within limited ranges, it's likely that the rule holds good of many things. For instance, more money is usually better than less.

"A Fortiori Fallacy" would not be a good name for this type of mistake since the Latin phrase "a fortiori" is used to refer to a non-fallacious form of argument. I can't think of a better one, and at this point it's premature to name it, since it's not clear whether it's a common enough mistake to warrant treatment as a separate fallacy. But it's certainly worth thinking about.

March 17th, 2009 (Permalink)

Doublespeak Dictionary

March 14th, 2009 (Permalink)

Critical Thinking Puzzle

A reader wrote recently to ask about the Watson/Glaser test of critical thinking. Here's a puzzle based on a type of question from that test.

Instructions: Below, you are given two premisses and three conclusions. Determine whether the arguments from the premisses to each conclusion are valid or invalid. For extra credit, identify the fallacies committed by the invalid arguments.

Premisses: All logic puzzles are logic games. Some logic puzzles are critical thinking puzzles.


  1. Some critical thinking puzzles are not logic games.
  2. No logic games fail to be logic puzzles.
  3. Some critical thinking puzzles are logic games.


March 12th, 2009 (Permalink)

Cognitive Biases and the Economy

James Pethokoukis has a short article in U.S. News and World Report on how cognitive biases may affect the Obama administration's economic policies. He discusses anchoring, overconfidence, and wishful thinking, among others. Here're a couple of additional biases that he doesn't discuss, but which are also relevant:

One of the lessons of behavioral economics is that we know less about why the economy does what it does than we often think we do, and that we have less power to control what it does than we wish we did.

Source: James Pethokoukis, "Putting Obama on the Couch", Capital Commerce, 3/11/2009

Resource: Gary Belsky & Thomas Gilovich, Why Smart People Make Big Money Mistakes―And How to Correct Them: Lessons from the New Science of Behavioral Economics (1999)

March 8th, 2009 (Permalink)

Where's the Harm?

From a tax court decision:

"In 1991, [the gambler] won approximately $26,660,000 from the California lottery…. [He] elected to receive payment of the lottery proceeds in 20 annual payments of approximately $1,333,000 each…. Since winning the lottery proceeds, [he] has not been employed.

"[He] gambled infrequently before winning the lottery. After [1996], [he] started playing the slot machines at the casinos frequently…. [He] spent most of his waking hours at the casinos. He had no outside interests, and generally if he was not at the casinos he was at home. A typical day for [him] generally consisted of waking up, showering, going to a 7-Eleven, getting coffee, going to the casinos, gambling, returning home, sleeping, waking up, and returning to the casino immediately thereafter. Occasionally, [he] spent up to 48 hours continuously in the casinos before returning home. [He] spent an average of 20 days per month at the casinos….

"On those days when he was at the casinos, [he] spent 8 to 48 hours continuously in the casinos, averaging approximately 10 hours per day. While at the casinos, [he] exclusively wagered on slot machines…. On the rare occasions when he left the casino with any money, [he] would bring the money back to the casino the following day, and he would then gamble with, and eventually lose (either the next day or shortly thereafter), that money.

"[He] did not get emotionally excited when he won at the slot machines. [He] did not get excited when he won jackpots of $1,200 or greater because the slot machine would freeze or lock up until he was issued his slot machine winnings…by the casino. Furthermore, [he] knew that eventually he would lose any winnings playing the slot machines.

"[He] lived with his girlfriend…. [She] went with [him] to the casinos and watched him gamble away his money. While watching [him] gamble, [she] saw that he did not get excited and did not enjoy playing the slot machines. … In or around 2003, [she] ended her relationship with [him] as he was never home because of his pathological gambling disorder. After she moved out of [his] home, he did not notice that she was gone until 2 or 3 days later." (PP. 4-11)

Why did he behave in this mind-boggling and tragic way? According to a psychologist called as an expert witness:

"…[The expert witness] testified that, unlike recreational and problem gamblers, pathological gamblers take the 'gambler's fallacy' to a delusional level―they believe if they gamble long enough, they will win back all their losses and even more. [The expert witness] also opined that, unless treated for his illness, [the gambler] will gamble until he dies or loses all his money." (P. 26)

Here's a note of warning for future litigants:

"Respondent attempted to discredit [the expert witness] by claiming her definition of 'gambler's fallacy' was incorrect. Respondent relies on a definition of 'gambler's fallacy' he obtained from Wikipedia. Respondent did not call any witness, or expert witness, to counter [the expert witness]'s conclusions. Respondent's reliance on a definition of 'gambler's fallacy' found in Wikipedia is not persuasive. [The expert witness]…credibly explained that there is a difference in the definition of 'gambler's fallacy' depending on the field of study―e.g., psychology versus mathematics. We find [the expert witness] to be credible and rely on her expert opinion." (P. 26)

The Respondent should've cited The Fallacy Files, instead! If there really is a difference in the definition of "the gambler's fallacy" in psychology as opposed to math, I haven't heard of it, but I'm no psychologist. The Wikipedia entry for the fallacy at the time of the decision is actually reasonably accurate, but the judge couldn't be expected to know that, which is why he was right not to rely on it.
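In its mathematical form, the gambler's fallacy is the belief that a run of losses makes the next win more likely. A quick simulation makes the point; the win probability and trial count below are made-up illustrative numbers, not actual slot-machine odds:

```python
import random

random.seed(1)
WIN_P = 0.1  # made-up win probability for a single independent play

# Compare P(win) overall with P(win | the last 5 plays were all losses).
wins = plays = 0
wins_after_streak = plays_after_streak = 0
streak = 0  # current run of consecutive losses
for _ in range(200_000):
    win = random.random() < WIN_P
    wins += win
    plays += 1
    if streak >= 5:  # this play follows at least 5 straight losses
        wins_after_streak += win
        plays_after_streak += 1
    streak = 0 if win else streak + 1

print(f"P(win) overall:          {wins / plays:.3f}")
print(f"P(win | 5-loss streak):  {wins_after_streak / plays_after_streak:.3f}")
# Both hover around 0.1: past losses don't raise the chance of winning.
```

Since each play is independent, a losing streak leaves the odds exactly where they were, which is why "I'll win it all back eventually" is delusional when, as with slots, the odds favor the house.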

Source: "Gagliardi v. Commissioner of Internal Revenue", United States Tax Court, 1/24/2008 (PDF)

Via: Eugene Volokh, "More Wikipedia Law", The Volokh Conspiracy, 2/11/2009

March 6th, 2009 (Permalink)

Name that Fallacy!

"Teacher Rhian Gwyn had spent six years trying for a baby. She spent thousands of pounds on fertility treatment but nothing seemed to work. Rhian suffered a miscarriage a few years into her marriage to Rolant, and the couple had almost given up on starting a family. Four years after the miscarriage the couple continued trying for a baby and sought advice from their GP, only to be told there was nothing medically wrong with them. … It was at that point that Rhian stumbled upon the idea of acupuncture after reading articles on the subject. Believing this was her last chance, she started to attend acupuncture sessions at the Natural Health and Fertility Clinic in Whitchurch, Cardiff, under the care of Jackie Brown. Within six months Rhian became pregnant, and she is convinced that it was the whole experience of the acupuncture, conducted by wonderful staff in an understanding environment, which led to the birth of baby Macsen nine months ago. … 'If we wanted more children I would definitely have acupuncture again.'"


Source: Gregory Tindle, "Desperate Couple have Miracle Baby after Acupuncture", Wales Online, 2/12/2009

Via: Bob Carroll, "What's New?", The Skeptic's Dictionary Newsletter, Volume 8, No. 3, 3/1/2009. Not online yet.

March 5th, 2009 (Permalink)

Puzzle it Out

John Tierney, of the "Tierney Lab" weblog, is holding a puzzle contest, and the winner will receive a copy of Marcel Danesi's new book The Total Brain Workout. One of the two puzzles you must solve is a version of the well-known "Cannibals and Missionaries" problem, and the other is a similar river-crossing puzzle. I don't see a deadline for the contest, so you'd better hurry. Read further if and only if you want a hint on how to solve these puzzles.

Hint: When going somewhere, it's a good rule of thumb to always try to make progress towards your destination. However, each of these puzzles requires some back-tracking to solve, which is what makes them puzzling.
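For those curious after the contest closes, the classic version of the puzzle yields to a short breadth-first search. This sketch uses the standard formulation (three missionaries, three cannibals, a two-seat boat, cannibals may never outnumber missionaries on either bank); the back-tracking the hint mentions shows up as crossings that move people back to the starting bank:

```python
from collections import deque

def safe(m, c):
    """No bank may have missionaries outnumbered by cannibals (out of 3 each)."""
    return (m == 0 or m >= c) and (3 - m == 0 or 3 - m >= 3 - c)

def solve():
    # State: (missionaries on start bank, cannibals on start bank, boat here?)
    start, goal = (3, 3, True), (0, 0, False)
    parents = {start: None}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state == goal:  # reconstruct the path of states
            path = []
            while state:
                path.append(state)
                state = parents[state]
            return path[::-1]
        m, c, boat = state
        for dm, dc in [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]:  # boat loads
            nm, nc = (m - dm, c - dc) if boat else (m + dm, c + dc)
            nxt = (nm, nc, not boat)
            if 0 <= nm <= 3 and 0 <= nc <= 3 and safe(nm, nc) and nxt not in parents:
                parents[nxt] = state
                queue.append(nxt)

path = solve()
print(f"Solved in {len(path) - 1} crossings")  # the classic answer is 11
```

Breadth-first search finds a shortest solution, and inspecting the path shows several return trips that temporarily undo progress, just as the hint says.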

Source: John Tierney, "A Prize for Solving Charlemagne's Puzzle", Tierney Lab, 3/3/2009

March 2nd, 2009 (Permalink)

New Book

Joel Best, author of the excellent Damned Lies and Statistics (see the Resource below for a review) and More Damned Lies and Statistics, has a new book out called Stat-Spotting: A Field Guide to Identifying Dubious Data. The latest issue of The Skeptical Inquirer―which is now on the news stands but not yet online―reviews it. The review makes it sound like a rehash of his first book, but I hope that it's more. I also hope that I receive a review copy.


Resource: Book Review: Damned Lies and Statistics, Fallacy Files Book Shelf

March 1st, 2009 (Permalink)

Check it Out

Ben Goldacre's latest "Bad Science" column concerns data mining and the base rate fallacy.

Source: Ben Goldacre, "Spying on 60 Million People Doesn't Add Up", The Guardian, 2/28/2009

Resource: Bruce Schneier, "Why Data Mining Won't Stop Terror", Wired, 3/9/2006
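The base rate fallacy is easy to see with a quick calculation. The numbers below are purely illustrative, not taken from Goldacre's column: even an impressively accurate test, applied to a large population in which the condition sought is rare, produces mostly false alarms:

```python
# Made-up illustrative numbers: screen 60 million people for something
# affecting 1 in 100,000, with a test that catches 99% of true cases
# and wrongly flags 1% of everyone else.
population = 60_000_000
base_rate = 1 / 100_000
sensitivity = 0.99          # P(flagged | a genuine case)
false_positive_rate = 0.01  # P(flagged | not a case)

cases = population * base_rate                            # 600 true cases
true_flags = cases * sensitivity                          # ~594
false_flags = (population - cases) * false_positive_rate  # ~600,000

# P(genuine case | flagged) -- the figure the base rate fallacy overlooks.
precision = true_flags / (true_flags + false_flags)
print(f"{true_flags:.0f} true flags vs. {false_flags:.0f} false flags")
print(f"P(case | flagged) = {precision:.4f}")
```

With these assumptions, roughly 999 out of every 1,000 people flagged are innocent: the low base rate swamps the test's accuracy.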


Answers to the Critical Thinking Puzzle:

  1. Invalid. Fallacy: Some Are/Some Are Not.
  2. Invalid. Fallacy: Illicit Conversion―the conclusion is equivalent to "all logic games are logic puzzles", so it illicitly converts the first premiss.
  3. Valid.
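These verdicts can be double-checked mechanically by searching small set-theoretic models for countermodels. In this sketch the sets L, G, and C stand for logic puzzles, logic games, and critical thinking puzzles; a conclusion is invalid exactly when some model makes both premisses true and the conclusion false:

```python
from itertools import product

def has_countermodel(conclusion):
    """True if some small model makes both premisses true but the conclusion false."""
    for n in range(1, 4):  # domains of 1 to 3 elements suffice here
        # Each element either is or isn't in each of L, G, C.
        for model in product(product([False, True], repeat=3), repeat=n):
            L = {i for i, (l, _, _) in enumerate(model) if l}  # logic puzzles
            G = {i for i, (_, g, _) in enumerate(model) if g}  # logic games
            C = {i for i, (_, _, c) in enumerate(model) if c}  # critical thinking puzzles
            if L <= G and L & C:        # premisses: all L are G; some L are C
                if not conclusion(L, G, C):
                    return True         # countermodel found
    return False

print(has_countermodel(lambda L, G, C: C - G))   # 1. "some C are not G": invalid
print(has_countermodel(lambda L, G, C: G <= L))  # 2. "all G are L": invalid
print(has_countermodel(lambda L, G, C: C & G))   # 3. "some C are G": valid
```

The search prints True for the two invalid conclusions (a countermodel exists) and False for the valid one (none does).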

The Back of the Envelope:

  1. Approximately one and a half billion.
  2. Around 23 and a half million.
  3. About 700,000.
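The arithmetic behind these figures takes only a few lines of Python (using a 365-day year, since only the order of magnitude matters):

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000

claim_45_per_second = 45 * SECONDS_PER_YEAR        # if 45 die each second
claim_45_per_minute = 45 * SECONDS_PER_YEAR // 60  # if 45 die each minute
claim_one_per_45s = SECONDS_PER_YEAR // 45         # if one dies every 45 seconds

print(f"{claim_45_per_second:,}")  # 1,419,120,000: about one and a half billion
print(f"{claim_45_per_minute:,}")  # 23,652,000: around 23 and a half million
print(f"{claim_one_per_45s:,}")    # 700,800: about 700,000
```

Even the most charitable reading, one death every 45 seconds, yields far more annual deaths than Snyder's entire invented figure of three million homeless.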
