WEBLOG

November 23rd, 2014 (Permalink)

Sanity Check it Out

It's time once again to check the "sanity" of a number that, in this case, is found on some websites. Here is a quote from one such site:

…[M]ore than four million women are battered to death by their husbands or boyfriends each year [in the United States].

Is this a plausible number? How would you go about checking it by using what you already know as opposed to doing research? In other words, check the plausibility of the claim rather than simply accepting it. To get the most benefit from this exercise, don't just evaluate the number for plausibility, but make a case for your evaluation. When you're done, click on the link below to see one such check:

Sanity Check

Previous Entries in this Series:


November 19th, 2014 (Permalink)

Poll Watch: A "new numerical low"

Gallup: 'New numerical low' for Obamacare

The above is a recent headline from a Politico article―see Source 1, below, and read the whole thing: it's short! What is a "new numerical low"? According to the article:

Support for Obamacare continues to decline, with the law hitting a new low in approval, and a new high in disapproval, as the second enrollment period has opened for Americans, according to Gallup. Just 37 percent approve of the Affordable Care Act, 1 percentage point less than the previous low recorded in January, Gallup found in a new survey released Monday. The pollster notes the approval results are a “new numerical low” for Obamacare. … A majority of Americans disapprove of Obamacare, at 56 percent―a new high, Gallup said.

If you're a savvy poll-watcher, the fact that the drop in the approval rating is only one percentage point should set off your internal alarm. A single percentage point is never a significant result in a public opinion survey: most national polls have a margin of error (MoE) of plus-or-minus three percentage points, and even the largest have a MoE of around plus-or-minus two, so a one-point change would not be significant even in the largest polls. The article ends:

The Gallup poll was conducted Nov. 6-9 and surveyed 828 adults. It has a margin of error of plus or minus 4 percentage points.

So, a one-percentage-point drop is well within the MoE, and not a statistically significant result. This is presumably why Gallup, in its own article on the poll―see Source 2, below―referred to this as a "new numerical low", that is, to distinguish the drop from a significant result. Such a small change is not just statistically insignificant, it's not practically significant or "significant" in any other sense of the word. It's possible that this is the start of a downward trend in the approval rating, but it's just as possible that it's the result of statistical noise. The only way to tell will be to check future polls. As Gallup's article goes on to say: "…with approval holding in a fairly narrow range since last fall, it may be that Americans have fairly well made up their minds about the law…".
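The rough relationship between sample size and margin of error is easy to compute for yourself. The sketch below uses the standard 95-percent formula for a simple random sample; Gallup's reported ±4 is somewhat larger, presumably reflecting rounding and design effects such as weighting:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample,
    evaluated at the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# The Gallup poll surveyed 828 adults.
print(f"plus or minus {margin_of_error(828) * 100:.1f} points")  # plus or minus 3.4 points
```

Either way, a one-point change falls well inside the noise.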

I've never heard the phrase "new numerical low" before, and a web search produces results either reporting on this poll or unrelated to polling. The word "numerical" in the phrase appears to play a similar role to the word "nominal" in discussing prices. A "nominal" price is one that hasn't been adjusted for the effects of inflation. Similarly, a "numerical low" appears to be one that doesn't take into account the MoE, thus treating an insignificant result as if it means something. A price that has been adjusted for inflation is called a "real" price, in contrast to a "nominal" one. We should make a similar distinction between "numerical" lows and "real" ones in polling results.

Sources:

  1. Lucy McCalmont, "Gallup: 'New numerical low' for Obamacare", Politico, 11/17/2014
  2. Justin McCarthy, "As New Enrollment Period Starts, ACA Approval at 37%", Gallup, 11/17/2014

Resource: How to Read a Poll


November 16th, 2014 (Permalink)

Check 'Em Out

Psychologist Barbara Drescher has two interesting articles on the Skeptic Society's new "blog" Insight that discuss the difference between intelligence and rationality―see the Sources, below. I have drawn a similar distinction between the intelligence spectrum from stupid to smart, and the "wisdom" continuum from foolish to wise. Smart people, such as the physicist whose story Drescher tells, can be foolish. Though it might seem so, this is not a contradiction, because intelligence is not the same thing as wisdom. As a result, we shouldn't assume that people who have foolish, irrational beliefs are, therefore, stupid.

My experience with Mensa was similar to Drescher's, except that I never actually joined. When I wrote to the organization and expressed interest in joining, it sent me an envelope like the one Drescher received, which dampened my enthusiasm.

The second of Drescher's articles is the more important one because there she discusses what rationality is, and how it can be improved. Unfortunately, you probably can't do much to raise your native intelligence, but you can learn to be more rational by changing or improving your dispositions. Much foolishness results from lazy thinking and, while you may not be able to improve your native intelligence, you can become a less lazy thinker. Sometimes the slow but determined turtle beats the fast but lazy rabbit.

The problem that Drescher gives in the second article, about whether a married person is looking at an unmarried one, is one I've dealt with before, including in two puzzles based on it―see the Resources, below.

Sources: Barbara Drescher,

Resources:


November 8th, 2014 (Permalink)

Listen Up!

There's a new Skeptoid podcast by Craig Good about how to read, watch, or listen to the news with appropriate skepticism―there's also a transcript in case you'd rather read it. Go read or listen to the whole thing―it's short!―then return here as I have a few additional comments and amplifications. See the Source, below. I'll wait.

Oh, you're back! What took so long? Anyway, here are my comments, keyed to some of Good's section headings:

Source: Craig Good, "A Skeptical Look at the News", Skeptoid, 11/4/2014

Resource: Check 'Em Out, 12/9/2006

Fallacies:

  1. Ad Hominem
  2. Emotional Appeal
  3. False Dilemma
  4. Genetic Fallacy

The Brain Teasers

October 30th, 2014 (Permalink)

A Hallowe'en Costume Contest Puzzle

The four finalists in the All-Hallows' Eve costume contest were traditional costumes:

  1. There was a vampire who looked much like Count Dracula;
  2. A Frankenstein's monster, though more in the Hammer mode than the Karloff;
  3. A person wrapped in white gauze who was supposed to be either a bloody mummy or the victim of a terrible accident;
  4. And, finally, a werewolf-woman.

Three friends who had come to see the contest discussed the finalists:

Alice: I don't know who's going to win, but it sure won't be that pathetic Dracula!

Bob: I think either the mummy or the werewolf will win.

Carol: You're both nuts! It's definitely going to be either the vampire or the Frankenstein monster.

In the event, it turned out that only one of the three friends was right. Which costume won the contest?

Solution


October 29th, 2014 (Permalink)

Don't nudge, judge!

This is the fourth and, I expect, last entry on Steven Poole's "Not So Foolish" article―see the Source, below―lest I end up writing more words than are in the article itself. For the three previous entries, see the links below. In this entry, I want to finally discuss what Poole has to say about "nudging", which was a large part of my original motivation for focusing on this article.

Several years ago I wrote a "Book Club" series on Richard Thaler and Cass Sunstein's book Nudge―see links to the entries, below. The book was in the news at the time because Sunstein was an adviser to candidate, then President, Obama and later served for a while in the administration. For this reason, it was thought that "nudging" was the coming thing, and that we would all be getting elbowed in the ribs by the government during the Obama administration. However, in the intervening six years I've seen little if any evidence of "nudging" done by the federal government. Obamacare, the centerpiece achievement of the current president, doesn't seem very nudgy: it has an individual mandate, not an individual nudge. While certainly paternalistic, Obamacare is not exactly libertarian, though it's more libertarian than a national health service would be.

Now, Sunstein has a new book titled Why Nudge?, based on a lecture series he gave a couple of years ago. One of my disappointments with the first book was its failure, despite having a penultimate chapter dealing with objections to nudging, to deal with what seemed to me to be some obvious counter-arguments. So, I'm curious whether Sunstein, in the new book, will finally get around to addressing these doubts, or at least acknowledging their existence.

Poole's article pleases me partly because he raises some of the same objections as I had, and does so more forcefully. Unfortunately, it arrives too late to nudge Sunstein into addressing these legitimate objections in his new book, but at least it may give greater publicity to them. Here are the main unanswered objections:

I don't want to suggest that Sunstein has no answers to these objections; rather, I'm asserting that up to now he simply has not addressed them at all. Perhaps the new book will remedy this defect.

I'll give Poole the last word:

…[E]ven if we each acted as irrationally as often as the most pessimistic picture implies, that would be no cause to flatten democratic deliberation into the weighted engineering of consumer choices, as nudge politics seeks to do. On the contrary, public reason is our best hope for survival. Even a reasoned argument to the effect that human rationality is fatally compromised is itself an exercise in rationality. Albeit rather a perverse, and…ultimately self-defeating one.

Source: Steven Poole, "Not So Foolish", Aeon Magazine, 9/22/2014

Previous Entries in this Series:

  1. Wink-Wink, Nudge-Nudge, 9/29/2014
  2. The "Linda Problem" Problem, 10/2/2014
  3. The Great Pumpkin, 10/13/2014

Book Club: Nudge:

Update (11/5/2014): The New York Review of Books has a review of Sunstein's new book by Jeremy Waldron, together with a letter to the editor from Sunstein in response and Waldron's answer―see the Sources, below. According to the review, Sunstein addresses the last of the three concerns that I mentioned above, but Waldron is unimpressed:

Sunstein does acknowledge that people might feel infantilized by being nudged. He says that “people should not be regarded as children; they should be treated with respect.” But saying that is not enough. We actually have to reconcile nudging with a steadfast commitment to self-respect.

Of course, I haven't read the new book yet, so I don't know how convincing it is. However, taken together with Poole's article, this review shows that the three concerns that I raised in the original book club entries should not be ignored or simply brushed aside. These same misgivings have been raised independently by at least three different readers of Sunstein's writings on nudging. Waldron also draws an important connection between two of the problems:

Consider the…heuristics―the rules for behavior that we habitually follow. Nudging doesn’t teach me not to use inappropriate heuristics or to abandon irrational intuitions or outdated rules of thumb. It does not try to educate my choosing…. Instead it builds on my foibles. It manipulates my sense of the situation so that some heuristic―for example, a lazy feeling that I don’t need to think about saving for retirement―which is in principle inappropriate for the choice that I face, will still, thanks to a nudge, yield the answer that rational reflection would yield. Instead of teaching me to think actively about retirement, it takes advantage of my inertia. Instead of teaching me not to automatically choose the first item on the menu, it moves the objectively desirable items up to first place. I still use the same defective strategies but now things have been arranged to make that work out better. Nudging takes advantage of my deficiencies in the way one indulges a child.

In other words, in addition to treating us as children, nudging instead of debiasing is likely to infantilize us, making it less likely that we will ever grow up. As Waldron writes at the end of the review: "I wish…that I could be made a better chooser rather than having someone on high take advantage (even for my own benefit) of my current thoughtlessness and my shabby intuitions."

I have one nitpick with the review: when he discusses drunk driving, Waldron doesn't seem to understand the notion of an order of magnitude difference: "…[I]n 2010, the number of people who were killed in alcohol-impaired driving crashes…was an order of magnitude lower than that, i.e., almost one ten thousandth of the number of incidents of DWI." This greatly understates the difference, since one ten-thousandth is actually four orders of magnitude less, because 10,000 = 10⁴.
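The arithmetic behind the nitpick is worth spelling out: "orders of magnitude" count factors of ten, so a ratio of one ten-thousandth spans four of them, not one:

```python
import math

# "One ten-thousandth" spans four factors of ten, that is, four
# orders of magnitude, since 10,000 = 10**4.
orders = math.log10(10_000)
print(orders)  # 4.0
```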

Sources:


October 28th, 2014 (Permalink)

Hallowe'en Headline

Grieving Parents Shocked When Dead Son Answers The Door

"The paw!…The monkey's paw!"

Sources:


October 22nd, 2014 (Permalink)

Check Your Sanity!

A keen sense of proportion is the first skill―and the one most bizarrely neglected―for seeing through numbers. Fortunately, everyone can do it. Often all they have to do is think for themselves.―Michael Blastland & Andrew Dilnot, The Numbers Game

It's time once again to get out your old envelopes or used cocktail napkins, and go figure! A valuable critical thinking skill is to be able to check numerical claims made in the media for plausibility, a process which is sometimes called a "sanity check". You might be surprised at how many won't even stand up to such a check, for the journalists who write media stories are often innumerate or number-phobic. For this reason, don't assume that reporters will check the numbers for you; we've seen many examples in the past―see the Resources, below―where they did not. Also, don't be afraid of the math! To do a sanity check seldom requires any advanced mathematics, and definitely does not require any in this case: if you can add, subtract, multiply, and divide, you can check the sanity of the following number.

The best way to learn how to do such number checks is to see how it's done and work a few yourself. As mentioned, you won't need to learn any new math; you'll just need to learn how to apply what you already know. Here's the number that we want to check for sanity, taken from Michael Blastland and Andrew Dilnot's excellent book The Numbers Game: "In 1997 the British Labour government said it would spend an extra £300 million (about $600 million) over five years to create a million new child care places."

Is this number sane? The question we want to answer is: Is $600 million over five years a large enough amount of money to create a million new child care places, that is, is that amount of funding for day care centers and other child care providers enough to lead to an additional million spots being available for parents who need day care?

How can you answer this question using simple math and what you already know? Most of us do not have experience with large numbers that would allow us to put them into perspective; we lack an intuitive feel for the difference between millions, billions, and trillions. So, one approach to dealing with large numbers is to use math to translate them into smaller numbers that we do have a feel for. Give it a shot, then when you're ready click on "Sanity Check" below to see one way to do it:

Sanity Check

Source: Michael Blastland & Andrew Dilnot, The Numbers Game: The Commonsense Guide to Understanding Numbers in the News, in Politics, and in Life (2009), pp. 12-15
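The "translation" step described above takes only a couple of lines of arithmetic. To avoid spoiling the puzzle, the sketch below uses a different, invented example: a hypothetical $1 billion program spread over ten years and a population of 300 million:

```python
# Translate a headline-sized total into a human-scale figure by
# dividing through. All numbers here are hypothetical.
total_dollars = 1_000_000_000   # "$1 billion"
years = 10
population = 300_000_000

per_person_per_year = total_dollars / years / population
print(f"${per_person_per_year:.2f} per person per year")  # $0.33 per person per year
```

Thirty-three cents per person per year is a number anyone can have a feel for, which is the whole point of the exercise.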

Resources:


October 21st, 2014 (Permalink)

New Book: Arguing with People

Arguing with People is a new book by philosopher Michael A. Gilbert, also author of How to Win an Argument: Surefire Strategies for Getting Your Point Across. As is suggested by the subtitle of the latter, both appear to focus on practical argumentation.


October 13th, 2014 (Permalink)

The Great Pumpkin

This is the third entry on Steven Poole's "Not So Foolish" article―see the Source, below, and the Resources for the two previous entries. The section of the article that I want to comment on is short, so I suggest reading or re-reading it, but here are the most important parts:

One interesting consequence of a wider definition of ‘rationality’ is that it might make it harder to convict those who disagree with us of stupidity. …Dan M Kahan, a professor of law and psychology, argues that people who reject the established facts about global warming and instead adopt the opinions of their peer group are being perfectly rational in a certain light:
Nothing any ordinary member of the public personally believes about […] global warming will affect the risk that climate changes [sic] poses to her, or to anyone or anything she cares about. […] However, if she forms the wrong position on climate change relative to the one [shared by] people with whom she has a close affinity―and on whose high regard and support she depends on [sic] in myriad ways in her daily life―she could suffer extremely unpleasant consequences, from shunning to the loss of employment. Because the cost to her of making a mistake on the science is zero and the cost of being out of synch with her peers potentially catastrophic, it is indeed individually rational for her to attend to information on climate change in a manner geared to conforming her position to that of others in her cultural group.

I've omitted a paragraph comparing this situation to "the tragedy of the commons", which is a useful exercise if you're familiar with the "tragedy" but probably not otherwise. Also, while that analogy draws a distinction between individual and group rationality, that's not the distinction that I want to call attention to. I think Poole―and possibly Kahan, as well―is confusing two distinct types of individual rationality:

  1. The rationality of a belief: If the woman in the example comes to her belief in the wrong way, then her belief may be irrational. For instance, suppose that she thinks to herself: "My peers all disbelieve in climate change, and they would shun me if I believed in it, therefore I won't believe in it." In other words, the woman forms her disbelief using the appeal to consequences―see the Fallacy, below. The belief itself is not based on adequate or relevant evidence, therefore it is irrational.
  2. The rationality of believing: In contrast, despite the fact that the belief itself is irrational, it may still be rational for the woman to believe it, at least in the narrow, economic sense of "rational".

Perhaps a different example will make the distinction clearer: Let's take as our example of an irrational belief the existence of The Great Pumpkin. Presumably, we can all agree that there is not sufficient credible evidence of its existence, such that to believe in The Great Pumpkin is to believe in something irrational, that is, such a belief is irrational in sense 1, above.

Now, suppose that a rich man were to offer you a million dollars if you believe in The Great Pumpkin. Put aside the objection that you can't believe something by an act of will, which may be true but is beside the point. Also, to answer the objection that no one can see inside your head to tell if you really believe something, let's suppose that the rich man has a "psychoscope" that allows him to read your beliefs. So, in order to get the million dollars, you must really believe in The Great Pumpkin.

Clearly, it would be rational for you to believe in The Great Pumpkin, given that you stand to gain a million dollars and lose very little by so believing. Perhaps you'd be rather embarrassed by believing it, but a million dollars ought to help make up for that: you can cry all the way to the bank, as someone once said. So, it is rational for you to believe in The Great Pumpkin, but the belief itself is still irrational.
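The economic sense of "rational" at work here can be put in expected-payoff terms. Here is a minimal sketch; the dollar figures, including the price put on embarrassment, are invented purely for illustration:

```python
# Compare the payoffs of believing vs. not believing, given the rich
# man's (hypothetical) offer. All figures are invented.
reward_for_believing = 1_000_000   # the rich man's offer
cost_of_embarrassment = 1_000      # a made-up price on looking silly

payoff_believe = reward_for_believing - cost_of_embarrassment
payoff_disbelieve = 0

better_choice = "believe" if payoff_believe > payoff_disbelieve else "disbelieve"
print(better_choice)  # believe
```

Note that nothing in this calculation touches the evidence for The Great Pumpkin, which is exactly why the belief itself remains irrational in sense 1.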

In other words, there are at least two different ways that a belief can be rational, and these ways may conflict. In the first way, it is rational to hold the belief because the belief itself―that is, the proposition that is believed―is supported by sufficient evidence. In the second way, it is rational to believe the proposition because of the good or bad consequences that may follow from believing or disbelieving it.

Now, there is no guarantee that these two types of rationality will always go together. There is, of course, no problem when one believes a proposition that is supported by appropriate evidence and expects good consequences from doing so. Similarly, we would all surely condemn believing a proposition for which one has no good evidence, neither expecting good nor fearing bad consequences of believing it.

What we have to wonder about, then, are when these types of rationality misalign: Believing something for which one has inadequate evidence because of either good consequences one expects from believing, or bad consequences one fears from disbelieving. This is what's happening in the case of The Great Pumpkin, and that of the woman who doesn't believe in climate change. We needn't dwell on the case of those who persist in believing something for which they have good evidence in the face of threats of bad consequences or offers of rewards for not believing―they have their reward.

To return to Poole, I'm certainly on his side in thinking that we shouldn't be so quick to accuse those we disagree with of stupidity. For one thing, much of the time this is simply incorrect. It may indeed be rational, in the second sense discussed above, for the woman in Poole's example to believe what her peer group believes, or at least "to attend to information…in a manner geared to conforming her position to that of others in her cultural group". However, as far as I can see, this is no objection to the psychological work that Poole is criticizing. Instead, Poole is describing a particular way that people come to hold irrational beliefs, namely, it can be economically rational to believe a proposition that is poorly supported by evidence.

Source: Steven Poole, "Not So Foolish", Aeon Magazine, 9/22/2014

Resources:


October 8th, 2014 (Permalink)

Wikipedia Watch

Here's a follow-up to last month's entry on Neil DeGrasse Tyson's contextomy of President Bush―see the Resource, below. Wikipediocracy has an instructive article about the controversy on Wikipedia as to whether its biography of Tyson should mention the quote controversy―see the Source, below. Check it out.

As I write this, Wikipedia's biography of Tyson contains no reference at all to the Bush contextomy. One could justify this on the basis that the issue is too unimportant to include in a biography, which is indeed how some of its "editors" have argued against its inclusion on the article's "Talk" page. However, the article manages to mention that Tyson has been portrayed in an issue of Action Comics, as well as having appeared in an episode of The Big Bang Theory. It even notes Tyson's appearance as keynote speaker at "The Amazing Meeting", but without mentioning the controversy that resulted from his speech. So, if this is any indication of Wikipedia's standards of significance, it's rather hard to believe that the contextomy just isn't important enough to merit, say, a short paragraph.

However, there's the additional objection that the "editors" are apparently unable to find reliable evidence that a controversy even exists, let alone that Bush was quoted out of context. This, despite the fact that Tyson has now admitted and apologized for misquoting Bush. Apparently, even Tyson himself isn't a sufficiently reliable source about what he said, since he only self-published his apology on his Facebook page! In contrast, we learn from the biography the important fact that Tyson won a gold medal in ballroom dancing while in college, despite there being no source at all cited. By the way, I'm available for remedial instruction in researching and evaluating evidence.

The "Talk" page debate suggests that Wikipedia is in danger of turning into, if it hasn't already become, a kind of Tower of Babel. While this entire issue could have been dealt with in a few sentences, probably fewer than a hundred words in the article itself, thousands of words have been exchanged on the "Talk" page arguing back and forth about whether to include those hundred words. All of which suggests that if you must check Wikipedia, then you should also check the "Talk" page of any article you read to see how the sausage got made. You may be amazed by both what was put in and what was left out.

Source: Hersch, with research assistance from Eric Barbour & Andreas Kolbe, "'Our Wikipedia is the Wikipedia who defamed the stars'", Wikipediocracy, 10/5/2014

Resource: The Bushisms Strike Back!, 9/21/2014

Previous Wikipedia Watches: 6/30/2008, 10/22/2008, 1/25/2009, 3/22/2009, 5/16/2009, 7/21/2009, 1/9/2013, 10/24/2013, 5/2/2014, 7/21/2014


October 2nd, 2014 (Permalink)

The "Linda Problem" Problem

This is a sequel to the previous entry on Steven Poole's "Not So Foolish" article―see the end of that entry for a link. Instead of tacking it on as an "update", I've decided to make it a separate entry, since the famous "Linda problem" has interest in its own right.

As part of his criticism of skepticism about human reason, Poole critiques the well-known "Linda problem". His argumentative strategy here seems to be to undermine the claims of cognitive psychologists to have found faults in most people's thinking. In the case of the Linda problem, he argues that the answer often given by people to the problem, which is judged to be erroneous by psychologists, may actually be rational after all.

Poole introduces his criticism of the Linda problem by associating it with a definition of rationality from economics. Now, I think there are good reasons to be doubtful about the economic definition of rationality, which certainly seems too narrow if nothing worse. However, it doesn't appear that the Linda problem―or other pieces of evidence for cognitive errors involving judgments of probability―has anything specifically to do with economic "rationality". It's true that the work of psychologists on cognitive biases has been used to cast doubt on whether people are rational in the narrow economic sense, but the Linda problem suggests that people judge probabilities in ways that conflict with probability theory, which is a much deeper failure of rationality.

Poole's criticism of the psychologists' interpretation of the Linda problem is in terms of conversational implications of the way the original problem was worded. "Conversational" implications are implications of the fact that something was said, or of the way it was said, or, in this case, of something that was not said, as opposed to implications of what was said. So, in the Linda problem, saying that Linda is a bank teller, and nothing else, may be taken as implying that she's not a feminist.

This is a familiar objection: it was discussed in Massimo Piattelli-Palmarini's book Inevitable Illusions twenty years ago. For this reason, I won't discuss the objection in detail; you can read Piattelli-Palmarini's book if you're interested in the details. In any case, the conjunction "fallacy" is still a mistake in probability theory, and at most this objection shows that the "fallacy" may be less common than the experiments seem to show. It may even be the case that the mistake is not common enough to merit the term "fallacy".

So, let's grant Poole his point about the Linda problem and assume that few if any people actually commit the conjunction fallacy. Even so, the Linda problem is only one small, albeit well-known, piece of evidence in favor of the cognitive psychologists' conclusions, and the conjunction "fallacy" is just one of many supposed cognitive biases and illusions. So, even if we accept Poole's argument on this point, it goes only a small way in undermining the psychologists' case.
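Whatever the status of the conversational-implication objection, the underlying rule of probability theory stands: a conjunction can never be more probable than either of its conjuncts. A quick sketch, with made-up numbers for the Linda case; only the structure matters:

```python
# Toy probability assignment over the four combinations of "bank
# teller" and "feminist". The numbers are invented; whatever numbers
# are chosen, P(teller and feminist) <= P(teller) must hold, because
# every outcome counted toward the conjunction is also counted
# toward the conjunct.
probs = {
    ("teller", "feminist"): 0.05,
    ("teller", "not feminist"): 0.10,
    ("not teller", "feminist"): 0.50,
    ("not teller", "not feminist"): 0.35,
}

p_teller = sum(p for (t, _), p in probs.items() if t == "teller")
p_teller_and_feminist = probs[("teller", "feminist")]

print(f"{p_teller:.2f} {p_teller_and_feminist:.2f}")  # 0.15 0.05
assert p_teller_and_feminist <= p_teller
```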

Source: Massimo Piattelli-Palmarini, Inevitable Illusions: How Mistakes of Reason Rule Our Minds (1994), pp. 67-68.

Fallacy: The Conjunction Fallacy

Previous Entry
