January 25, 2015 (Permalink)
A Mutant Statistic
…[W]hen statistics seem incredible, when we find ourselves wondering whether things can possibly be that bad, it can be worth trying to figure out how a number was brought into being. Sometimes we can discover that the numbers just don't add up….―Joel Best
An article about some states requiring that high school students pass a test for United States citizenship in order to graduate―see Source 2, below―makes the following claim about the amount of time spent in public schools on civics:
The Center for Education Policy conducted surveys of school districts across the nation that showed the time spent on [civics and social studies] subjects in elementary schools was reduced from an average of 2,239 minutes per week in 2000 to 164 minutes per week in 2008, a 93 percent decrease.
Horrors! I didn't realize that things were so bad. But wait a minute: are they really that bad? Is it plausible that the amount of time elementary schools devote to these subjects decreased so precipitously in eight years? Don't just assume that a number must be true because you see it in black on white on your computer screen. How would you go about testing these numbers for plausibility using what you already know? When you think you know the answer, click on "Sanity Check", below, to see one such test.
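Here's one rough way to run such a test. The figures are my own back-of-the-envelope assumptions, not the hidden "Sanity Check", so try your own first: suppose a school week is five days of about six and a half hours of class time each, and compare that to the reported numbers.

```python
# Back-of-the-envelope plausibility test; the schedule is an assumption:
# 5 school days per week, about 6.5 hours of class time per day.
school_week_minutes = 5 * 6.5 * 60  # 1,950 minutes in the entire school week

before, after = 2239, 164  # reported minutes/week on civics, 2000 vs. 2008
decrease = 1 - after / before

print(f"Reported decrease: {decrease:.0%}")
print(f"Entire school week: {school_week_minutes:.0f} minutes")
print(f"2000 figure exceeds the whole week by {before - school_week_minutes:.0f} minutes")
```

On these assumptions, the 93% decrease is internally consistent with the two figures, but the 2000 figure is larger than the entire school week, which is exactly the kind of red flag Best has in mind.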
- Joel Best, Stat-Spotting: A Field Guide to Identifying Dubious Data (2008), pp. 24-25
- Rod Kackley, "Is Making High School Students Pass Citizenship Test the Right Move?", PJ Media, 1/22/2015
January 23, 2015 (Permalink)
Blurb Watch: Selma
If you wondered whether blurbs for the movie Selma would take the kind of liberties with the reviews they quote that the movie itself does with history, here's one answer. A newspaper ad for the movie included the following quote:
"In impact and import, 'SELMA' IS THE FILM OF THE YEAR."
RICHARD CORLISS, TIME
Here's the context of the quote from Corliss' review:
If not quite in quality then certainly in import and impact, this is the film of the year―of 1965 and perhaps of 2014.
I understand why "perhaps" was left out of the blurb, together with the part about how it's "not quite" the movie of the year "in quality", but why were the words "impact" and "import" switched? The ways of the blurber―blurbist?―are strange.
- Ad for Selma, The New York Times, 1/23/2015, p. C15
- "Movie Blurbs, The Inside Story", Inside Edition, 11/25/2013. From a little over a year ago, a general article on how good blurbs happen to bad movies.
- Richard Corliss, "Review: Selma Is the Film of the Year―But 1965 or 2014?", Time, 1/1/2015.
- Ron Radosh, "The Truth, History, and the Movie Selma", PJ Media, 1/12/2015.
- Mark K. Updegrove, "What ‘Selma’ Gets Wrong: LBJ and MLK were close partners in reform", Politico Magazine, 12/22/2014.
January 17, 2015 (Permalink)
A Not So Crystal-Clear Graph
Can you see anything wrong with the chart shown? If you compare the two rocks of crystal meth pictured, the bigger one appears to be several times the size of the smaller. However, if you pay attention to the numbers superimposed on the rocks, the larger percentage is only a little more than twice the size of the smaller. What's going on here?
Though it might not look like it, this appears to be a "one-dimensional pictograph", to use Darrell Huff's phrase―see the Resource, below―which is a bar chart that substitutes pictures for bars. The rocks of meth spread out so much that they might not appear to be standing in for the bars of a bar graph, but if you measure each rock from its bottom to its top you'll see that the larger rock is slightly more than twice as tall as the smaller one. Thus, it's my guess that the graph-maker started out with a bar chart, but wanted to make it more visually striking by replacing the bars with the rocks.
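Huff's point about one-dimensional pictographs can be put in numbers. Assuming, purely for illustration, that the rocks' heights stand in roughly the ratio described above, a bit more than two to one, the pictures' areas grow with the square of that ratio:

```python
# Hypothetical height ratio: the larger rock is about 2.2x as tall as the
# smaller one, matching the "slightly more than twice as tall" in the text.
height_ratio = 2.2

# A picture scaled up in both dimensions grows in area by the square of its
# height ratio, so the eye compares areas, not heights.
area_ratio = height_ratio ** 2

print(f"Height ratio (what the numbers say): {height_ratio:.1f}x")
print(f"Area ratio (what the eye sees):      {area_ratio:.1f}x")
```

This is why the bigger rock "appears to be several times the size of the smaller" even though the heights are drawn honestly.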
A further problem comes from the use of tenths of a percent on the larger rock. Was the study in question really that precise about what percentage of HIV-positive men had tried crystal meth? Such precision is unheard of in what must be a survey of homosexual men in New York City.
In this case, a picture is not really necessary, since it's easy enough to compare two percentages, but the picture is actually misleading in a way that the numbers alone could not be. So, using a graph such as this is worse than having no graph at all.
Source: Joel Best, Stat-Spotting: A Field Guide to Identifying Dubious Data (2008), pp. 22-24. The example graph is on page 23, and originally accompanied an article from Newsweek magazine.
Resource: The One-Dimensional Pictograph, 8/1/2013
January 16, 2015 (Permalink)
A Major Fallacy
It's time once again to play "Name That Fallacy!". This is the game where you read a passage and name the fallacy committed in it. So, let's get started:
"…I thought you'd been in the cavalry," Fen said to the Major as they walked on. "Before it was mechanised, I mean."
"Quite right, my dear fellow. Twenty years of it, I had, in India."
"But didn't that get you used to horses?"
"No, the reverse," said the Major. "The more I saw of horses, the more unused to them I got. I was drunk for a week," he confided, "celebrating the day they took them all away. Because after they'd gone, don't you know, I couldn't have a fall."
"You mean you'd had a lot of falls."
"No, none. I never had a fall, not even when I was learning to ride, as a child. Well, you can see what that implied. Theory of Probability and so forth," said the Major…. "The longer I went on without having a fall, the more likely it became that I would have one. In the end it got a bit unnerving, because every time I got on a horse, the chances were about a billion to one against my not having a fall. I won through, though," he said proudly. "I survived. No fall. I'm here to tell the tale. …"
When you think that you can put a name to the fallacy committed by the Major, click the link below:
Source: Edmund Crispin, The Glimpses of the Moon (Avon, 1979), p. 21
January 13, 2015 (Permalink)
"I was told there would be no math."
Perhaps it would be too much to expect that business and economics writers for Slate could do simple math. But it shouldn't be too much to expect that they know how to use a calculator. Check out the following correction appended to an article from Slate's "Moneybox", "a blog about business and economics"―see the Source, below:
Correction, Jan. 12, 2015: This post originally misstated that Burger King’s 10-piece chicken nuggets were selling for $1.49, or 10 cents per nugget. $1.49 for 10 nuggets is about 15 cents per nugget.
I suppose that it's also too much to expect that Slate might have fact checkers. "Who do you think we are," I imagine the magazine's writers replying, "the bleeping New Yorker?" But does "Moneybox" at least have an editor? Since it's called a "blog", perhaps that means that it's unedited.
I point this out not just to make fun of Slate―that's a bonus!―but because there's a serious issue here: It's a mistake to assume that reporters are able or willing to do even the simplest math. Apparently, we can't even expect journalists who specialize in economics and business to be able to move a decimal point one place to the left.
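The arithmetic that the correction fixes is one line long, which is rather the point:

```python
price, nuggets = 1.49, 10
per_nugget = price / nuggets  # $0.149, i.e., about 15 cents

print(f"${price} / {nuggets} nuggets = {per_nugget * 100:.1f} cents per nugget")
```

Dividing by ten just moves the decimal point one place to the left: $1.49 becomes $0.149, which rounds to 15 cents, not 10.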
Source: Alison Griswold, "Burger King Hopes You’re Lovin’ Its 15-Cent Chicken Nuggets Promotion", Slate, 1/12/2015
Resource: Innumeracy at Slate, 11/6/2011
January 8, 2015 (Permalink)
The Fine-Tuning Argument Strikes Again!
On Christmas day last year, The Wall Street Journal published an article with the following headline―see Source 1, below:
Science Increasingly Makes the Case for God
In the article beneath, Eric Metaxas made a scientific case for the existence of a god. What I don't understand is why he didn't make the scientific case for Santa Claus. Metaxas' argument is the familiar one based on "fine-tuning", which I've already had my say about―see the Resource, below. Even if all of Metaxas' scientific claims are correct―which I doubt―that doesn't affect the fundamental problem with the fine-tuning argument.
…[T]he odds against the universe existing are so heart-stoppingly astronomical that the notion that it all “just happened” defies common sense. It would be like tossing a coin and having it come up heads 10 quintillion times in a row. Really?
No, not really. This is a bad analogy. It's more like being dealt a hand in bridge: the odds against being dealt any particular bridge hand are also astronomical―specifically, 1 in 635,013,559,600―but nobody concludes that the dealer must have stacked the deck to produce that particular hand.
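The bridge figure is easy to verify: a hand is 13 cards chosen from a 52-card deck, so the number of possible hands is the binomial coefficient C(52, 13).

```python
import math

# Number of distinct 13-card bridge hands dealt from a 52-card deck.
bridge_hands = math.comb(52, 13)

print(f"{bridge_hands:,}")  # 635,013,559,600
```

Every hand you are ever dealt is a one-in-635-billion event, yet something had to be dealt; improbability alone implies nothing about design.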
Moreover, Metaxas seems to think that if the odds are a billion to one "against the universe existing" instead of a million to one, then the argument is stronger. Would odds of a trillion to one instead of a billion to one make the argument a thousand times stronger? If the argument were any good, then odds of a million to one would be plenty; odds of a gazillion to one won't turn a bad argument into a good one.
Also, with "odds against the universe existing" of ten quintillion to one, why doesn't Metaxas conclude that it doesn't really exist, despite appearances? If you really must draw a religious conclusion from this claim, why not conclude that the physical universe is an illusion? While I'm pretty convinced that the physical universe does indeed exist, I'm not sure that the evidence of its existence is strong enough to overcome a prior probability against it of 10 quintillion to one.
By the way, I'm not completely joking about Santa Claus. Metaxas published a book last year titled Miracles, which I see from the miracle of Amazon's "Look Inside!" feature has a chapter on "The Miracle of the Universe". Is this article drawn from that chapter of the book? Unfortunately, "Look Inside!" doesn't let me look inside that chapter, and I don't currently have access to a copy of the book, so I don't know. However, the usual argument against Santa is that it would take a miracle for him to do what he's supposed to do on Christmas eve. Thankfully, Metaxas seems to have overcome that objection.
Source: Eric Metaxas, "Science Increasingly Makes the Case for God", The Wall Street Journal, 12/25/2014.
Resource: The Lottery Fallacy, 7/3/2014.
Via: Steven Novella, "The Science of God", Neurologica Blog, 1/8/2015. Novella has a go at debunking the questionable science in Metaxas' article.
January 6, 2015 (Permalink)
Check it out…or don't!
Philosopher and critical thinker Tim van Gelder had an opinion piece published in the Australian newspaper The Age last month on the perils of all-or-nothing thinking, though he doesn't call it that―see the Source, below. Instead, he calls it "Booleanism" after the logician and mathematician George Boole.
I would point out, in defense of Boole, that he was probably not a "Boolean" in van Gelder's sense of the term, and Thomas Bayes was almost certainly not a "Bayesian" as that term is currently used. Bayes was merely the first to prove the theorem that goes under his name―or, at least, the first to get credit for it. However, "Bayesianism" has come to refer to a whole philosophical view about probability developed by subsequent philosophers. In fact, van Gelder's description of Bayes' theorem as "a basic law of probability governing how to modify one's beliefs when new evidence arrives" is a reflection of that later theory, and not the way that Bayes himself would have described it. I mention this not to criticize the theory, since I'm as Bayesian as the next Bayesian―probably more so!
Of course, the proper Bayesian response to the Booleanism versus Bayesianism debate is not to reject Booleanism wholesale―that's the Boolean response to Bayesianism. Rather, the Bayesian should realize that Booleanism is often a useful approximation: not every issue is black or white, but some are. Furthermore, many issues are dark grey and off-white. In other words, the world may be painted in shades of grey, but some of those shades are so close to black or white as to make no practical difference. For example, you really are either going to read the whole thing or you won't―I suggest the former! So, all joking aside, check it out.
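For readers who haven't met it, van Gelder's "basic law of probability governing how to modify one's beliefs when new evidence arrives" can be shown in a few lines. The numbers here are invented purely for illustration:

```python
# Toy Bayesian update (all numbers invented): revising a degree of belief
# in a hypothesis H when evidence E arrives.
prior = 0.30            # P(H): belief in H before the evidence
p_e_given_h = 0.80      # P(E | H): how likely the evidence is if H is true
p_e_given_not_h = 0.20  # P(E | not-H): how likely it is if H is false

# Total probability of the evidence, from both possibilities.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
posterior = p_e_given_h * prior / p_e

print(f"Belief moves from {prior:.2f} to {posterior:.2f}")
```

Note that the belief moves partway, not all the way: a Boolean would flip from "false" to "true", while the Bayesian shades from one grey to another.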
Source: Tim van Gelder, "Do you hold a Bayesian or Boolean worldview?", The Age, 12/3/2014
Via: Tim van Gelder, "From Booleanism to Bayesianism", 1/6/2015
Fallacy: The Black-or-White Fallacy
December 27th, 2014 (Permalink)
New Book: Cognitive Productivity
Psychologist Luc P. Beaudoin's new book Cognitive Productivity: Using Knowledge to Become Profoundly Effective is now available as an ebook. A couple of smart and lucky readers have won copies―see the Resources, below―but now you can get yours without having to be smart or lucky!
December 23rd, 2014 (Permalink)
A Nutty Holiday Puzzle
It was a hectic day at the Allnut Gourmet Nut Company, as it was the last day to ship out orders that could be expected to arrive in time for Christmas. The company sold four different types of vacuum-packed canned nuts in its "Just Plain Nuts" line: chestnuts, almonds, cashews, and a mixture of almonds and cashews. If the shipment had to wait another day, at least some of the orders would have to be shipped with expedited delivery, which would cut into the company's profits.
Susan Allnut, who was in charge of shipping for the family business, was sitting in her office that morning when her phone rang. "Sue! Sue!" came a frantic voice over the line.
"Tom?" Sue replied, recognizing the voice of her brother, Thomas, who was in charge of production. "What's the matter?"
"Hold the shipment!" Tom shouted into her ear.
"Calm down, Tom!" she almost shouted back. "What's the matter?"
"The labels on the cans are all wrong!"
"What?" Sue felt a sinking feeling in the pit of her stomach. If the shipment was delayed more than a day or two, the company might lose money over the holidays. Black Friday this year could be very black indeed.
"We're going to have to open all the cans to find out what's in them, and then recan and relabel the whole shipment!"
"Now, don't panic, Tom, just tell me exactly what happened."
"The labels were loaded incorrectly into the labeling machine on the assembly line, and every can came out with the wrong label!"
There was a long pause as Susan thought. "But you don't know what cans the wrong labels went on?" she finally asked.
"No, all we know is that they're on the wrong cans." Tom groaned.
There was another long pause, then Tom interjected: "Sue, are you still there?"
"Yes, I'm thinking! You mean none of the cans in the current shipment has the right label?"
"That's right! If it says 'almonds' on the can there won't be a single almond in there; if it says 'cashews', no cashews. It's a disaster!"
There was another long pause as Susan thought, and then she sighed with relief.
"Relax, Tom. It's bad but it's not a disaster. We won't have to open any cans; just switch the labels. It'll be a lot of work, but we can still get the shipment out today."
"But how are we going to know what labels to put on the cans without opening them?"
How, indeed? How did Susan Allnut solve the problem of the mislabeled cans? If you think you know the solution, click on the link below.
December 21st, 2014 (Permalink)
Check out Alison Hudson's defense of Wikipedia in an article published on Skeptoid earlier this month―see Source 3, below. Despite my specific criticisms and general skepticism about Wikipedia in these "watches" over the years, I agree with much of what Hudson writes. However, I do have a few specific disagreements that I'll lay out below.
In defending the reliability of Wikipedia, Hudson writes: "Vandalism happens, but it’s usually caught fairly quickly and reverted; and the vandals are usually blocked and banned." However, this claim is unwarranted, and is a good example of what is called "survivor bias"―see Sources 1 and 2, below. We don't know how quickly "vandalism" is caught and corrected, or "vandals" banished, because we only know about those who are caught. For all we know, the vandals who are quickly caught are only the most incompetent ones.
Similarly, Hudson claims that "you’ll notice that most of the longstanding [hoaxes] were able to survive mostly because they were small, unimportant topics that people weren’t likely to be referencing anyway…." How many hoaxes have yet to be exposed? We don't know and, as a consequence, we can't accurately judge reliability in this way.
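The survivor-bias worry can be made concrete with a toy simulation; every number in it is invented. Suppose acts of vandalism persist for varying lengths of time, but only those corrected within some window ever show up in the record as "caught". The recorded cases then look far quicker to catch than the true average:

```python
import random

random.seed(0)

# Invented model: each act of vandalism persists for an exponentially
# distributed time with a true mean of 30 days.
lifetimes = [random.expovariate(1 / 30) for _ in range(100_000)]

# Only vandalism reverted within a week ever gets logged as "caught quickly".
caught = [t for t in lifetimes if t <= 7]

true_mean = sum(lifetimes) / len(lifetimes)
observed_mean = sum(caught) / len(caught)

print(f"True mean persistence:      {true_mean:.1f} days")
print(f"Mean among 'caught' cases:  {observed_mean:.1f} days")
```

Judging only from the caught cases, vandalism looks like it's reverted within a few days on average, even though in this model the true mean is a month; the slow-to-catch cases never enter the sample. That's the shape of the inference Hudson's claim rests on.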
Hudson is on firmer ground when appealing to studies that have had experts examine articles for accuracy, and I have no specific objections to them, but then I haven't examined the studies myself. However, such studies seem to have focused on particular areas, such as information about drugs, that may attract better contributors than other areas. As a result, it's dangerous to generalize to the entire project based on such narrow studies. Instead, what is needed is a random selection of articles to be evaluated by appropriate experts and, judging from what Hudson reports, such a study has yet to be performed. Until it is, we're in a poor position to evaluate the general reliability of Wikipedia.
In general, my view is that Wikipedia is not an encyclopedia in the traditional sense because it lacks authoritativeness, which is different from reliability. At the least, it should not be used as encyclopedias have traditionally been used, that is, as authoritative statements of what is currently known about a topic.
Rather, Wikipedia should be used as a guide to further research, that is, as a sort of written directory to internet resources. It's alright for it to be your first stop in researching a topic, but it should seldom if ever be your last one, which is why educators should forbid citing it in research papers, except in unusual circumstances.
On a positive note, I entirely endorse the following sentiments:
…I often actually tell my students to start with Wikipedia when they conduct research. Many times students, like your typical Internet commenter, know a little bit about a topic but not nearly enough to go on at length. In fact, in some classes I will actually assign the Wikipedia article as a reading assignment and then have them answer some pointed questions based on the information found there. They’re going to read it anyways; I might as well acknowledge the fact and make sure everyone’s got the basics down before they begin the real research. … Of course, I also tell my students to verify information in a second source, because I’m aware that any single source of information may be flawed. That’s not my stance just on Wikipedia, but on any important fact. Starting with Wikipedia is fine; but ending with Wikipedia is a lazy way to do research.
- "Survivorship bias", Wikipedia (Accessed: 12/21/2014)
- Jordan Ellenberg, How Not to be Wrong: The Power of Mathematical Thinking (2014), pp. 8-9
- Alison Hudson, "Stop Wikipedia Shaming", Skeptoid, 12/1/2014
December 14th, 2014 (Permalink)
Charts & Graphs
This installment's chart doesn't fit into any of the types of misleading graphs we've seen previously, but has its own unique problems.
First of all, a bar chart is not the best choice for conveying this information, which could be conveyed better in words or, perhaps, in a subdivided pie chart. The bars represent percentages of a whole―all rapes―with each bar representing a finer slice of that whole. This is why the bars get smaller as they descend the chart, except for the last bar, which represents the 97 rapes that do not end in prison time. The chart shows that 40 out of 100 rapes are reported, 10 of those 40 lead to an arrest, 8 of those 10 are prosecuted, 4 of those 8 end in a conviction, and, finally, 3 of those 4 lead to prison sentences.
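As a check, the chart's chain of figures is at least internally consistent: multiplying the stage-by-stage rates recovers the 3-in-100 bottom line.

```python
# Stage-by-stage rates read off the chart's figures.
rates = {
    "reported":   40 / 100,
    "arrested":   10 / 40,
    "prosecuted":  8 / 10,
    "convicted":   4 / 8,
    "imprisoned":  3 / 4,
}

overall = 1.0
for stage, rate in rates.items():
    overall *= rate

print(f"Rapes ending in prison time: {overall:.0%}")
```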
Now, my point here is not to criticize these claims, since most of these statistics are gathered by the police and courts and may well be accurate. However, the claim that only 40% of rapes are reported to the police must be an estimate based on a survey, since the police cannot know for sure how many rapes are not reported to them. Nevertheless, let's accept these figures as accurate for the purpose of analyzing the way that the chart represents them.
The problem that I want to focus on is a conceptual one, namely, that the graph begins at the top talking about one thing―rapes―and ends up talking about a different thing―rapists. Its main point seems to be that only 3% of rapists end up doing prison time for their crimes, but what the second-to-last bar actually represents is the number of rapes that end in prison time for a convicted rapist. At first glance, these may seem to be the same thing, but they're not: for them to be the same, there would have to be a one-to-one relationship between rapes and rapists, that is, each rape would have to have been committed by a distinct rapist. We've seen this assumption before in a misleading chart about rape and rapists―see the Resource, below, under point 4―so it seems to be a tempting mistake to make.
To see that it is a mistake, suppose that the three imprisoned rapists were not only guilty of the rapes they were imprisoned for, but between them were guilty of the remaining 97 rapes out of the hundred. This would mean that not 3% of rapists did time, but 100%. Of course, this is a very unlikely scenario, but it is not unlikely that most rapists who go to prison are guilty of other rapes for which they don't serve time. In fact, according to another page from the same organization responsible for the graph: "rapists tend to be serial criminals"―see Source 1, below. So, even if only three rapes out of a hundred lead to a rapist going to prison, that rapist may well be guilty of other rapes which were either unreported, did not lead to the rapist's arrest, were not prosecuted, or for which he was not convicted. In this way, even though only 3% of rapes lead to a prison sentence, more than 3% of rapists may do hard time.
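The distinction can be put in arithmetic. The rape figures below are the chart's; the offender counts are the deliberately extreme hypothetical described above.

```python
rapes = 100
rapes_ending_in_prison = 3

# The chart's implicit reading: one distinct offender per rape.
share_of_rapes = rapes_ending_in_prison / rapes

# Extreme hypothetical: the same 3 imprisoned offenders committed all 100.
offenders = 3
offenders_imprisoned = 3
share_of_offenders = offenders_imprisoned / offenders

print(f"Rapes ending in prison time: {share_of_rapes:.0%}")
print(f"Offenders imprisoned:        {share_of_offenders:.0%}")
```

The two percentages come apart as soon as the one-rape-per-rapist assumption fails, which is exactly what "rapists tend to be serial criminals" tells us to expect.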
- "97 of Every 100 Rapists Receive No Punishment, RAINN Analysis Shows", Rape, Abuse & Incest National Network. This page has a graph similar to the one above but with slightly different numbers for some unexplained reason.
- "Reporting Rates", Rape, Abuse & Incest National Network.
Resource: Charts & Graphs, 1/13/2013
Acknowledgment: Thanks to Ryan J for reporting this chart.
Correction (12/29/2014): I've rewritten the last paragraph to correct a confusing and misleadingly worded example.