Halloween Christmas Puzzle
Did you open your Christmas presents this morning? If not, you should have, because Oct. 31 = Dec. 25. Can you see why?
Book Club: Wrong, Chapter 1: Some Expert Observations
…[L]et's start off this chapter by looking at some of the problems that cause less formal…experts to get off track. In many ways these problems are more obvious and easier to understand than some of the others, and they'll give us a good foundation for examining the additional components of misleading expert advice later on. That, in turn, will give us clues as to how to avoid some of the advice that's likely to turn out to be bogus. (P. 18)
For this and subsequent installments of the Book Club, you'll have to get your hands on a hard copy if you want to read along. Much of the chapter consists of journalistic anecdotes about supposed experts giving conflicting advice about such things as CPR (pp. 14-15), bad predictions about the real estate market just before the bubble burst (pp. 15-16), and mistaken evaluations of football players (pp. 18-19).
Such anecdotes are interesting, but there's some danger that their vividness will lead some people to overestimate the likelihood of such errors―this is the anecdotal fallacy. If you're naive enough to think that "experts" are never wrong, then these examples show otherwise, but if you only think that they're seldom wrong, or usually right, then the anecdotes prove nothing. I hesitate to suggest that Freedman is attacking a straw man, because I'm unsure of the point of the examples.
In the Introduction, as we saw in the first installment, Freedman introduced the notion of a "mass" expert, that is, the sort of "expert" who is called upon by the mass media to make pronouncements and prognostications. In this chapter, he introduces a distinction between "formal" and "informal" experts (p. 17). It's not entirely clear what an "informal" expert is supposed to be, but the contrast seems to be with scientists as "formal" experts. I assume that anyone with an advanced degree in a subject would count as a "formal" expert in that subject, so that I'm a formal expert in philosophy. What does that make Freedman? Given that there is no advanced degree in "wrongology", is he an informal expert in it?
According to Freedman, one reason for the failures of informal experts is a lack of good data (p. 20). For instance, in the case of picking football players, a lot of intangibles that are difficult to quantify affect how well an athlete will play. That's probably right, but Freedman does a poor job of making the case that football scouts are worse at picking players than, say, I would be. Freedman claims, no doubt correctly, that such informal experts are exercising judgment. However, a couple of prominent failures don't show that their judgment would be no better than a non-expert's.
In the latter part of the chapter, Freedman lists six "traps" that may lead informal experts to be wrong (now we're getting somewhere!):
- Bias and Corruption (pp. 28-29)
Freedman suggests that this is a worse problem for informal experts. Some of the people who touted the real estate bubble would seem to fall here, since they had financial interests in a rising market.
- Irrational Thinking (pp. 29-31)
Freedman specifically mentions The Secret as an example (p. 31). Now, I certainly agree that The Secret is a case of irrational thinking (see the Resource, linked below), but it makes me wonder about Freedman's notion of an "informal" expert. As I mentioned in my review of The Secret, as far as I can tell most of the co-authors of that book are not experts in anything, and none of them is an expert on the topic of the book, since there are no experts on "The Secret" any more than there are experts on unicorns. If Freedman's point is simply that a lot of the people who are informally promoted in the media as "experts" or "authorities" are in fact no such thing, then I couldn't agree more.
- Pandering to the Audience (p. 31)
This seems to be a case of wishful thinking, that is, telling people what they want to hear, which I suspect is a problem not only for "informal" experts but for many real ones. For instance, a completely honest physician might have to worry about losing patients to an unreasonably optimistic one who promises more than can be delivered.
- Ineptitude (pp. 32-34)
What reason is there to think that experts―even "informal" ones―are any less "ept" than the rest of us?
- Lack of Oversight (p. 34)
This is presumably worse for informal experts, and may be part of what makes them "informal".
- Automaticity (pp. 34-36)
Freedman seems to be referring here to pattern recognition, the development of which is usually part of what makes someone a genuine expert. To take an example from my own area of expertise, part of becoming an expert in logic is developing the ability to recognize patterns in arguments. Valid arguments exemplify patterns such as modus ponens, and fallacious ones exhibit different patterns such as affirming the consequent. Learning to quickly and easily recognize these patterns is part of logical expertise.
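This sort of pattern recognition can even be mimicked crudely in code. The following toy sketch is my own illustration, not anything from the book: given the standing premise "if P then Q", it classifies a second premise and a conclusion as one of the two argument patterns just mentioned.

```python
# Toy classifier for two argument patterns, given the standing
# conditional premise "if P then Q" (illustration only).
def classify(premise2: str, conclusion: str, p: str = "P", q: str = "Q") -> str:
    # Second premise affirms the antecedent: P, therefore Q.
    if premise2 == p and conclusion == q:
        return "modus ponens (valid)"
    # Second premise affirms the consequent: Q, therefore P.
    if premise2 == q and conclusion == p:
        return "affirming the consequent (fallacy)"
    return "unrecognized pattern"

print(classify("P", "Q"))  # modus ponens (valid)
print(classify("Q", "P"))  # affirming the consequent (fallacy)
```

Of course, real logical expertise recognizes such patterns in messy natural language, not in neatly labeled symbols, which is exactly what makes the expert's automaticity valuable.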
Now, the kind of "automaticity" developed by experts may pose a danger not faced by a non-expert, namely, "blinking" instead of thinking. For instance, a physician may rapidly diagnose a patient as having a common disease when, in fact, the patient has an uncommon disease with somewhat similar symptoms. Non-experts might actually be more likely to correctly catch such a rare disease because they have to diagnose by a painstaking, step-by-step process of matching symptoms to lists of those associated with different diseases.
I think that this is a real problem for experts, formal and informal alike, but it's one that can be compensated for. Experts who are aware of the problem can double-check whenever there is a possibility of confusing similar patterns. Thus, one step in dealing with it is to raise awareness of the problem among experts.
In the next installment, we'll take a look at the "formal" experts, namely, scientists.
Resource: Book Review: The Secret, Fallacy Files Book Shelf
Chapters 2 & 5: The Trouble with Scientists, Parts 1 & 2
Check it Out
David Freedman, author of our book club selection Wrong, has a very interesting article in the current issue of The Atlantic. The article is a profile of John Ioannidis, whom we discussed in the first book club installment. He's the researcher who claims that most published medical research is wrong. By the way, the next installment of the book club should appear soon.
Maybe I'm just an optimist, but I find the following results of Ioannidis' research reassuring, rather than discouraging:
…80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.
In other words, 90% of large randomized trials don't turn out to be wrong! I'm not sure whether that means they turn out to be right, but at least it means that they haven't been contradicted by later research. Also, 75% of smaller randomized trials are not later contradicted! The research glass is not a quarter empty, it's three-quarters full!
I'm not surprised that most non-randomized studies turn out to be wrong, since these must be mostly observational studies. At best, such studies generate hypotheses for further testing, and never should be treated as definitive. Moreover, as I mentioned in the book club installment, even a large randomized study ought to be viewed with some skepticism until it's either replicated or its conclusions supported by independent evidence.
In science, the confidence level that is generally considered significant is 95%, that is, there's no more than a 5% probability that the result is due to chance alone. In other words, as many as one out of twenty statistically significant results may be wrong simply because of bad luck, and we have no way of knowing from the studies themselves which are the unlucky ones.
Moreover, due to publication bias―the fact that studies that show statistically significant results are more likely to be published than those that don't―the actual percentage of published studies that are wrong just by chance can be expected to be higher. In effect, we're seeing only the tip of the iceberg of research studies, as there's an unknown quantity of unpublished studies that failed to reach statistical significance. For these reasons and others, it's to be expected that a lot of published research is wrong.
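For readers who like to see the arithmetic, here's a toy calculation of my own showing how publication bias inflates the share of wrong results among published studies. The numbers are illustrative assumptions, not Ioannidis' figures: suppose only 10% of tested hypotheses are true, tests use a 5% significance level with 80% power, and only "positive" results get published.

```python
# Illustrative assumptions (not from the article):
alpha, power, p_true = 0.05, 0.8, 0.10

# Published positives come from two sources:
true_positives = p_true * power            # true hypotheses correctly detected
false_positives = (1 - p_true) * alpha     # false hypotheses "detected" by luck

published = true_positives + false_positives
frac_wrong = false_positives / published
print(round(frac_wrong, 2))  # 0.36
```

On these assumptions, about 36% of published positive results are wrong, even though each individual test had only a 5% chance of a fluke. The 5% error rate applies to tests of false hypotheses, not to the published literature, which is filtered by significance.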
Apparently, I'm not alone in my optimistic assessment of Ioannidis’ research:
David Gorski…noted in his prominent medical blog that when he presented Ioannidis’s paper on highly cited research at a professional meeting, "not a single one of my surgical colleagues was the least bit surprised or disturbed by its findings."
Ioannidis' final comment sums it up well:
"Science is a noble endeavor, but it’s also a low-yield endeavor," he says. "I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact."
Source: David H. Freedman, "Lies, Damned Lies, and Medical Science", The Atlantic, 11/2010
Resource: Book Club: Wrong, Introduction, 9/16/2010
The best contextomy for a movie ad…ever
Beware the ellipsis! Those three little dots can conceal a lot. For instance, consider the following blurb from an ad for the new Bruce Willis vehicle, "Red":
"THE BEST CAST FOR AN ACTION COMEDY…EVER."
Roger Moore, ORLANDO SENTINEL
What's hidden behind those three dots? Of course, some blurbs for movies leave something out and don't even bother to mark the omission, so we should be grateful for the ellipsis since it warns us that something―perhaps something important―is missing. Here's the full context of the quote:
Oh, that "Red" was the giddy romp it might have been, it promises to be or it thinks it is. It has the best cast of any action comedy movie, maybe ever: four-Oscar-winners plus Bruce Willis. But despite that and a winning concept…director Robert "Flightplan" Schwentke never lets this one achieve takeoff.
So, is this the best contextomy for a movie ad ever? Maybe!
- Ad for "Red", Dallas Morning News, 10/15/2010
- Roger Moore, "Movie Review: Red", Orlando Sentinel, 10/13/2010
Title: Sex, Drugs, and Body Counts
Subtitle: The Politics of Numbers in Global Crime and Conflict
Editors: Peter Andreas & Kelly M. Greenhill
Date of Publication: 2010
As the chapters in this volume demonstrate, issues such as sex trafficking, terrorist financing, drug smuggling, war fatalities, and refugee flows are especially susceptible to distortion and manipulation, given that they not only present particularly severe measurement challenges but are highly emotive "hot button" issues that can inhibit critical sensibilities. The advantage of focusing on such extreme, "easy" cases is that the power and pathologies of politicized numbers are most starkly revealed. Though the politics of numbers may be subtler in other policy realms, this does not necessarily make it less influential and consequential. In this regard, the politics of numbers in the realms of global crime and conflict is distinct but not unique.
Review: This book is an anthology of articles about the political role played by numbers in international relations, specifically, how numbers are inflated or downsized for political reasons. The "sex" of the title refers to human trafficking across international borders for purposes of prostitution, "drugs" to smuggling illegal drugs, and "body counts" to counting civilian casualties of war or alleged genocides. Andreas and Greenhill (A&G) are political scientists, as are five of the other authors, with a couple of sociologists and anthropologists thrown in to liven things up.
The book focuses on matters which are by their natures difficult to count, either because they are illegal or because of the dangers and chaos caused by war and genocide. A further difficulty is that so many of those involved in counting bodies dead or smuggled across a border have interests in either downplaying or exaggerating the numbers.
There are eleven separate articles in this anthology, counting both an introduction and a conclusion by the editors. Among the other selections, four focus on measuring the trafficking of either people or drugs, three concern counting casualties of armed conflicts, and a couple are on other topics. This review will focus on the editors' introduction and concluding chapter, which give an overview of "the politics of numbers". Readers who want to know the details about the counting of the death tolls in Kosovo, Bosnia, or Darfur will want to read chapters 6, 7, or 8, respectively.
Given the difficulty of counting illegal activities or casualties in wartime, there is a sort of information vacuum that is easily filled by the first number that happens along. This number may well be a wild guess, but once it's filled the vacuum it can be almost impossible to dislodge. Like one of those movie monsters that keeps coming back after it's been "killed", wrong numbers can survive even repeated debunkings. A&G attribute this stubborn persistence to two psychological effects:
- Anchoring (p. 17): This is the tendency for people to calibrate their estimates based on whatever is the first number they hear, even when that number is arbitrary, and even when they know that it is arbitrary, or when it is later known to be in error. Thus, whoever is first out of the box with an estimate of a quantity is likely to affect public opinion. For instance, a low-ball first estimate is likely to lead the public to underestimate the quantity, even if the estimate is later increased. In contrast, a high-end estimate will probably bias public opinion towards a larger quantity, despite subsequent downward corrections.
- Confirmation bias (p. 18): This is the psychological tendency for people to pay attention only to evidence that confirms their prejudices, ignoring or downplaying counter-evidence. This is probably one reason why it is so difficult to correct misleading numbers once they gain public currency. An institutionalized version of this problem is that, while many newspapers and magazines issue corrections, they seldom make corrections as prominent as the original errors and, as a result, many readers will never see them.
A recent example of the kind of problem with numbers that this volume discusses is the set of estimates made of the amount of oil spilled into the Gulf of Mexico this summer by BP's Deepwater Horizon deep-sea well. Of course, this case is too recent to be discussed in the book, and it doesn't concern sex, drugs, or body counts. However, in common with the book's examples, it was difficult to make an accurate estimate of the amount of oil being leaked due to the extreme depth of the well. A national commission on the oil spill has recently released a draft working paper that discusses these estimates:
The federal government's estimates of the amount of oil flowing into and later remaining in the Gulf of Mexico…were the source of significant controversy, which undermined public confidence in the federal government's response to the spill. By initially underestimating the amount of oil flow and then, at the end of the summer, appearing to underestimate the amount of oil remaining in the Gulf, the federal government created the impression that it was either not fully competent to handle the spill or not fully candid with the American people about the scope of the problem.
Loss of credibility, which A&G call "blowback" (p. 273), is not the only bad consequence of politicized numbers. According to A&G, incorrect estimates can lead to misallocation (p. 268) and misapplication of resources (p. 269), and mistaken evaluations of the effectiveness of policies (p. 270).
While the writing in Sex, Drugs, and Body Counts is far from the worst you can find from academic writers, the book could have used some serious editing. I didn't come across any impenetrable sentences, but there's a lot of mind-numbing repetition. Speaking of numbers, everything seems to come in twos or threes, for instance:
Accessing this micro-politics necessarily requires careful tracing of policy discourse and deliberations, bureaucratic practices and routines, the activities and incentives of intergovernmental and nongovernmental organizations. (P. 276)
An editor armed with a blue pencil could reduce this to: "Accessing this micro-politics necessarily requires careful tracing of policy deliberations, bureaucratic practices, and the activities and incentives of organizations." Any loss of nuance between word pairs such as "discourse"/"deliberations" and "practices"/"routines" is more than made up for by the gain in brevity. Moreover, "intergovernmental and nongovernmental" seems to cover pretty much all organizations, though I suppose one might want to distinguish governmental and intragovernmental organizations as well. However, the micro-politics in question surely takes place in organizations of all types, so the qualifiers are not only unnecessary but misleading. This looks like padding, and the two "and"s in the last clause seem to have misled the authors into thinking that no "and" was needed to introduce it―I have supplied it in the edited version, in case you didn't notice. This, of course, is a horrid example, and not representative of the writing in the book, which is probably better than average.
There's also some cringeworthy jargon, though again probably less than you'll find in many other social science books. For instance, scholars don't research, they do "knowledge production", which has the virtue of being one word longer. When producing knowledge scholars don't ask or answer questions, they "interrogate" them (p. 9). I just hope that they don't engage in "enhanced" interrogations of questions! This is a type of doublespeak that imitates a technical vocabulary, thus giving the false impression of being more scientific than it is, while tending to frighten outsiders away. This last effect is unfortunate, since just about everyone would gain from some lessons in skepticism about numbers in the news.
To sum up, if you're a scholar interested in the politics of numbers in the specific areas covered by the book, or interested in one or more of the specific cases addressed by individual chapters, then I probably don't need to recommend it to you. If you're a layperson or scholar in an unrelated field, then you'll probably do better with one or more of Joel Best's books, which cover the same general territory more accessibly.
Source: "The Amount and Fate of the Oil", National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling, 10/6/2010
Blurb Watch: Legend of the Guardians
Here's a blurb from an ad for the new movie Legend of the Guardians:
"RIVALS 'AVATAR' FOR PURE ARTISTRY."
Bill Zwecker, CHICAGO SUN-TIMES
For comparison, here is the full sentence from which the blurb was taken (emphasis added): "At times, it even rivals 'Avatar' for pure artistry."
This is an example of one of the fallacies first identified by Aristotle, namely, secundum quid, which is the fallacy of dropping or ignoring qualifications. In this case, the qualification "at times" was silently dropped, thus changing the meaning of the quote.
- Ad for Legend of the Guardians, Dallas Morning News, 10/8/2010
- Bill Zwecker, "Taking wing against true evil", Chicago Sun-Times, 9/24/2010
How about mild longing while they're at it?
"Oct." is an abbreviation for "octal", that is, base 8, while "Dec." is an abbreviation for "decimal", our familiar base 10. In base 8 notation, 31 = 3×8 + 1×1 = 25, in decimal notation. So, octal 31 = decimal 25. Merry Christmas!
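For the skeptical, the arithmetic can be checked in one line of Python, whose built-in int accepts a base argument for reading numerals in other bases:

```python
# Read the string "31" as an octal (base 8) numeral.
octal_31 = int("31", 8)      # 3*8 + 1 = 25
print(octal_31)              # 25
print(octal_31 == 25)        # True: Oct. 31 = Dec. 25
```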
Source: Martin Gardner, Mathematical Magic Show (1978), pp. 72 & 79-80.