
WEBLOG

April 22nd, 2019 (Permalink)

Junk Headline

What's wrong with the following headline?

Poll: 39 percent say Mueller failed to prove Trump campaign did not collude with Russia1

Let's start with the poll question. Why was this question asked when it was? The poll2 ended before the report on the Mueller investigation was released, so no member of the general public had been able to read it at the time of polling3. Why ask people for their opinions on the results of an investigation before they could have read the report?

Here's another recent headline that's relevant:

Who Really Cares About Mueller Report? New Poll Finds One-Third Of Voters Don't Even Know Who William Barr Is4

This is a report on the same poll as the previous headline, but focusing on a different question that shows just how little many of those polled know.

Here's the poll question whose results are reported in the first headline: "Do you think the Mueller investigation proves Donald Trump's campaign did not collude with the Russians during the 2016 campaign, or does it not prove that?5" This is a confusing question given that the choices offered for answering it are: "Yes", "No", and "Don't know".

At this point, we need to look at some erotetic logic, that is, the logic of questions and answers. I'll try to keep it as brief and non-technical as possible, but some technical language is unavoidable. To start with, the poll question is a disjunctive one, that is, one of the form: "Is P or Q the case?" Disjunctive questions are not yes-or-no questions since to answer "yes" would merely affirm that one of the disjuncts is the case; answering "no" rejects both disjuncts, thus denying the presupposition of the question.

For example, suppose you feel a draft and ask: "Is the door or the window open?" An answer of simply "yes" is likely to be annoying since all that the answer tells you is that one or the other is indeed open, but you want to know which. In contrast, if the answer is "no", you'd assume that neither door nor window was open. Instead, a direct answer to a disjunctive question is any one of the disjuncts, so to answer the question directly is to say either: "The door is open" or "the window is open"6.
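The difference between a bare "yes" and a direct answer can be made concrete in a few lines of code. This is a toy model of the door/window example (my own sketch for illustration; the state of the room is an assumption, not part of the example above):

```python
# Toy model: a direct answer to "Is the door or the window open?" names a
# true disjunct, while a bare "yes" reports only that the disjunction holds.
state = {"door": False, "window": True}  # assumed state of the room

bare_yes = any(state.values())  # "yes": something is open, but which?
direct_answers = [thing for thing, is_open in state.items() if is_open]

print(bare_yes)        # True
print(direct_answers)  # ['window']
```

The bare "yes" collapses all the useful information into a single truth value, which is exactly why it annoys the questioner.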

An additional point about the poll question is that the second disjunct is the negation of the first, so that the question has the more specific logical form: "Is P or not-P the case?" To answer this simply "yes" or "no" would be either to truly affirm a tautology or falsely deny it.
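The tautological character of "P or not-P" can be checked mechanically by exhausting the truth table. Here is a minimal sketch in Python (my own illustration, not part of the poll analysis):

```python
def is_tautology(formula):
    """True if a one-variable propositional formula holds under every assignment."""
    return all(formula(p) for p in (True, False))

# Answering "yes" to "Is P or not-P the case?" affirms the disjunction itself,
# which is a tautology, so the answer conveys no information:
assert is_tautology(lambda p: p or not p)

# Answering "no" denies that tautology, which is false under every assignment:
assert not any(not (p or not p) for p in (True, False))
```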

To adapt the previous example, suppose that you ask: "Is the door open or is it closed?" If the answer comes back "yes", you'll probably be angry at the answerer since, again, you want to know which. In this sort of situation, people usually assume that a "yes" answer means that the first, affirmative disjunct is the case, and a "no" means that the second, negative one is.

I assume that most of those who answered the poll question assumed that a "yes" answer affirmed the first disjunct and a "no" affirmed the second, which is equivalent to denying the first. The 39% referred to in the first headline, above, are those who answered "no" to the poll question5.

So, probably most of those who took the poll were able to decipher this question, despite its logical problems. Nonetheless, potential confusion could have been easily avoided simply by rewording the three possible answers to something like: "Yes, it proved no collusion", "No, it did not prove no collusion", and "I don't know".

I did a double take when I first read the headline because of its two negations: "not" and "failed". They are not a double negation that simply cancels out, since "failed" has wide scope and "not" has narrow scope. In other words, "not" negates only the predicate "colluded with Russia", whereas "failed" negates the whole claim that "Mueller proved Trump campaign did not collude with Russia".
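The scope point, and why the two negations don't cancel, can be formalized in a toy model (my own sketch; the three outcome labels are hypothetical, chosen only to make the logic visible):

```python
# "failed" negates the whole claim "proved that the campaign did not collude",
# while "not" negates only "colluded". Represent the report's finding as one
# of three hypothetical outcomes.
def headline_true(finding: str) -> bool:
    """True when Mueller failed to prove non-collusion."""
    return finding != "proved_no_collusion"

# If the two negations simply cancelled, the headline would assert that
# Mueller proved collusion; but the headline is also true when the report
# proves neither thing, so the meanings differ:
assert headline_true("proved_collusion")
assert headline_true("proved_neither")
assert not headline_true("proved_no_collusion")
```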

As a general rule, one should use as few negations as possible―either one or none―since multiple negations are hard to process cognitively. One way to eliminate the narrow-scope negation is to realize that to prove no collusion is to exonerate, so the headline could have been reworded as follows:

Poll: 39 percent say Mueller failed to exonerate the Trump campaign of collusion with Russia

This is easier to understand, but yet another problem lurks here, namely, the word "failed". This word can't be blamed on an editor, since the article under the headline begins: "Almost 4 in 10 Americans think special counsel Robert Mueller's investigation failed to prove President Trump's campaign did not collude with Russia….1" Nor are the pollsters to blame for the occurrence of the word "fail" in the headline since, as we have seen, the poll question did not use that word.

You cannot "fail" at something unless you try to do it. For instance, I did not fail to climb Mount Everest since I never even tried. What reason do we have for thinking that Mueller tried to exonerate the Trump campaign? Since when was it his job to prove that Trump did not collude with Russia? Did the writer of the article presume that the Trump campaign was guilty of collusion and that Mueller's job was to prove its innocence? What happened to the presumption of innocence7?

These considerations lead to a final revision of the offending headline:

Poll: 39 percent say Mueller did not exonerate the Trump campaign of collusion with Russia

The news media like polls because they are manufactured news: instead of waiting for something to happen, just commission a poll and, no matter what the results, you get instant news. In this case, the poll question was premature and badly worded, and the article shouldn't have focused on it. Moreover, the article gives the false impression that the purpose of the Mueller investigation was to exonerate the Trump campaign.

"Fake news" is "news" that reports things that aren't true, so this isn't fake news. Rather, it's junk news―the journalistic equivalent of junk food: fast, cheap, and low in nutrition.


Notes:

  1. Chris Mills Rodrigo, "Poll: 39 percent say Mueller failed to prove Trump campaign did not collude with Russia", The Hill, 4/18/2019.
  2. "Fox News Poll", Fox News, 4/18/2019.
  3. The first sentence of the story under the headline reads, in part: "…according to a Fox News poll released a day before a redacted version of the report was slated to be made public."
  4. Daniel Moritz-Rabson, "Who Really Cares About Mueller Report? New Poll Finds One-Third Of Voters Don't Even Know Who William Barr Is", Newsweek, 4/18/2019. More precisely: 30%; see question 13 in the poll report, linked to in note 2, above.
  5. See question 23 in the poll report, linked to in note 2, above.
  6. Unless, of course, the correct answer is: "I don't know." If the question is based on an incorrect presupposition, the answer is: "Neither".
  7. The presumption of innocence is not just a legal or ethical presumption but a logical one as well: the burden of proof is not on President Trump or his administration to prove that it did not collaborate with Russia, but on those who make that accusation.

April 15th, 2019 (Permalink)

Charts & Graphs: The IRS Baked Two Pies for Tax Day

The two pie charts below appear near the end of the Internal Revenue Service's booklet of instructions1 for filling out and filing the 1040 tax form, which is due today. The charts serve no purpose in helping figure taxes, and most taxpayers probably ignore the page where the charts occur in the rush and hassle of preparing a return. Instead, the charts give information on what percentage of the government's income comes from income taxes, and what those taxes pay for.

[Charts: Income & Outlays]

These charts are three-dimensional pie charts which, as I've pointed out previously2, can be misleading. These are particularly bad examples of this type of chart since the angle from which the "pies" are portrayed is quite acute. This means that the areas of the pies are distorted, so that some look larger in comparison to others than they should. Furthermore, these are "deep dish" pies with thick edges that make the wedges facing the viewer appear to be larger than similarly sized wedges at the back of the pies, whose edges cannot be seen.

For instance, in the "Income" chart, the wedge for "Corporate income taxes" looks almost the same size as that next to it for "Borrowing to cover deficit", despite the fact that the former is only 7% of the pie while the latter is 17%. Similarly, the wedge for "Social security, Medicare,…" behind them represents almost a third of income, but appears to the eye to be about a quarter.

In the "Outlays" chart, the wedge facing the viewer representing "National defense,…" appears to be considerably more than 20% of the pie. In contrast, the largest segment, labelled "Social security, Medicare,…", is over twice the percentage of that for "National defense,…", but doesn't appear to be twice the size.

I don't suppose that these charts were intentionally constructed to mislead, especially since the percentage for each segment is included next to its label. However, there's not much point in baking a pie chart if you have to read a bunch of numbers in order not to be fooled. Instead, a couple of tables listing the parts and their percentages would have conveyed the same information with no risk of misleading the reader. If we must be served with pies, then the angle from which we view them should be close to 90°.
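For a sense of how large the distortion is, the central angle each wedge should subtend in an undistorted pie can be computed from the percentages quoted above. A short sketch (my own, using only the 7% and 17% figures mentioned for the "Income" chart):

```python
# Central angles for an undistorted pie (flat, viewed head-on): a slice's
# angle is directly proportional to its share of the whole.
shares = {"Corporate income taxes": 7, "Borrowing to cover deficit": 17}
for label, pct in shares.items():
    angle = pct * 360 / 100
    print(f"{label}: {pct}% -> {angle:.1f} degrees")
```

The 17% wedge should span 61.2°, well over twice the 25.2° of the 7% wedge, yet in the tilted, thick-edged chart the two look nearly the same size.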


Notes:

  1. P. 112. See also: "Major Categories of Federal Income and Outlays for Fiscal Year 2017", Internal Revenue Service, accessed: 4/14/2019.
