WEBLOG

Recommended Reading
November 30th, 2021 (Permalink)

Acknowledging Reality & Living by Lies


Notes:

  1. Chris Conley, "Kenosha damage estimate: $50-million", WHBL, 9/10/2020
  2. Here's the suppressed article: Nellie Bowles, "Businesses Trying to Rebound After Unrest Face a Challenge: Not Enough Insurance", The New York Times, 11/9/2020

Disclaimer: I don't necessarily agree with everything in these articles, but I think they're worth reading as a whole. In abridging them, I have sometimes changed the paragraphing and rearranged the order of the excerpts in order to emphasize points. I have also de-capitalized the word "black" in the excerpts from McWhorter's article, since capitalizing "black" but not "white" is itself racist newspeak. Either both should be capitalized or neither; I choose neither, since that's the way it was done until very recently. I wonder whether the capitalization was McWhorter's choice or that of the editors of The New York Times. If the latter, it's further evidence for his thesis.


Puzzle
November 25th, 2021 (Permalink)

Thanksgiving Dinner at the New Logicians' Club

For Thanksgiving, the New Logicians' Club held a dinner party for its members. During dinner, the club played its usual truth-tellers and liars game in which every member was randomly assigned the role of a liar or a truth-teller, and was required to answer every direct question accordingly throughout the evening*.

I was seated at a table with three other members. The name tag of the one to my immediate left read "Euler". I always like to know whether I can trust what the other members say, so I asked Euler what the status of the three of them was, that is, which of them were liars and which were truth-tellers. However, he mumbled something inaudible because his mouth was full of turkey.

I turned to the first member to his left, whose name tag read "Frege", and repeated the question.

"At least one of the three of us is a liar", he replied.

I asked the same question of the last member, whose name tag read "Gödel".

"At least one of the three of us is a truth-teller", he answered.

Was what Euler said true or false?

Extra Credit: What were the first names of the three logicians?
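
For those who would rather check an answer than reason it out, the puzzle is small enough to verify by brute force. Here is a minimal Python sketch of my own (not part of the puzzle) that enumerates all eight possible assignments of roles to the three logicians and keeps only those consistent with both answers; whatever assignments survive settle Euler's status, and thus whether what he said was true or false.

    from itertools import product

    # True = truth-teller, False = liar.
    for euler, frege, goedel in product([True, False], repeat=3):
        trio = (euler, frege, goedel)
        frege_says = not all(trio)   # "At least one of the three of us is a liar."
        goedel_says = any(trio)      # "At least one of the three of us is a truth-teller."
        # A truth-teller's answer must be true; a liar's must be false.
        if frege_says == frege and goedel_says == goedel:
            for name, role in zip(("Euler", "Frege", "Goedel"), trio):
                print(name, "is a", "truth-teller" if role else "liar")

Running the sketch prints every consistent assignment; there turns out to be exactly one.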


* For previous meetings of the club, see:

  1. A Meeting of the New Logicians' Club, 5/30/2021
  2. A Second Meeting of the New Logicians' Club, 7/4/2021
  3. Halloween at the New Logicians' Club, 10/31/2021

November 23rd, 2021 (Permalink)

"Everyone is entitled to his own opinion, but not his own facts."1

As is the case for most words of philosophical importance, "fact" is both vague and ambiguous. I argued in a previous entry in this series on fact-checking2 that a fact is not a belief or a type of statement, but a situation in the world. Yet, in an earlier entry, I defined it as a "true factual statement"3. Both are possible meanings of "fact", as well as simply "true statement"4. For the rest of this entry, I'll use the word "fact" in its "state-of-affairs" sense.

"Opinion" is not so ambiguous as "fact", but it is vague. An opinion is, of course, a belief, but opinions are usually distinguished from knowledge, and the borderline between opinion and knowledge is notoriously blurry.

Given that both "fact" and "opinion" are vague and ambiguous, it's no wonder that the distinction between them is fraught. It's generally agreed that fact checkers are not expected to check statements of opinion; for instance, in a section on checking opinion pieces, The Chicago Guide to Fact-Checking states: "…[I]n general, fact-checking the writer's opinion in a piece isn't necessary as long as the opinion is based on facts.5" Nonetheless, the guide has nothing to say about the difference between an opinion and the facts that it is based on. Similarly, The Fact Checker's Bible, discussing how to check an author's work, warns: "Be careful not to check the author's opinion6", which assumes that the checker already knows how to tell the difference between opinions and what should be checked. "Deciding what to leave unchecked and deciding what to check require equal care6," it concludes, but offers little guidance as to how to do so.

Perhaps the authors of both books assumed that the difference is obvious, or that the checker will have learned it elsewhere. I don't blame them for avoiding the task of explaining it, since it's difficult, but the distinction is crucial to fact checking and, in this entry, I'll try to clarify it.

As I'm using the words in this entry, facts and opinions belong to different categories of thing: the former to the objective, mind-independent world, and the latter to the subjective, mental world. This is why Moynihan's statement, used as the title of this entry, is true. We all have our own opinions, in that they belong to our private mental worlds, but nobody owns the facts, which are part of the physical world we share. This is also why we are able, at least some of the time, to agree on the facts despite differences of opinion on religion, ethics, politics, and the like. These days it seems to be getting harder to reach such agreement, due in part, in my opinion, to the loss of an understanding of the difference between fact and opinion, which is why this is an important issue for all of us, not just fact checkers. So, what fact checkers need to distinguish is not facts and opinions, which are deeply different, but statements of fact and statements of opinion, which are superficially similar.

What makes a claim a statement of fact rather than one of opinion is that there is objective evidence that it is true or that it is false. For instance, if I say that peanuts are legumes, I make a factual claim; but if I say that cashews taste better than peanuts, I state an opinion. That peanuts are legumes is true because of certain objective facts about them7, but the only evidence available that cashews taste better than peanuts is the subjective evidence of taste. Cashews may taste better to me, but worse to you. If you do not like the taste of cashews but do like that of peanuts, then there is no evidence that would convince you that the former actually do taste better than the latter. So, the claim itself is an expression of opinion, and not of fact.

Both factual statements and statements of opinion can be classified into subtypes, and thinking about those subtypes is how I've come to understand the difference between the two types of statement. So, let's look at some of them, starting with factual statements. I don't claim that the following classification is exhaustive, but these are at least some important types of statement of fact―keep in mind that such statements are not necessarily true:

You can see that what these different types of statement have in common is that there is some way in which their truth or falsity can be objectively established to the satisfaction of all reasonable inquirers. In contrast to factual statements, there is no way to establish the truth or falsity of statements of opinion to the satisfaction of all, which is part of what makes them opinion. Here are a few prominent types for comparison:

As is the case for most important philosophical distinctions, the difference between these two types of statement is not an absolute one, but one of degree. There is a continuum with logico-mathematical statements at one end and expressions of taste at the other, and all other statements fall somewhere in between.

The failure of professional fact checkers to understand this difference has led some to try checking statements of opinion with which they disagree. This is not just a waste of time; it's also a source of reputational damage. One of the charges made against them is that they are just pundits in disguise, and checking opinions is a sure way to confirm the charge. I predict that I will write a future entry in this series critiquing an example of a professional fact checker doing so, but that's just my opinion.11


Notes:

  1. This statement has a long history, but this was the wording used by Daniel Patrick Moynihan, to whom it is usually credited. For the full history, see: Garson O'Toole, "People Are Entitled To Their Own Opinions But Not To Their Own Facts", Quote Investigator, 3/17/2020.
  2. What is a Fact?, 4/29/2021.
  3. Fact Vs. Opinion, 6/22/2018.
  4. Monroe Beardsley used "fact" in this sense; see his Thinking Straight: A Guide for Readers & Writers (1950), p. 5.
  5. Brooke Borel, The Chicago Guide to Fact-Checking (2016), p. 56.
  6. Sarah Harrison Smith, The Fact Checker's Bible: A Guide to Getting it Right (2004), p. 54.
  7. Editors, "Peanut", Encyclopedia Britannica, accessed: 11/22/2021.
  8. Gödel's Incompleteness Theorem proved that there are arithmetical statements that are undecidable, that is, they can be neither proven nor disproven. These are rare types of exception to the factuality of mathematical claims. See: Melvin Henriksen, "What is Godel's Theorem?", Scientific American, 1/25/1999.
  9. Eric W. Weisstein, "Goldbach Conjecture", Wolfram MathWorld, accessed: 11/22/2021.
  10. See: "It’s Difficult to Make Predictions, Especially About the Future", Quote Investigator, 10/20/2013.
  11. I found the following article very helpful when researching this entry: John Corvino, "The Fact/Opinion Distinction", The Philosopher's Magazine, 3/4/2015. Of course, Corvino's opinion at the end is not acceptable to fact checkers, who must maintain the distinction between checkable factual statements and uncheckable statements of opinion.

New Book
November 12th, 2021 (Permalink)

First, the Bad News

Quote: "Bad News is a populist critique of American journalism. But I write this book from the Left, from a deep-seated dismay with rising inequality and the way the global economy has decimated the American working class, depriving them of the dignity of good jobs in a culture that sneers at them and their values. … Still, this is an optimistic book. Although I am deeply critical of the direction American journalism has taken, I am also convinced that it's not too late to change course. That's why the book is so tightly focused on the media powerhouses that have seized upon and further capitalized on this trend; it is they that set the tone for the rest of the industry. "1

Title: Bad News

Subtitle: How Woke Media Is Undermining Democracy

Comment: I don't like the word "woke". I assume that it started out as propaganda promoting the idea that the "woke" folks were somehow awake while the rest of us were sleeping, when the truth is the opposite. It's reminiscent of the slogan in 1984, "freedom is slavery"2, which is almost literally what the woke are trying to make the rest of us believe.

Like all euphemisms, however, "woke" seems to be losing its ability to conceal the reality it stands for. People are waking up to the fact that what is hidden behind the benign label is poison. For that reason, some of the wokeys are now declaring it the latest taboo word, at least for the rest of us3.

While in ordinary circumstances my distaste for the word and dislike of doublespeak would lead me to avoid using it, I'm going to use it at least for the rest of this entry. It seems to have lost its power to fool people, and it may irritate those trying to fool them.

Author: Batya Ungar-Sargon

Comment: I assume that this is the same Batya Ungar-Sargon whose article I recommended last month4. That article may have been an excerpt from the book, though it didn't say so. It described the ravages of the woke invasion of The New York Times (NYT), specifically, but the book appears to be more general in its treatment of the subject. In addition to writing the article and book, she's a journalist and deputy opinion editor for Newsweek. Other than the article and what I've read of the book, I'm not familiar with her work.

Date: 2021

Summary: I've only been able to read the introduction, first chapter, part of the second, and the brief epilogue. Moreover, the titles of subsequent chapters are not very revealing about their subjects. For this reason, I can't really summarize the book. However, I get the impression that its theme is how reporting went from a working-class job in the 19th and early 20th centuries to the profession of an upper-class, college-educated elite in the late 20th century and the current one. Specifically, after a mostly theoretical Introduction, the book starts with an historical chapter on the rise of the penny press in the 19th century, and the career of newspaperman Joseph Pulitzer, for whom the most prestigious journalism prizes are named.

This history is relevant because wokeness is mostly a disease of upper-class whites, though Ungar-Sargon didn't seem to be so clear on this fact in the article I recommended last month. My one complaint about that article was that she seemed to believe that the moral panic she described at the NYT had created a "social consensus" in favor of wokeness. However, as I showed, there is no such consensus in the country as a whole. Perhaps she meant only that there is now a consensus at the NYT, which may be true, but if so it's due at least partly to the purging of dissenters and the intimidation into silence of those who remain.

Comment: My main concern about wokeness is what it is doing to the institutions that serve an essential function in a democracy. Democracy is impossible without a largely free press, and practically every day now brings a new assault on that freedom. Moreover, the press needs to be free from both censorship and propaganda, since democracy cannot work even in theory if the public is kept ignorant and misinformed. This is how wokeness is "undermining democracy", to quote the book's subtitle. The NYT and other major newspapers, while never perfect, at least made an attempt in the past at reporting the news accurately, whereas they are increasingly becoming a "Ministry of Truth" for woke propaganda. Anti-social media such as Twitter, Facebook, and YouTube are also increasingly censoring users at the instigation of woke mobs.

One thing that Orwell did not foresee in 1984 is private businesses willingly offering their services as censors and propagandists to the woke mob and government bureaucrats. As you may remember, Winston Smith worked for the Ministry of Truth, largely as a censor, rewriting old newspapers to bring them into alignment with current propaganda. The ministry was, of course, a government agency, not a giant business like the NYT or Twitter. Nonetheless, there's now a small army of Winston Smiths laboring away to produce propaganda or to censor alternative sources of information. Moreover, many of them are amateurs, providing their "services" for free because they're woke.

The good news is that, while the NYT is pretty far gone, America still has a largely free press. The NYT is, of course, worrisome because it is the most prestigious American newspaper, and sets the example for much of the rest of the news media. However, awareness of the threat to freedom of thought and speech by wokeness is itself spreading. As mentioned above, the very fact that the word "woke" is changing from a euphemism to a dysphemism is a sign that people are catching on. So, like Ungar-Sargon, I am optimistic that we can still save democracy from wokeness.

The Blurbs: The book is favorably blurbed by, among others, Greg Lukianoff of the Foundation for Individual Rights in Education, and Jonathan Haidt.

Disclaimer: This is a new book and I haven't read it yet, so I can't review or recommend it. However, its topic interests me, and it may also interest Fallacy Files readers.


Notes:

  1. "Introduction", pp. 16-17.
  2. Irving Howe, editor, Orwell's Nineteen Eighty-Four: Text, Sources, Criticism (1963).
  3. See, for instance: Sam Sanders, "Opinion: It's Time To Put 'Woke' To Sleep", National Public Radio, 12/30/2018. This was written almost three years ago, and Sanders was unhappy that affluent white liberals―the very people who listen to NPR―were starting to use the word.
  4. Remembering the Sokal Hoax & Another Sign of The Times, 10/29/2021.

Poll Watch
November 1st, 2021 (Updated: 11/3/2021 & 11/6/2021) (Permalink)

The Trump Effect

According to the headline of a recent article, American polling is broken. The article itself doesn't use the word "broken", but it does raise a serious problem:

Pollsters are nearly a year into battling the four-alarm fire set by their general-election disaster in 2020―the biggest national-level polling miss in nearly half a century. One year ago, Democrats rolled into Election Day confident that they would see a relatively easy Joe Biden victory―remember the closing-stretch Quinnipiac poll showing him up five in Florida and the CNN-SSRS survey with a margin of six in North Carolina, or the Morning Consult poll with Biden up nine in Pennsylvania? And, of course, there was the USC projection of a 12-point national gap. Trump, of course, won Florida and North Carolina and came perilously close in the Keystone State….1

This problem currently looms large because tomorrow is election day for the next governor of Virginia, and polling averages show the race effectively tied: both the Five Thirty Eight (538)2 and the Real Clear Politics (RCP)3 averages show the Republican candidate Glenn Youngkin ahead of Democrat Terry McAuliffe, 538 by one percentage point, and RCP by a tenth of a point less than that. Because they're not themselves sample-based polls, such averages don't have a margin of sampling error, but that doesn't mean that they're perfectly precise. Both averages correctly predicted that Biden would win last year, but each had him winning the popular vote by a greater margin than he received, which was four-and-a-half percentage points4. The 538 average showed Biden winning by over eight percentage points5, and RCP by over seven6. So, a lead of a percentage point or slightly less is clearly within the error bars for either polling average, which means that the race is too close to call.

However, both averages were off in the same direction, namely, exaggerating the vote for Biden and short-changing Trump, which suggests a bias of three to four percentage points. As Gabriel Debenedetti, the author of the article, puts it: "Across the country, pollsters seemed to systematically undercount GOP support, despite the fact that they were trying very hard, after some issues in 2016, not to do that.1" If such a bias explains last year's "errors of unusual magnitude7", and if it carries over to a state's gubernatorial election, then Youngkin is not one point ahead of McAuliffe but four to five points ahead. Is this outside the error bars?

I wrote previously about two different groups studying what happened last year8, but according to Debenedetti:

…[T]he pollsters…are offering a new round of projections without ever having quite figured out what went wrong last year. … The industry set out to resolve all the ugly questions around the disaster of 2020 in the standard manner: with a big autopsy. In July, the American Association for Public Opinion Research [AAPOR] released its eagerly awaited (in the biz) report with the cooperation of a range of political and academic researchers. It first concluded that that year's national-level polling error―4.5 percentage points, on average…. And 2020's mistakes were different from 2016's, it continued:… Now…something bigger and scarier had happened. They just still couldn't conclude what, exactly, that was.

That discomfiting research wasn't the only effort of its kind. …[F]ive rival Democratic Party polling firms―including Joe Biden's―secretly joined forces two weeks after the election to find a diagnosis and solution. But when they revealed the collaboration in April, they still had no solid answers. One pollster involved in the Democratic effort told me the coalition was still deep in its experiments and didn't expect to know much at all for at least a few more months, "at a minimum."1

The studies seem to have ruled out some possible suspects, though:

The AAPOR report preempted one obvious question: No kind of interview―phone, text, online―clearly outperformed the others in terms of accuracy in 2020. It also ruled out the notions that 2020's issues were caused by anything from late-deciding voters who skewed the numbers to mis-weighted demographics when pollsters tried projecting the makeup of the electorate. It couldn't even be poll respondents' hesitance to admit they liked Trump. The Democratic groups, for their part, said they'd fixed their (relatively small) 2016 error by increasing representation for white non-college voters in their samples, and that their polling results in 2017 and 2018 races looked good―at least before 2020 spoiled that fix and revealed that their numbers were especially bad in more Republican areas of the country.1

This certainly sounds like some kind of anti-Republican bias, and only one suspect seems to remain:

The likely problem, in short, is that they simply aren't reaching a significant number of voters activated by Trump, perhaps because they don't know how to find them, or maybe because those voters mistrust and therefore ignore polls. …

"We would argue nonresponse is, by far, above and beyond, the biggest issue here," Johannes Fischer, the survey methodology lead for progressive polling outfit Data for Progress, told me this month. If nonresponse has always been a problem, its sheer scale is what's new now.1

Nonresponse bias9 arises when those who do not respond to a poll are different in some relevant way from those who do respond, which makes samples unrepresentative of the population. If the people who voted for Trump were more likely to be suspicious of polls and, therefore, less likely to respond to them than those who voted for Biden, the result would have been a bias in favor of Biden and against Trump. Ironically, such suspicions may have been a self-fulfilling prophecy: some voters thought polls were biased, then refused to participate, which resulted in biased polls.
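
To see how large an effect differential nonresponse can have, here is a small Python simulation of my own, with invented numbers: an electorate split exactly 50-50 between the two candidates, in which one candidate's supporters are somewhat less willing to answer pollsters.

    import random

    random.seed(2020)

    # A made-up electorate split exactly 50-50 between two candidates.
    electorate = ["Biden"] * 500_000 + ["Trump"] * 500_000

    # Invented response rates: Trump voters answer about 15% less often.
    response_rate = {"Biden": 0.046, "Trump": 0.040}

    sample = [v for v in electorate if random.random() < response_rate[v]]
    biden_share = 100 * sample.count("Biden") / len(sample)
    print(f"True Biden share: 50.0%; polled share: {biden_share:.1f}%")
    # Prints a polled share of roughly 53.5%: a three-to-four point
    # bias from nonresponse alone, though every respondent was honest.

A modest gap in willingness to respond is all it takes to reproduce a 2020-sized polling miss, with nobody lying to the pollsters.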

If the polls are "broken" due to nonresponse bias, did Trump break them? Did his public attacks on polls10 cause enough of his supporters to stop responding that the polls are now unreliable? If so, then there is a simple fix:

In talking to a range of nonpartisan and party-affiliated pollsters in recent weeks, I found that many dismissed, but laughed nervously about, the least scientifically sound idea of all, which unfortunately would have looked on the surface like a fix in 2020: just artificially slapping four extra points of support on Trump's side. …

Some pollsters, such as [Patrick] Murray, have argued that the nonresponse problem appears to be Trump-specific, since the errors were far more pronounced in 2016 and 2020 than in any of the intervening or proceeding years, including the 2018 midterms. "The evidence suggests that when Trump's name is not on the ballot itself, we don't have a problem with missing a portion of the electorate that doesn't want to talk to us," he said. "The question is: Do we treat the 2020 election as something entirely brand-new, so we have to add a four-point arbitrary margin on the model for Republicans? Or do we look at when Trump has not been on the ballot and our polling has basically been okay? My working hypothesis is that's probably the better path to take, which means our 2021 polling isn't that different than what it was in 2017."1

An alternative hypothesis is that the bias is not so Trump-specific that it can't transfer from Trump to other Republican candidates, especially those he supports. Trump is supporting Youngkin in Virginia11, and McAuliffe has been portraying his Republican opponent as a Trump "wannabe"12. Meanwhile, Youngkin has been trying to put distance between himself and Trump without alienating Trump supporters13. Could Trump's support, or McAuliffe's attacks, have linked Youngkin to Trump sufficiently to have affected the response rate to recent polls? We'll have to wait till tomorrow or thereafter to find out.

I'll update this entry after the election results from Virginia are in.


Update (11/3/2021): 95% of the ballots in Virginia have been counted and the results are Youngkin at 50.68% and McAuliffe at 48.55%14, a difference of just over two percentage points. The Virginia Department of Elections will continue to accept absentee ballots until two days from now, for some bizarre reason, and the official results won't be certified until the fifteenth of this month, but I'm not going to wait.

A two-point win for Youngkin is not quite a confirmation of the hypothetical Republican nonresponse effect, but it is in the right direction, though it suggests that the effect is smaller than 3-4 points. However, since it's only about one point from the aggregated poll results, which is surely within the error bars for such averages, it's just as much a confirmation of the polls, and evidence against the idea that they're "broken".

Surprisingly, the election for governor of New Jersey, which also took place yesterday, is turning into a more interesting test of the nonresponse effect than Virginia. The New Jersey election didn't receive as much attention as Virginia because it was widely assumed that it would be an easy win for the incumbent Democrat, Phil Murphy. The final RCP average for the state showed Murphy ahead by 7.8 percentage points15. I can't find a polling average from 538, but the last six polls it lists all showed Murphy ahead, with a lead ranging from four to eleven points16.

Despite all that, the current election results are too close to call, with Murphy at 49.66% and his Republican opponent, Jack Ciattarelli, at 49.59%, with 88% of precincts reporting17. So, even if Murphy does win, the results will be such that the hypothesized nonresponse effect of 3-4 points will be too small to account for them.

So, what can we conclude from this exercise? The polls are not broken in Virginia, but they are in New Jersey? Either there is no nonresponse effect, or if there is it's much smaller than hypothesized, or perhaps much bigger?

I'll leave it to you to decide, because I'm stumped.


Update (11/6/2021): An unusual mea culpa has been issued by Patrick Murray, the director of the Monmouth University Polling Institute:

I blew it. The final Monmouth University Poll margin did not provide an accurate picture of the state of the governor’s race. … I owe an apology…because inaccurate public polling can have an impact on fundraising and voter mobilization efforts. But most of all I owe an apology to the voters of New Jersey for information that was at the very least misleading.18

I mentioned in the above Update that the polls showed the incumbent governor ahead by a margin ranging from four to eleven percentage points, the high end of which was from Monmouth19. Murray writes: "Monmouth’s conservative estimate in this year’s New Jersey race was an 8-point win for Murphy, which is still far from the final margin18." The poll predicted the outcome of the race correctly, since it appears that Murphy won by a little over two percentage points17, and I suspect that many pollsters would have simply defended that as a hit rather than apologizing. Murray's admirable forthrightness in admitting error is rare among pollsters.

Despite apologizing at the beginning of his opinion piece, Murray spends a large part of it defending himself and Monmouth's other polling results. However, I'm more interested in what he thinks caused Monmouth's large error, as well as why the polls in general were so wrong about the closeness of that race:

Election polling is…prone to its fair share of misses if you focus only on the margins. For example, Monmouth’s polls four years ago nailed the New Jersey gubernatorial race but significantly underestimated Democratic performance in the Virginia contest. This year, our final polls provided a reasonable assessment of where the Virginia race was headed but missed the spike in Republican turnout in New Jersey.

The difference between public interest polls and election polls is that the latter violates the basic principles of survey sampling. For an election poll, we do not know exactly who will vote until after Election Day, so we have to create models of what we think the electorate could look like. Those models are not perfect. They classify a sizable number of people who do not cast ballots as “likely voters” and others who actually do turn out as being “unlikely.” These models have tended to work, though, because the errors balance out into a reasonable projection of what the overall electorate eventually looks like.

Monmouth’s track record with these models…has been generally accurate within the range of error inherent in election polling. However, the growing perception that polling is broken cannot be easily dismissed.18

Murray here is raising a problem distinct from the nonresponse bias that I discussed above, namely, that the failure of Monmouth's poll may be due to its "likely voter" models. So, it may not be that Trump supporters, conservatives, or Republican voters are failing to respond to polls, but that pollsters judge them unlikely to vote and weight the results accordingly. If this hypothesis is correct, then it ought to be possible to fix the pollsters' models, at least if there is a systematic change in the likelihood of voting.
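
Here is a toy Python illustration of my own, again with invented numbers, of how a mis-specified likely-voter model can produce this kind of error even when response rates are fine: a poll that is actually tied, but in which the model underrates Republican respondents' probability of voting.

    # A hypothetical poll of 1,000 respondents, evenly split, where each
    # respondent is weighted by a modeled probability of actually voting.
    # Suppose the model gives Republicans 0.5 when their true turnout
    # probability is 0.6, while rating Democrats correctly at 0.6.
    respondents = [("R", 0.5, 0.6)] * 500 + [("D", 0.6, 0.6)] * 500

    def share(poll, party, true_weights=False):
        i = 2 if true_weights else 1
        total = sum(r[i] for r in poll)
        return 100 * sum(r[i] for r in poll if r[0] == party) / total

    print(f"Modeled Republican share: {share(respondents, 'R'):.1f}%")        # 45.5%
    print(f"Actual Republican share:  {share(respondents, 'R', True):.1f}%")  # 50.0%

With these made-up weights, the model projects a nine-point Democratic win in a race that is actually tied, which is the sort of miss Murray describes, and one that could in principle be fixed by recalibrating the turnout model.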

As surprising as Murray's apology is, he even more surprisingly suggests that there ought to be fewer election polls. He may be free to make such a suggestion because he works for a university rather than a commercial polling company:

Some organizations have decided to opt-out of election polling altogether, including the venerable Gallup Poll and the highly regarded Pew Research Center, because it distracts from the contributions of their public interest polling. Other pollsters went AWOL this year. For instance, Quinnipiac has been a fixture during New Jersey and Virginia campaigns for decades but issued no polls in either state this year.

Perhaps that is a wise move. If we cannot be certain that these polling misses are anomalies then we have a responsibility to consider whether releasing horse race numbers in close proximity to an election is making a positive or negative contribution to the political discourse.

This is especially important now because the American republic is at an inflection point. Public trust in political institutions and our fundamental democratic processes is abysmal. Honest missteps get conflated with “fake news”—a charge that has hit election polls in recent years. … If election polling only serves to feed that cynicism, then it may be time to rethink the value of issuing horse race poll numbers as the electorate prepares to vote.18

Unlike Murray, I'm not particularly worried about the effects of polling failures on the public's trust in political institutions. Such institutions in general are currently doing a terrible job, and the public should recognize that fact. People should have less trust in polls than many seem to, partly because of those failures. One advantage of having so many polls is that it encourages skepticism about polling, since it's so easy to see how widely their results range. Of course, a healthy skepticism about polling is not the same as a dismissive cynicism, and I hope that the latter is not encouraged.

However, it would probably be better if there were fewer polls, because that might lead to less "horse race" coverage. The news media sponsor most polls because covering a campaign as if it were a race is dramatic and easy. No matter what a poll shows, it's considered newsworthy. So, it's unlikely that we'll see the end of election polling, since it allows the media to manufacture news rather than just sit around waiting for something to happen. A big benefit of fewer polls would be less of such lazy reporting, and perhaps more reporting on issues and checking of factual claims made by the candidates instead. One can always dream, anyway.


Notes:

  1. Gabriel Debenedetti, "Polling in America Is Still Broken. So Who Is Really Winning in Virginia?", New York Magazine, 10/28/2021.
  2. "Who's ahead in the Virginia governor's race?", Five Thirty Eight, accessed: 11/1/2021.
  3. "Virginia Governor―Youngkin vs. McAuliffe", Real Clear Politics, accessed: 11/1/2021.
  4. "Winning margins in the electoral and popular votes in United States presidential elections from 1789 to 2020", Statista, accessed: 10/30/2021.
  5. "Who's ahead in the national polls?", Five Thirty Eight, accessed: 10/30/2021.
  6. "National General Election Polls", Real Clear Politics, accessed: 10/30/2021.
  7. See: Errors of Unusual Magnitude, 7/19/2021.
  8. See the previous note and: What Biased Last Year's Polls?, 4/27/2021.
  9. Sheldon R. Gawiser & G. Evans Witt, A Journalist's Guide to Public Opinion Polls (1994), pp. 92-95.
  10. Lindsey Ellefson, "Trump Admits He Calls Polls ‘Fake’ When They Don’t Favor Him (Video)", The Wrap, 7/12/2021.
  11. Jill Colvin, "Trump Plans Last Minute Tele-Rally for Virginia's Youngkin", Associated Press, 10/28/2021.
  12. Aila Slisco, "After Larry Elder's Defeat, Terry McAuliffe Tries to Paint Glenn Youngkin as 'Trump Wannabe'", Newsweek, 9/16/2021.
  13. Darragh Roche, "Glenn Youngkin Keeps Distance From Unpopular Donald Trump in Virginia", Newsweek, 10/15/2021.
  14. "2021 November General", Virginia Department of Elections, 11/3/2021.
  15. "New Jersey Governor―Ciattarelli vs. Murphy", Real Clear Politics, accessed: 11/3/2021.
  16. "Latest Polls", Five Thirty Eight, 11/2/2021.
  17. "New Jersey Election Results", The New York Times, accessed: 11/3/2021.
  18. Patrick Murray, "Pollster: ‘I blew it.’ Maybe it’s time to get rid of election polls.", NJ, 11/5/2021.
  19. "Murphy Maintains Lead", Monmouth University Polling Institute, 10/27/2021.
