WEBLOG
November 12th, 2021 (Permalink)
First, the Bad News
Quote: "Bad News is a populist critique of American journalism. But I write this book from the Left, from a deep-seated dismay with rising inequality and the way the global economy has decimated the American working class, depriving them of the dignity of good jobs in a culture that sneers at them and their values. … Still, this is an optimistic book. Although I am deeply critical of the direction American journalism has taken, I am also convinced that it's not too late to change course. That's why the book is so tightly focused on the media powerhouses that have seized upon and further capitalized on this trend; it is they that set the tone for the rest of the industry."1
Title: Bad News
Subtitle: How Woke Media Is Undermining Democracy
Comment: I don't like the word "woke". I assume that it started out as propaganda promoting the idea that the "woke" folks were somehow awake while the rest of us were sleeping, when the truth is the opposite. It's reminiscent of the slogan in 1984, "freedom is slavery"2, which is almost literally what the woke are trying to make the rest of us believe.
Like all euphemisms, however, "woke" seems to be losing its ability to conceal the reality it stands for. People are waking up to the fact that what is hidden behind the benign label is poison. For that reason, some of the wokeys are now declaring it the latest taboo word, at least for the rest of us3.
While in ordinary circumstances my distaste for the word and dislike of doublespeak would lead me to avoid using it, I'm going to use it at least for the rest of this entry. It seems to have lost its power to fool people, and it may irritate those trying to fool them.
Author: Batya Ungar-Sargon
Comment: I assume that this is the same Batya Ungar-Sargon whose article I recommended last month4. That article may have been an excerpt from the book, though it didn't say so. It described the ravages of the woke invasion of The New York Times (NYT), specifically, but the book appears to be more general in its treatment of the subject. In addition to writing the article and book, she's a journalist and deputy opinion editor for Newsweek. Other than the article and what I've read of the book, I'm not familiar with her work.
Date: 2021
Summary: I've only been able to read the introduction, first chapter, part of the second, and the brief epilogue. Moreover, the titles of subsequent chapters are not very revealing about their subjects. For this reason, I can't really summarize the book. However, I get the impression that its theme is how reporting went from a working-class job in the 19th and early 20th century to the profession of an upper-class, college-educated elite in the late 20th and current century. Specifically, after a mostly theoretical Introduction, the book starts with an historical chapter on the rise of the penny press in the nineteenth century, and the career of newspaperman Joseph Pulitzer, for whom the most prestigious journalism prizes are named.
This history is relevant because wokeness is mostly a disease of upper-class whites, though Ungar-Sargon didn't seem to be so clear on this fact in the article I recommended last month. My one complaint about that article was that she seemed to believe that the moral panic she described at the NYT had created a "social consensus" in favor of wokeness. However, as I showed, there is no such consensus in the country as a whole. Perhaps she meant only that there is now a consensus at the NYT, which may be true, but if so it's due at least partly to the purging of dissenters and the intimidation into silence of those who remain.
Comment: My main concern about wokeness is what it is doing to the institutions that serve an essential function in a democracy. Democracy is impossible without a largely free press, and practically every day now brings a new assault on that freedom. Moreover, the press needs to be free from both censorship and propaganda, since democracy cannot work even in theory if the public is kept ignorant and misinformed. This is how wokeness is "undermining democracy", to quote the book's subtitle. The NYT and other major newspapers, while never perfect, at least made an attempt in the past at reporting the news accurately, whereas they are increasingly becoming a "Ministry of Truth" for woke propaganda. Anti-social media such as Twitter, Facebook, and YouTube are also increasingly censoring users at the instigation of woke mobs.
One thing that Orwell did not foresee in 1984 is private businesses willingly offering their services as censors and propagandists to the woke mob and government bureaucrats. As you may remember, Winston Smith worked for the Ministry of Truth, largely as a censor, rewriting old newspapers to bring them into alignment with current propaganda. The ministry was, of course, a government agency, not a giant business like the NYT or Twitter. Nonetheless, there's now a small army of Winston Smiths laboring away to produce propaganda or to censor alternative sources of information. Moreover, many of them are amateurs, providing their "services" for free because they're woke.
The good news is that, while the NYT is pretty far gone, America still has a largely free press. The NYT is, of course, worrisome because it is the most prestigious American newspaper, and sets the example for much of the rest of the news media. However, awareness of the threat to freedom of thought and speech by wokeness is itself spreading. As mentioned above, the very fact that the word "woke" is changing from a euphemism to a dysphemism is a sign that people are catching on. So, like Ungar-Sargon, I am optimistic that we can still save democracy from wokeness.
The Blurbs: The book is favorably blurbed by, among others, Greg Lukianoff of the Foundation for Individual Rights in Education, and Jonathan Haidt.
Disclaimer: This is a new book and I haven't read it yet, so can't review or recommend it. However, its topic interests me, and may also interest Fallacy Files readers.
Notes:
1. "Introduction", pp. 16-17.
2. Irving Howe, editor, Orwell's Nineteen Eighty-Four: Text, Sources, Criticism (1963).
3. See, for instance: Sam Sanders, "Opinion: It's Time To Put 'Woke' To Sleep", National Public Radio, 12/30/2018. This was written almost three years ago, and Sanders was unhappy that affluent white liberals―the very people who listen to NPR―were starting to use the word.
4. Remembering the Sokal Hoax & Another Sign of The Times, 10/29/2021.
November 1st, 2021 (Updated: 11/3/2021 & 11/6/2021) (Permalink)
The Trump Effect
According to the headline of a recent article, American polling is broken. The article itself doesn't use the word "broken", but it does raise a serious problem:
Pollsters are nearly a year into battling the four-alarm fire set by their general-election disaster in 2020―the biggest national-level polling miss in nearly half a century. One year ago, Democrats rolled into Election Day confident that they would see a relatively easy Joe Biden victory―remember the closing-stretch Quinnipiac poll showing him up five in Florida and the CNN-SSRS survey with a margin of six in North Carolina, or the Morning Consult poll with Biden up nine in Pennsylvania? And, of course, there was the USC projection of a 12-point national gap. Trump, of course, won Florida and North Carolina and came perilously close in the Keystone State….1
This problem currently looms large because tomorrow is election day for the next governor of Virginia, and polling averages show the race essentially tied: both the Five Thirty Eight (538)2 and the Real Clear Politics (RCP)3 averages show the Republican candidate Glenn Youngkin ahead of Democrat Terry McAuliffe, 538 by one percentage point, and RCP by a tenth of a point less than that. Because they're not themselves sample-based polls, such averages don't have a margin of sampling error, but that doesn't mean that they're perfectly precise. Both averages correctly predicted that Biden would win last year, but each had him winning the popular vote by a greater margin than he received, which was four-and-a-half percentage points4. The 538 average showed Biden winning by over eight percentage points5, and RCP by over seven6. So, a lead of a percentage point or slightly less is clearly within the error bars for either polling average, which means that the race is too close to call.
However, both averages were off in the same direction, namely, exaggerating the vote for Biden and short-changing Trump, which suggests a bias of three to four percentage points. As Gabriel Debenedetti, the author of the article, puts it: "Across the country, pollsters seemed to systematically undercount GOP support, despite the fact that they were trying very hard, after some issues in 2016, not to do that.1" Assuming that such a bias explains last year's "errors of unusual magnitude7", and that it applies to a state's gubernatorial election, then Youngkin is not one point ahead of McAuliffe but four to five points ahead. Is this outside the error bars?
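The back-of-the-envelope adjustment above can be made explicit. A minimal sketch follows; the polling-average margins are approximate figures I've rounded from the "over eight" and "over seven" points cited above, not the exact final averages:

```python
# Approximate 2020 final polling-average margins (Biden minus Trump),
# in percentage points. Both are rough illustrative figures.
avg_margin_538 = 8.4   # "over eight" points
avg_margin_rcp = 7.2   # "over seven" points
actual_margin = 4.5    # Biden's actual popular-vote margin

# Implied pro-Democratic bias in each average:
bias_538 = avg_margin_538 - actual_margin   # about 3.9 points
bias_rcp = avg_margin_rcp - actual_margin   # about 2.7 points

# If the same bias carried over to Virginia, a nominal one-point
# Youngkin lead would really be several points larger:
youngkin_lead = 1.0
low, high = youngkin_lead + bias_rcp, youngkin_lead + bias_538
print(f"Adjusted Youngkin lead: {low:.1f} to {high:.1f} points")
```

With these rounded inputs, the adjusted lead comes out to roughly four to five points, which is the figure used below.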
I wrote previously about two different groups studying what happened last year8, but according to Debenedetti:
…[T]he pollsters…are offering a new round of projections without ever having quite figured out what went wrong last year. … The industry set out to resolve all the ugly questions around the disaster of 2020 in the standard manner: with a big autopsy. In July, the American Association for Public Opinion Research [AAPOR] released its eagerly awaited (in the biz) report with the cooperation of a range of political and academic researchers. It first concluded that that year's national-level polling error―4.5 percentage points, on average…. And 2020's mistakes were different from 2016's, it continued:… Now…something bigger and scarier had happened. They just still couldn't conclude what, exactly, that was.
That discomfiting research wasn't the only effort of its kind. …[F]ive rival Democratic Party polling firms―including Joe Biden's―secretly joined forces two weeks after the election to find a diagnosis and solution. But when they revealed the collaboration in April, they still had no solid answers. One pollster involved in the Democratic effort told me the coalition was still deep in its experiments and didn't expect to know much at all for at least a few more months, "at a minimum."1
The studies seem to have ruled out some possible suspects, though:
The AAPOR report preempted one obvious question: No kind of interview―phone, text, online―clearly outperformed the others in terms of accuracy in 2020. It also ruled out the notions that 2020's issues were caused by anything from late-deciding voters who skewed the numbers to mis-weighted demographics when pollsters tried projecting the makeup of the electorate. It couldn't even be poll respondents' hesitance to admit they liked Trump. The Democratic groups, for their part, said they'd fixed their (relatively small) 2016 error by increasing representation for white non-college voters in their samples, and that their polling results in 2017 and 2018 races looked good―at least before 2020 spoiled that fix and revealed that their numbers were especially bad in more Republican areas of the country.1
This certainly sounds like some kind of anti-Republican bias, and only one suspect seems to remain:
The likely problem, in short, is that they simply aren't reaching a significant number of voters activated by Trump, perhaps because they don't know how to find them, or maybe because those voters mistrust and therefore ignore polls. …"We would argue nonresponse is, by far, above and beyond, the biggest issue here," Johannes Fischer, the survey methodology lead for progressive polling outfit Data for Progress, told me this month. If nonresponse has always been a problem, its sheer scale is what's new now.1
Nonresponse bias9 arises when those who do not respond to a poll are different in some relevant way from those who do respond, which makes samples unrepresentative of the population. If the people who voted for Trump were more likely to be suspicious of polls and, therefore, less likely to respond to them than those who voted for Biden, the result would have been a bias in favor of Biden and against Trump. Ironically, such suspicions may have been a self-fulfilling prophecy: some voters thought polls were biased, then refused to participate, which resulted in biased polls.
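A small simulation illustrates how nonresponse bias works. Every number below is invented for illustration; the point is only that if one candidate's supporters respond less often, the poll stays skewed no matter how many voters are contacted:

```python
import random

random.seed(42)

TRUE_R_SHARE = 0.48            # invented: 48% of voters back candidate R
RESPONSE_RATE = {"R": 0.06,    # invented: R supporters answer pollsters
                 "D": 0.08}    # less often than D supporters

def run_poll(n_contacts):
    """Contact voters at random; tally only those who agree to respond."""
    r_responses = d_responses = 0
    for _ in range(n_contacts):
        voter = "R" if random.random() < TRUE_R_SHARE else "D"
        if random.random() < RESPONSE_RATE[voter]:
            if voter == "R":
                r_responses += 1
            else:
                d_responses += 1
    return r_responses / (r_responses + d_responses)

estimate = run_poll(200_000)
print(f"true R share {TRUE_R_SHARE:.1%}, polled R share {estimate:.1%}")
# The poll understates R support by several points. Contacting more
# voters shrinks the random error but leaves this systematic bias intact.
```

With these made-up response rates, the polled share of R supporters comes out several points below the true 48%, even though the sample of respondents runs into the thousands.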
If the polls are "broken" due to nonresponse bias, did Trump break them? Did his public attacks on polls10 cause enough of his supporters to stop responding that the polls are now unreliable? If so, then there is a simple fix:
In talking to a range of nonpartisan and party-affiliated pollsters in recent weeks, I found that many dismissed, but laughed nervously about, the least scientifically sound idea of all, which unfortunately would have looked on the surface like a fix in 2020: just artificially slapping four extra points of support on Trump's side. …Some pollsters, such as [Patrick] Murray, have argued that the nonresponse problem appears to be Trump-specific, since the errors were far more pronounced in 2016 and 2020 than in any of the intervening or proceeding years, including the 2018 midterms. "The evidence suggests that when Trump's name is not on the ballot itself, we don't have a problem with missing a portion of the electorate that doesn't want to talk to us," he said. "The question is: Do we treat the 2020 election as something entirely brand-new, so we have to add a four-point arbitrary margin on the model for Republicans? Or do we look at when Trump has not been on the ballot and our polling has basically been okay? My working hypothesis is that's probably the better path to take, which means our 2021 polling isn't that different than what it was in 2017."1
An alternative hypothesis is that the bias is not so Trump-specific that it can't transfer from Trump to other Republican candidates, especially those he supports. Trump is supporting Youngkin in Virginia11, and McAuliffe has been portraying his Republican opponent as a Trump "wannabe"12. Meanwhile, Youngkin has been trying to put distance between himself and Trump without alienating Trump supporters13. Could Trump's support, or McAuliffe's attacks, have linked Youngkin to Trump sufficiently to have affected the response rate to recent polls? We'll have to wait till tomorrow or thereafter to find out.
I'll update this entry after the election results from Virginia are in.
Update (11/3/2021): 95% of the ballots in Virginia have been counted and the results are Youngkin at 50.68% and McAuliffe at 48.55%14, a difference of just over two percentage points. The Virginia Department of Elections will continue to accept absentee ballots until two days from now, for some bizarre reason, and the official results won't be certified until the fifteenth of this month, but I'm not going to wait.
A two-point win for Youngkin is not quite a confirmation of the hypothetical Republican nonresponse effect, but it is in the right direction, though it suggests that the effect is smaller than 3-4 points. However, since it's only about one point from the aggregated poll results, which is surely within the error bars for such averages, it's just as much a confirmation of the polls, and evidence against the idea that they're "broken".
Surprisingly, the election for governor of New Jersey, which also took place yesterday, is turning into a more interesting test of the nonresponse effect than Virginia. The New Jersey election didn't receive as much attention as Virginia because it was widely assumed that it would be an easy win for the incumbent Democrat, Phil Murphy. The final RCP average for the state showed Murphy ahead by 7.8 percentage points15. I can't find a polling average from 538, but the last six polls it lists all showed Murphy ahead, with a lead ranging from four to eleven points16.
Despite all that, the current election results are too close to call, with Murphy at 49.66% and his Republican opponent, Jack Ciattarelli, at 49.59%, with 88% of precincts reporting17. So, even if Murphy does win, the polling miss will be so large that the hypothesized nonresponse effect of 3-4 points is too small to account for it.
So, what can we conclude from this exercise? The polls are not broken in Virginia, but they are in New Jersey? Either there is no nonresponse effect, or if there is it's much smaller than hypothesized, or perhaps much bigger?
I'll leave it to you to decide, because I'm stumped.
Update (11/6/2021): An unusual mea culpa has been issued by Patrick Murray, the director of the Monmouth University Polling Institute:
I blew it. The final Monmouth University Poll margin did not provide an accurate picture of the state of the governor’s race. … I owe an apology…because inaccurate public polling can have an impact on fundraising and voter mobilization efforts. But most of all I owe an apology to the voters of New Jersey for information that was at the very least misleading.18
I mentioned in the above Update that the polls showed the incumbent governor ahead by a margin ranging from four to eleven percentage points, the high end of which was from Monmouth19. Murray writes: "Monmouth’s conservative estimate in this year’s New Jersey race was an 8-point win for Murphy, which is still far from the final margin18." The poll predicted the outcome of the race correctly, since it appears that Murphy won by a little over two percentage points17, and I suspect that many pollsters would have simply defended that as a hit rather than apologizing. Murray's admirable forthrightness in admitting error is rare among pollsters.
Despite apologizing at the beginning of his opinion piece, Murray spends a large part of it defending himself and Monmouth's other polling results. However, I'm more interested in what he thinks caused Monmouth's large error, as well as why the polls in general were so wrong about the closeness of that race:
Election polling is…prone to its fair share of misses if you focus only on the margins. For example, Monmouth’s polls four years ago nailed the New Jersey gubernatorial race but significantly underestimated Democratic performance in the Virginia contest. This year, our final polls provided a reasonable assessment of where the Virginia race was headed but missed the spike in Republican turnout in New Jersey.
The difference between public interest polls and election polls is that the latter violates the basic principles of survey sampling. For an election poll, we do not know exactly who will vote until after Election Day, so we have to create models of what we think the electorate could look like. Those models are not perfect. They classify a sizable number of people who do not cast ballots as “likely voters” and others who actually do turn out as being “unlikely.” These models have tended to work, though, because the errors balance out into a reasonable projection of what the overall electorate eventually looks like.
Monmouth’s track record with these models…has been generally accurate within the range of error inherent in election polling. However, the growing perception that polling is broken cannot be easily dismissed.18
Murray here is raising a problem distinct from the nonresponse bias that I discussed above, namely, that the failure of Monmouth's poll may be due to its "likely voter" models. So, it needn't be the case that Trump supporters, conservatives, or Republican voters are not responding to polls; rather, the pollsters may be judging them unlikely to vote and weighting the results accordingly. If this hypothesis is correct, then it ought to be possible to fix the pollsters' models, at least if there is a systematic change in the likelihood of voting.
As surprising as Murray's apology is, he even more surprisingly suggests that there ought to be fewer election polls. Such a suggestion may be possible because he works for a university rather than a commercial polling company:
Some organizations have decided to opt-out of election polling altogether, including the venerable Gallup Poll and the highly regarded Pew Research Center, because it distracts from the contributions of their public interest polling. Other pollsters went AWOL this year. For instance, Quinnipiac has been a fixture during New Jersey and Virginia campaigns for decades but issued no polls in either state this year.
Perhaps that is a wise move. If we cannot be certain that these polling misses are anomalies then we have a responsibility to consider whether releasing horse race numbers in close proximity to an election is making a positive or negative contribution to the political discourse.
This is especially important now because the American republic is at an inflection point. Public trust in political institutions and our fundamental democratic processes is abysmal. Honest missteps get conflated with “fake news”—a charge that has hit election polls in recent years. … If election polling only serves to feed that cynicism, then it may be time to rethink the value of issuing horse race poll numbers as the electorate prepares to vote.18
Unlike Murray, I'm not particularly worried about the effects of polling failures on the public's trust in political institutions. Such institutions in general are currently doing a terrible job, and the public should recognize that fact. People should have less trust in polls than many seem to, partly because of those failures. One advantage of having so many polls is that it encourages skepticism about polling, since it's so easy to see how widely their results range. Of course, a healthy skepticism about polling is not the same as a dismissive cynicism, and I hope that the latter is not encouraged.
However, it would probably be better if there were fewer polls, because that might lead to less "horse race" coverage. The news media sponsor most of them because covering a campaign as if it were a race is dramatic and easy. No matter what a poll shows, it's considered newsworthy. So, it's unlikely that we'll see the end of election polling, since it allows the media to manufacture news rather than just sit around waiting for something to happen. A big benefit of fewer polls would be less such lazy reporting, and perhaps more reporting on issues and checking of factual claims made by the candidates instead. One can always dream, anyway.
Notes:
1. Gabriel Debenedetti, "Polling in America Is Still Broken. So Who Is Really Winning in Virginia?", New York Magazine, 10/28/2021.
2. "Who's ahead in the Virginia governor's race?", Five Thirty Eight, accessed: 11/1/2021.
3. "Virginia Governor―Youngkin vs. McAuliffe", Real Clear Politics, accessed: 11/1/2021.
4. "Winning margins in the electoral and popular votes in United States presidential elections from 1789 to 2020", Statista, accessed: 10/30/2021.
5. "Who's ahead in the national polls?", Five Thirty Eight, accessed: 10/30/2021.
6. "National General Election Polls", Real Clear Politics, accessed: 10/30/2021.
7. See: Errors of Unusual Magnitude, 7/19/2021.
8. See the previous note and: What Biased Last Year's Polls?, 4/27/2021.
9. Sheldon R. Gawiser & G. Evans Witt, A Journalist's Guide to Public Opinion Polls (1994), pp. 92-95.
10. Lindsey Ellefson, "Trump Admits He Calls Polls ‘Fake’ When They Don’t Favor Him (Video)", The Wrap, 7/12/2021.
11. Jill Colvin, "Trump Plans Last Minute Tele-Rally for Virginia's Youngkin", Associated Press, 10/28/2021.
12. Aila Slisco, "After Larry Elder's Defeat, Terry McAuliffe Tries to Paint Glenn Youngkin as 'Trump Wannabe'", Newsweek, 9/16/2021.
13. Darragh Roche, "Glenn Youngkin Keeps Distance From Unpopular Donald Trump in Virginia", Newsweek, 10/15/2021.
14. "2021 November General", Virginia Department of Elections, 11/3/2021.
15. "New Jersey Governor―Ciattarelli vs. Murphy", Real Clear Politics, accessed: 11/3/2021.
16. "Latest Polls", Five Thirty Eight, 11/2/2021.
17. "New Jersey Election Results", The New York Times, accessed: 11/3/2021.
18. Patrick Murray, "Pollster: ‘I blew it.’ Maybe it’s time to get rid of election polls.", NJ, 11/5/2021.
19. "Murphy Maintains Lead", Monmouth University Polling Institute, 10/27/2021.
October 31st, 2021 (Permalink)
Halloween at the New Logicians' Club
For once, I arrived on time for the New Logicians' Club's annual Halloween party. Since it was a Halloween party, all the members were designated by lot as either vampires or vampire-hunters. As each member entered the room where the party was held, he or she reached into a bag and pulled out a card which read either "vampire" or "vampire-hunter". The vampires, of course, always lied and the hunters always told the truth. Thankfully, I drew a vampire-hunter card, so I can truthfully tell you what happened.
I was seated at a table with three other members whom I didn't know. "Hello, I'm a vampire-hunter," I introduced myself, "what are you three?"
The member seated to my left, whose name tag read "Boris", replied: "We three are all vampires."
The member to his left, whose name appeared to be "Bela", mumbled something inaudible through a mouthful of food.
"What did he say?" I asked the remaining member, whose name was "Lon".
"He said: 'One and only one of us three is a vampire-hunter'", Lon answered.
Fortunately, I was able to determine from this exchange which of the three members at the table were vampires and which were hunters. Can you?
Extra Credit: What were the last names of the three other members at the table?
Solution: Boris and Lon were vampires; the only vampire-hunter at the table, other than me, was Bela.
Explanation: Boris claimed that all three of them were vampires. If that were true then Boris himself would be a vampire and, therefore, lying; but if he were lying, then it would not be true. Therefore, it is not true and he is lying, so he is in fact a vampire, but at least one of the other two is a vampire-hunter.
Could both of the other two be vampire-hunters? Lon claimed that Bela said that one and only one of the three was a vampire-hunter. If both Bela and Lon were hunters, then what Lon said would be true, so Bela really did say it. But then two of the three would be hunters, making Bela's statement false; a hunter, however, cannot say something false, which is a contradiction. Therefore, exactly one of the three is a hunter.
Could Bela be a vampire and Lon a hunter? If so, then Lon was right when he said that Bela claimed that one and only one of them is a hunter. However, we've concluded that one and only one of them is a hunter, so what Bela said would be true, which would make Bela a hunter, contradicting the assumption that he is a vampire.
The only possibility left is that Bela is a hunter and Lon is a vampire. In that case, the statement Lon attributed to Bela is true, namely, that one and only one of them is a hunter. However, Lon was lying when he attributed it! So, even though the statement is true, Bela never said it. What did Bela say? Sadly, that's a puzzle we'll never solve.
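For readers who prefer brute force to case analysis, the solution can be checked by enumerating all eight possible role assignments. The encoding of the statements below is mine, not part of the puzzle:

```python
from itertools import product

def consistent(boris, bela, lon):
    """Check whether a role assignment (True = vampire-hunter/truth-teller,
    False = vampire/liar) is consistent with the table talk."""
    roles = (boris, bela, lon)
    # Boris: "We three are all vampires." A hunter must speak truly,
    # a vampire falsely, so Boris's role must match his claim's truth.
    boris_claim = not any(roles)
    if boris != boris_claim:
        return False
    # Lon: "Bela said: 'One and only one of us three is a vampire-hunter.'"
    # If Lon tells the truth, Bela really said it, so Bela's role must
    # match that statement's truth value. If Lon lies, Bela said something
    # else, which places no constraint on Bela.
    if lon:
        bela_claim = sum(roles) == 1
        if bela != bela_claim:
            return False
    return True

solutions = [r for r in product([True, False], repeat=3) if consistent(*r)]
print(solutions)  # [(False, True, False)]: Boris and Lon vampires, Bela a hunter
```

The enumeration confirms that exactly one assignment survives: Boris and Lon are vampires, and Bela is the hunter.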
Extra Credit Solution: The full names of the three were: Boris Karloff, Bela Lugosi, and Lon Chaney. I'm not sure whether Chaney was Junior or Senior.
Disclaimer & Disclosure: This puzzle is a work of fiction. The names were changed to protect the vampires.
October 29th, 2021 (Permalink)
Remembering the Sokal Hoax & Another Sign of The Times
- James B. Meigs, "How Alan Sokal Won the Battle but Lost the 'Science Wars'", Commentary, 11/2021
It was the greatest emperor's-new-clothes gag in modern intellectual history. Physicist Alan Sokal's famous hoax article―a putative attack on the legitimacy of science and even on the notion of "objectivity" itself―appeared in the trendy academic journal Social Text in the spring of 1996. With its precise mimicry of postmodern language and ideas, Sokal's parody worked like a laser scalpel, mercilessly exposing the movement's incoherence and foolishness. Even the paper's title―"Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity"―perfectly captured the Olympian pretentiousness of the field. And the journal's editors fell for it. Hard.
A few weeks after the paper was published, Sokal revealed the truth: He'd come to bury postmodernism, not to praise it. His stunt, now universally known as the "Sokal Hoax," proved that the editors of the most prestigious postmodern journal in America couldn't tell the difference between an actual work of scholarship and a vicious satire intended to make them look silly. Even 25 years later, Sokal's paper remains stunningly funny and audacious; every word is a delight. But reading it today is also disquieting. The academic absurdities that Sokal punctured with surgical precision no longer strike one as particularly outré. If anything, they are now commonplace.
The idea that science is just one of many equally valid "ways of knowing," that Western rationalism is ideologically corrupt, that "your truth" is largely determined by your gender or the color of your skin…influence the views of ordinary Americans about everything from our own history to the safety of vaccines. …
Sokal toiled on his manuscript for months. "I had to revise and revise until it reached the desired level of unclarity," he said. Meanwhile, the editors of Social Text were planning a special issue, intended to be a resounding rebuttal to the criticisms lodged by…scientists. Though Sokal wasn't aware of the project at the time, his faux paper fit their "Science Wars" issue like a skeleton key in a padlock. … They wanted to put those quibbling scientists in their place. And here came a real scientist―an expert in quantum mechanics, no less!―telling them the postmodernists had been right all along. It was (literally) too good to be true. …
In plain English, Sokal's essay says that science as most of us conceive it is a scam. …
Looking back 25 years later, some might see Sokal's hoax as an exercise in shooting some inconsequential fish in a very small barrel. Did it really matter if some Marxist professors were advancing ridiculous ideas in a few elite universities? Sure, postmodernism, critical studies, and various related schools of thought were challenging core elements of the Enlightenment tradition: the aspiration toward objectivity, the dedication to rationalism, the primacy of the individual. But haven't universities always been places where young people are exposed to a range of ideas? What's the harm in learning about some radical views? Won't most students leave all this behind when they graduate and start making their way in the real world? Thus did many on the mainstream left shrug off the warning that Sokal had delivered.
A radical mindset was chipping at the pillars of rational inquiry and democratic values. Yet those ideas received surprisingly little in the way of vigorous academic counterargument. …
After the Sokal Squared stunt was revealed, Harvard psychologist Steven Pinker asked, "Is there any idea so outlandish that it won't be published in a Critical/PoMo/Identity/'Theory' journal?" The answer, apparently, is no. It doesn’t seem that any amount of ridicule can slow the left's ideological juggernaut. And, unlike in the days of the Sokal Hoax, the main arena for anti-rationalist thinking is no longer just the elite academy. The anti-Enlightenment ideas cooked up over the decades in trendy journals and in departments of literature and sociology have now escaped the lab. They are self-replicating and circulating freely in our society.
"There is no objective, neutral reality," Robin DiAngelo writes in her bestselling White Fragility. In fact, she sees that claim as so self-evident that it doesn't even require an explanation or defense. The New York Times' "1619 Project" treats American history not as a set of facts to be weighed, but as a text, one whose true meaning is open to radical reinterpretation in the hands of critical theorists. "Anti-racist" training materials urge us to reject the culture of white supremacy, which includes dangerous ideas such as "the belief that there is such a thing as being 'objective,'" or the notion that "linear thinking" and "logic" are desirable ways to understand the world. And on and on.
When we look at the collapse of rationality all around us, it seems that while Alan Sokal might have won his battle with postmodern lunacy, he ultimately lost the war. Sokal wrapped up his 1996 hoax essay with a resounding call to action, a campaign that "must start with the younger generation." One hears a faint echo of China's Cultural Revolution in his urgent admonition: "The teaching of science and mathematics must be purged of its authoritarian and elitist characteristics, and the content of these subjects enriched by incorporating the insights of feminist, queer, multiculturalist, and ecological critiques." Sokal meant his essay as a parodic warning. Twenty-five years later, it appears that the Sokal Hoax was actually an instruction manual.
The following lengthy article about the continuing decline and fall of The New York Times (NYT) is worth reading as a whole, especially if you're not familiar with many of the signposts along the way: Bari Weiss, Tom Cotton, James Bennet, Donald McNeil, and so on. I've edited out the details, so if you're not familiar with these names, read the whole thing.
- Batya Ungar-Sargon, "Sign of the Times", Spectator World, 10/18/2021
In 2018, … the Data Science Group at the Times launched a project to understand and predict the emotional impact of the paper's articles. They asked 1,200 readers to rate their emotional responses to articles, with options including boredom, hate, interest, fear, hope, love and happiness. These readers were young and well-educated―the target audience of many advertisers.
What the group found was perhaps not surprising: emotions drive engagement. 'Across the board, articles that were top in emotional categories, such as love, sadness and fear, performed significantly better than articles that were not,' the team reported. To monetize the insight, the Data Science Group created an artificial intelligence machine-learning algorithm to predict which emotions articles would evoke. The Times now sells this insight to advertisers, who can choose from 18 emotions, seven motivations and 100 topics they want readers to feel or think about when they encounter an ad.
'By identifying connections between content and emotion, we've successfully driven ad engagement 6X more effectively than…benchmarks,' the Times's Advertising website proudly declares. 'Brands can target ads to specific articles we predict will evoke particular emotions in our readers,' it pitches. 'Brands have the opportunity to target ads to articles we predict will motivate our readers to take a particular action.' As of April 2019, Project Feels had generated 50 ad campaigns, more than 30 million impressions, and strong revenue results.
No wonder the NYT and other news outlets continue to hype fear of COVID-19; this is how they sell ads. We saw in an earlier Recommended Reading that emotions, especially hate, are also the ingredients in the "secret sauce" behind the NYT's current success in selling subscriptions2.
If you want to know what makes America's educated liberal elites emotional, you only have to open the Times. Judging by the coverage of recent years, two things make them more emotional than anything else: Trump and racism. … [L]iberal news media, increasingly reliant on digital advertising, subscriptions and memberships, have been mainstreaming an obsession with race, to the approval of their affluent readers. And what was once a business model built on a culture war has over the past few years devolved into a full-blown moral panic.
Any journalist working in the mainstream American press knows this, because the moral panic is enforced on social media in brutal shaming campaigns. They have happened to many journalists, but you don't actually have to weed out every heretic to silence dissent. After a while, people silence themselves. Who would volunteer to be humiliated by thousands of strangers, when they could avoid it by staying quiet? The spectacle alone enforces compliance.
Once upon a time, telling the truth 'without fear or favor' was the job description of a New York Times journalist. Today, doing the job that way could very well cost a journalist his or her job. The people who are supposed to be in charge of the nation's most august publications now routinely capitulate to the demands of the Twitter mob. … It is now normal for editors at legacy publications to capitulate to outrage not only from their readers, but from their own staff. That's what's so shocking about this censorious development in American journalism. It's not that online activists would try to use their power to enforce their views. It's that older journalists―people who should, who do, know better―now surrender to the pressure. …
…A moral panic…is a form of mass hysteria that happens when people come to believe that some hostile force threatens their values and safety. But it requires some level of consensus about the evil represented by the hostile force. …[T]he media have always played a key role in moral panics by invention, exaggeration and distortion.
This bears repeating: there can be no moral panic without the media and the social consensus they create. The power of the press―despite its unpopularity―is still immense. And it has used that power over the past decade, and with exponential intensity over the past few years, to wage a culture war on its own behalf, notably by creating a moral panic around racism.
There is no "social consensus" in favor of moral panic. For instance, two-thirds of Americans surveyed, including half of blacks, thought that the NYT should publish opinion pieces such as the one by Senator Tom Cotton that caused such a ruckus3. Moreover, a slightly smaller majority of registered voters agreed with Cotton's opinion in that piece, including more than a third of blacks4. This is not to say that Cotton was right, but that it's ridiculous to act as if he expressed some sort of extreme view when it was the view of the majority of Americans at the time. Cotton's view is the "social consensus", not the NYT and other news media's moral panic.
Nor is it surprising that the New York Times played an outsized role in shaping our moral panic. Its business model is deeply bound up with the mores of affluent white liberals. Inevitably, in the spring of 2020, it turned its wrath on its own. By the time the dust settled, five people would no longer work at the Times. …
The harm is…to the public sphere and the journalists whose job requires they have the humility to submit to the pursuit of fairness and truth. It's public debate that bears the brunt of the damage. We are being denied the chance to hash out a controversy rather than hide from it.
These values are crucial not just to journalism but to democracy and to freedom. They used to be the values of the New York Times. Not anymore. … As the Twitter mob pursues small infractions as avidly as it does large ones, and as the etiquette keeps shifting, who dares trust their own ability to judge right from wrong?
It's how you know we're in a moral panic: only the mob has the right to judge you. And too many journalists have ceded them that right. Indeed, a huge number of the mob are journalists―journalists from the most important newspapers in the country and the world…. People who had been hired to think for themselves now mindlessly repeat a dogma like their jobs depended on it.
Well, they do.
The article excerpted above discusses the Sokal hoax, which I mentioned in last month's Recommended Readings1, and the "Sokal Squared" hoaxes of more recent years.
Notes:
- , 9/30/2021.
Disclaimer: I don't necessarily agree with everything in the articles, above, but I think they're worth reading as a whole. In abridging excerpts, I sometimes change the paragraphing and rearrange their order so as to emphasize points.
October 22nd, 2021 (Permalink)
What is rationality, and why are people saying terrible things about it?
Quote: "Rationality ought to be the lodestar for everything we think and do. (If you disagree, are your objections rational?) Yet in an era blessed with unprecedented resources for reasoning, the public sphere is infested with fake news, quack cures, conspiracy theories, and 'post-truth' rhetoric. … Many act as if rationality is obsolete―as if the point of argumentation is to discredit one's adversaries rather than collectively reason our way to the most defensible beliefs. In an era in which rationality seems both more threatened and more essential than ever, Rationality is, above all, an affirmation of rationality."1
Title: Rationality
Subtitle: What It Is, Why It Seems Scarce, Why It Matters
Comment: The subtitle is made up of three distinct questions that are addressed in the book. The first and third questions are philosophical, whereas the second is sociological, or perhaps political. The first question is an obvious one to ask, if not to answer, but the others are more surprising. That the book would ask the second question is a sign of the times, since it assumes that rationality really does seem scarce―I'm not suggesting it doesn't, as it certainly seems so to me―but I'm not sure that it's any scarcer now than it used to be. The third question is also a sign of the times: is it really necessary to explain why rationality matters? Perhaps the fact that some people today don't seem to understand why is part of the reason it seems so scarce.
Author: Steven Pinker
Comment: Pinker is a psychologist―or, to use the modern jargon, a "cognitive scientist"―but he's the author of one of the best books of philosophy I've ever read, namely, The Blank Slate2. As I mentioned in the comment on the subtitle questions, the first and third questions are philosophical, so Pinker's previous book makes me eager to read this new one.
Summary: The first chapter, "How rational an animal?", is the only one I've read in full, thanks to an online sample. It deals with two topics: first, it argues that human beings are, indeed, rational animals, and explains why; second, it gives examples of ways we can also be irrational animals, some of which may be familiar to Fallacy Files readers: the Cognitive Reflection Test3, the Wason card test4, and the Monty Hall problem5. Pinker also gives a version of the "Linda" problem; if you're not familiar with it, try the following one―it's mine, so don't blame Pinker:
Lynnda is a 28-year-old vegan whose preferred pronoun is "s/he". S/he attended Evergreen State College and majored in Intersexional Studies. S/he has four tattoos and lives with three cats. Which of the following is most probable?
- S/he is a real estate agent.
- S/he is a barista at a local coffee house.
- S/he bowls in a local bowling league when not working as a real estate agent.
- S/he attends protests against police violence when not working as a barista at a local coffee house.
- S/he voted for Donald Trump in the last election.
If you thought 4 was the most probable option, you're normal―wrong, but normal. 2 is more probable than 4. Also, 1 is more probable than 3, for the same reason. If you thought 5 was the most probable, see a doctor. If you want to know why all this is the case, see the entry for the Conjunction Fallacy or read the book.
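Why must 2 be more probable than 4? Because option 4 is a conjunction that includes option 2, and a conjunction can never be more probable than either of its conjuncts. Here's a minimal sketch of that inequality; the probabilities are invented purely for illustration and have nothing to do with any real Lynnda:

```python
# Conjunction fallacy: P(A and B) <= P(A), whatever the numbers are.
# The figures below are made up; only the inequality matters.
p_barista = 0.05                # P(Lynnda is a barista) -- option 2
p_protests_given_barista = 0.5  # P(attends protests, given barista)

# Option 4 is the conjunction "barista AND attends protests":
p_barista_and_protests = p_barista * p_protests_given_barista

# The conjunction can't beat its own conjunct, since P(B|A) <= 1:
assert p_barista_and_protests <= p_barista
print(p_barista_and_protests)  # 0.025
```

No matter how well "attends protests" fits the stereotype, multiplying by a probability of at most 1 can only shrink the result, which is why 1 likewise beats 3.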
Judging mostly from their titles, the subjects of the remaining chapters are: logic; probability theory―Bayes' theorem gets its own chapter, as well it should―rational choice theory; statistical decision theory; game theory; and correlation and causation. The final chapter appears to be the one that addresses the question of why rationality matters. This may sound like pretty heavyweight material, but Pinker is very good at explaining difficult matters in a comprehensible way.
Comment: I don't need any convincing that rationality matters, but I'm skeptical about the value of trying to convince those who are not already convinced. How are you supposed to do that? By appealing to their reason? If they don't trust reason, how can that work? You might as well try to stand up by pulling on your own hair.
However, there may be some who have heard the attacks on rationality, are confused by them, and may benefit from understanding what rationality is, why it is important, and why those attacks are fallacious. You can't reason directly to the conclusion that reason works―that's obviously circular―but you can reason indirectly that the arguments against it are not cogent. Once the confusing nonsense is swept away, natural rationality should do the rest. People are rational animals, and logical fallacies and cognitive illusions only show that we are not perfectly so. Also, the more we learn about the mistakes we make, the better we can learn to avoid those mistakes, becoming more rational in the process.
The Blurbs: The book has a strange endorsement from Jonathan Haidt: "If you've ever considered taking drugs to make yourself smarter, read Rationality instead." How many people is the antecedent true of? Is that a big potential readership? Will reading the book actually make you smarter, or will it make it obvious that it's irrational to take drugs for that purpose?
Date: 2021
Disclaimer: This is a new book and I haven't read it yet, so I can't review or recommend it. However, its topic interests me, and it may also interest Fallacy Files readers. The problem is a work of fiction, and any resemblance of Lynnda to persons living or dead is totally coincidental and distinctly unfortunate.
Notes:
- "Preface".
- Steven Pinker, The Blank Slate: The Modern Denial of Human Nature (2002).
- Versions of two of the three problems that make up the test are given here: I'm with Stupid, 6/21/2012. Here's a link to the article discussed in the entry: Jonah Lehrer, "Why Smart People Are Stupid", The New Yorker, 6/12/2012.
- I mentioned the Wason test in passing here: Are you intelligent but irrational?, 11/11/2009.
- I mentioned the Monty Hall problem in passing here: Playing with Your Mind, 9/21/2021.
October 16th, 2021 (Updated & Corrected) (Permalink)
Fact Checks, Vast Majorities, and Outright Falsehoods
The ratings systems of professional fact-checking groups often come in degrees1. For instance, PolitiFact's "Truth-O-Meter" has six ratings: True, Mostly True, Half True, Mostly False, False, and Pants on Fire!2 Similarly, The Washington Post's Fact Checker uses a system of five symbols: a Geppetto Checkmark for "claims that contain 'the truth, the whole truth, and nothing but the truth'", and one to four "Pinocchios" for various degrees of falsehood3.
In the previous entry in this series4, I criticized some of the professional fact-checking groups for using ratings―such as "Pinocchios" and "Pants on Fire!"―that suggest those so rated were lying. In this entry, I examine a different problem with such systems, namely, that they treat truth and falsity as if they come in degrees.
Fact-checkers may be tempted to use such systems because they fail to logically analyze what they are checking into distinct factual claims before rating them. If, for instance, someone asserts a logical conjunction of two claims, one of which is true and the other false, it may be tempting to rate the assertion "half true". Suppose I claim that it is raining and the sun is shining, when as a matter of fact it is raining but overcast. We know from the truth conditions of conjunctions that the statement as a whole is false when either conjunct is false. So, either the conjunctive statement should be separated into two distinct claims, each rated on its own, or it should be rated "false" rather than "half true".
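The truth-functional point fits in a few lines. This is a toy illustration of the logic, not anything a fact-checker actually runs:

```python
# Truth conditions of a conjunction: the compound claim is false
# whenever either conjunct is false.
raining = True       # it is in fact raining
sun_shining = False  # it is in fact overcast

compound = raining and sun_shining  # "it is raining and the sun is shining"
print(compound)  # False: one false conjunct falsifies the whole conjunction

# The better alternative: rate each conjunct separately.
separate_ratings = {"it is raining": raining,
                    "the sun is shining": sun_shining}
print(separate_ratings)  # one true claim, one false claim -- not "half true"
```

Nothing here is "half true": the conjunction is simply false, and the separated claims are one true and one false.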
A factual claim of the type that can be checked at all is either true or false, and never both. So-called half-truths are not half true, but wholly true. This doesn't mean that half-truths are not misleading; in fact, they can be more misleading than whole lies. The very fact that they are true, but not the whole truth, may mislead even more effectively than a lie would. Such a claim falls short of the whole truth because it leaves out important context, and it's a perfectly fine public service for fact-checkers to supply that missing context, but the omission shouldn't be reflected in the ratings system, except in a rating such as: "True, but missing important context".
Fact checkers may also be tempted to use degrees of truth to rate vague claims. For instance, if Barry is balding we may be tempted to say that the claim that he is bald is "half true"―or is it half false?―but if Barry really is in the twilight zone between bald and not bald, then the claim is neither true nor false. It's the nature of vague words, such as "bald", that their meaning is not sufficiently fixed to answer certain questions, such as: "Is Barry bald?" Later in this entry, we'll see an example of how it can be possible to check claims for truth and falsity even when they contain vague language. However, if a claim is so vague that it's not clear whether it's true or false, it's best to either ignore it, or point out that it's too vague to rate and explain why.
These rating systems are an open invitation to bias in rating factual claims. A fact-checker rating a false claim made by someone whose politics the checker agrees with is likely to downgrade the rating from False to Mostly False, or even Half True, or to give it less than four Pinocchios, let alone "Pants on Fire!". Similarly, a checker rating a true claim made by someone whose politics the checker dislikes is likely to find some excuse to downgrade it to Half True, or even to Mostly False. Let's look at an example of this process.
In the previous entry4, I used an example from PolitiFact, though I'm sure that I could have found just as good an example, and possibly an even better one, elsewhere. For this entry, I intentionally avoided PolitiFact so as not to pick on it. Instead, the example I have chosen is from The Washington Post's Fact Checker column5.
Earlier this year, the District Attorney of San Francisco, Chesa Boudin, made the following claims during a television interview:
Like the majority of Americans, I grew up with an immediate family member incarcerated. The majority of Americans have an immediate family member who is either currently or formerly incarcerated, so I have that in common with the vast majority of people in this country.6
The interviewer did not call into question these rather alarming claims, or even ask where Boudin got them. Of course, it's hard to fact check statistical claims in the middle of an interview*. Moreover, I haven't seen any sign that the program did any follow-up reporting itself. One public service that fact checkers can provide is to check such claims when the news media fail to do their jobs. It appears that a skeptical viewer's question initiated The Post's fact check.
The claim that the majority of Americans have an immediate family member incarcerated either now or in the past is a surprising one, let alone that the vast majority do. It's in just such moments of surprise that one's skeptical immune system should be engaged. The reason why you are surprised at a claim is that it goes against your own experience or knowledge. Of course, your own experience is limited, and what you think you know may be wrong. Some surprising things turn out to be true, but many turn out to be false, and the purpose of fact checking is to separate the two.
As mentioned above, an often neglected step in fact checking is logically analyzing complex factual claims into their true-or-false components. In this case, Boudin made three distinct factual claims:
- He himself had an immediate family member incarcerated.
- The majority of Americans have an immediate family member who is either currently or was formerly incarcerated.
- The vast majority of Americans have an immediate family member who is either currently or was formerly incarcerated.
I don't think there's any doubt that the first claim is true, so we won't waste any further time on it7. Rather, it's the second and third claims that call for checking. Though obviously logically similar, those two claims are distinct. The word "vast" is the only difference between the two sentences, but there is a difference, if not a vast one, between a majority and a vast majority.
The second claim is true if, and only if, greater than half of Americans have an immediate family member who is either currently or was formerly incarcerated. So, if just over 50% of the American population fits the bill, then the claim is true.
The third statement makes a logically stronger claim, that is, if it is true then the second claim will also be true, but not the other way around. In other words, a "vast" majority is a majority, but not every majority is vast. For this reason, Boudin's second claim could be true while his third one was false.
There are two vague words in these claims:
- "Immediate": What exactly is an "immediate" family member? Presumably, parents, children, and siblings would count. What about grandparents or grandchildren? Do the family members have to live together to be "immediate"?
- "Vast": The "vast" majority of some class is obviously greater than a simple majority, but how much greater is unclear. Is 55% a "vast" majority? How about 60%? No doubt 95% would be a vast majority, but what about 90%? 85%?
So, both of the claims are vague, but that doesn't automatically rule out rating them as definitely true or false. While vague terms have borderline cases―such as the aforementioned Barry―they also have clear-cut cases, such as Barry's hairy brother, Harry, who has a luxurious head of hair. Though "Barry is bald" is neither true nor false, "Harry is bald" is clearly false. So far, we don't know whether the two claims in question are like "Barry is bald" or "Harry is bald", so we can't stop here.
Now that we've laid the logical foundation for checking these claims, let's proceed to examine The Post's fact check. How can these two claims be checked? Obviously, we need to know what percentage of the American population has had an immediate family member incarcerated. Unfortunately, there don't seem to be any official statistics that could answer this question. Instead, the only way to do so is to look at surveys that ask people whether they have had an immediate family member in jail or prison. The Fact Checker mentions two such surveys:
- The FWD.us Survey: This survey was funded by FWD.us, a political group that started out campaigning for immigration "reform"8, but has since added incarceration "reform" to its causes. It also supports the silly "people first language" doublespeak project9, but let's not hold that against the survey.
Here's the exact wording of the question asked by the survey:
Many people have been held in jail or prison for a night or more at some point in their lives. Please think about your immediate family, including parents; brothers; sisters; children; and your current spouse, current romantic partner, or anyone else you have had a child with. Please include step, foster, and adoptive family members. Confidentially and for statistical purposes only, have any members of your immediate family, NOT including yourself, ever been held in jail or prison for one night or longer?10
According to the report: "The data show that 45 percent of Americans have ever had an immediate family member incarcerated.11"
- The CNN/KFF Survey: This survey was paid for and conducted by Cable News Network and the Kaiser Family Foundation, a non-profit organization primarily focused on national health care12. In a survey on race, the following question was asked: "Have you or any of your family members or CLOSE friends ever been incarcerated, or not?13" 39% of respondents answered "yes".
The Fact Checker claims that these two surveys are incomparable because of the difference in question wording, but that's incorrect: the CNN/KFF question is logically broader than the one FWD.us asked. CNN/KFF asked about all family members, not just immediate ones, and it included the respondents themselves and their close friends. Since everyone who has had an immediate family member incarcerated has also had a family member or close friend incarcerated, the latter group is at least as large as the former, and its percentage should be at least as high. Instead, we see the opposite result in the surveys: 39% answered the broader question affirmatively, as contrasted with 45% answering the narrower one "yes". This is inconsistent, which means that at least one of these survey results must be wrong.
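The reasoning here is a simple monotonicity check: if every honest "yes" to the narrower question is also an honest "yes" to the broader one, the broader question's yes-rate cannot be lower. A sketch using the two surveys' reported figures:

```python
# If question B is logically broader than question A (everyone who should
# answer "yes" to A should also answer "yes" to B), then B's yes-rate
# can't fall below A's. The figures are the two surveys' reported results.
narrow_yes = 0.45  # FWD.us: immediate family member ever incarcerated
broad_yes = 0.39   # CNN/KFF: self, any family member, or a close friend

consistent = broad_yes >= narrow_yes
print(consistent)  # False: the two results can't both be right
```

The check doesn't tell us which survey erred, only that the pair is jointly impossible, which is all the argument above needs.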
Now, I don't know which survey is wrong, but it isn't necessary to decide. Despite their incompatible results, the two surveys agree on one thing: less than 50% of Americans have had an immediate family member incarcerated. This means that both of Boudin's claims are false.
Apparently, the Fact Checker asked Boudin himself or a spokesperson for the source of his statistics and was informed that they were based on the FWD.us survey5. Then, the checker proceeded to check the claims against the survey's findings, as though the question were whether Boudin had reported those findings correctly, though what the checker was supposed to check was whether the claims were true or false.
Relying on the FWD.us survey, the checker concluded: "The overall rate of Americans who have had an immediate family member behind bars, 45 percent, is remarkably high but not quite a 'majority' and far from a 'vast majority.'5" So, both of Boudin's claims were false, according to the very survey that his spokesperson cited.
Despite this finding, the Fact Checker spends the remainder of the column making excuses for Boudin and ends up awarding him a single Pinocchio, which is the lowest "false" rating available. Both of Boudin's claims were false, no matter which survey you consider, and his claim that the "vast majority" of Americans had had immediate family incarcerated was outrageously false. Yet, here's the Fact Checker rating and its explanation:
The Pinocchio Test: Boudin said the "majority" or "vast majority" of Americans currently have or previously had an "immediate family member" behind bars. The study he was citing backs him up to an extent―it found the rate was 45 percent overall, and 63 percent for Black Americans―but it's just shy of a majority of Americans. However, the researchers also asked about extended "family members you feel close with." When including those relatives, 64 percent of Americans, or nearly two-thirds, have had family in jail or prison. It's always a good idea for policymakers to read the underlying research, so errors like this can be avoided. For a light stretching of the facts, Boudin gets One Pinocchio.5
This is the sort of excuse-making you would expect to hear from a spokesperson for Boudin, not from an independent, objective fact-checker. Is 45% "just shy" of a vast majority? Did Boudin claim that the "vast majority" of Black Americans had had an immediate or extended family member incarcerated? No.
Here's the Fact Checker's description of what "One Pinocchio" is supposed to mean:
Some shading of the facts. Selective telling of the truth. Some omissions and exaggerations, but no outright falsehoods.3
What is an "outright falsehood" if these claims are not outright false? One meaning of "outright" that may apply here is "completely" or "totally"14, which brings us back to where we started: degrees of truth and falsity. Apparently, Boudin's claim that the vast majority of Americans have had immediate family in jail or prison was not false enough for the Fact Checker. What would it take to make it outright false: no Americans having immediate family behind bars? That would be absurd.
Both of the controversial claims made by Boudin are outright falsehoods, and one outrageously so, even based on the survey that was supposed to justify them. Using the Fact Checker's own criteria, these claims deserved at least three Pinocchios for "significant factual error and/or obvious contradictions.3" Of course, it would be better to drop the Pinocchios and simply label them both "false", tout court.
My criticism here is entirely of the fact check's rating and its explanation, not of the research that went into the body of the article. By presenting the facts, the article allows readers to come to their own conclusions about Boudin's claims, and judge for themselves whether "one Pinocchio" is a reasonable rating. This is a genuine public service that I don't mean to denigrate.
That said, fact checks such as this give fact checkers a reputation for bias. One thing that The Washington Post and other checkers should do to restore their reputations is stop using such rating scales. Instead of Pinocchios and Pants-on-fires, they should either switch to simple true and false or drop the ratings entirely and simply present the facts as Annenberg does, then let the reader be the judge.
*Correction (10/18/2021): Originally, I wrote: "Of course, it's hard to fact check statistical claims in the middle of an interview, especially when it's a friendly, 'softball' interview such as this with no significant skepticism expressed about anything Boudin said." I based this claim on the latter part of the interview in which the incarceration claims occurred, but the interviewer did express some skepticism about Boudin's claims about crime in San Francisco early in the interview, so I shouldn't have characterized it as "friendly" or "softball". My apologies to the interviewer and to readers for the mischaracterization.
Reader Response (10/18/2021): David Hawkins raises an important issue:
In this post you criticize the fact checkers for treating truth and falsity by degrees: "In this entry, I examine a different problem with such systems, namely, that they treat truth and falsity as if they come in degrees." However, further down in the post the grievance seems to be repeated: "Both of the controversial claims made by Boudin are outright falsehoods, and one outrageously so." The implication here seems to be that the outrageously false claim is even more false than the outright false claim.
I also recall from the "Fact-Checkers ≠ Lie-Detectors" post the subjectivity issues taken with the term "ridiculous", which could be applied here to the term "outrageous". What is the evidence that the second claim itself was outrageously false? Is 1 + 1 = 11 a ridiculous or outrageous falsehood relative to the outright falsehood of 1 + 1 = 3? To your point, both of these answers on a math quiz would be marked incorrect, and qualifying the truth or falsehood of a statement in any way seems to be judging them by degrees; this includes the various fact-checking ratings systems: ridiculous, outrageous, and reasonable.
I certainly didn't intend that interpretation of the word "outrageously". The meaning of the word I had in mind was "in a shocking way15". I don't think that an outrageously false claim is somehow more false than a more plausible one, just that it's more obviously false. The same is true of "ridiculous". You're right that such judgments are subjective, and what's obvious to me may not be obvious to you, or to the producers and reporters of the television show where Boudin made the false claims.
It's a bad idea to build such subjective judgments into the ratings system, if that's what the fact checkers are doing. One of the criticisms made of them is that they are just pundits masquerading as objective reporters16, and to the extent that they are building such subjective judgments as "ridiculous" and "outright" into their ratings, the criticism is correct.
I'm also a little shocked that Boudin's claims went without challenge, that no follow-up was done, and that the show broadcast such falsehoods and never corrected them. This is one reason we have a pandemic of misinformation today. There's been so much of this in recent years that I should no longer be shocked and outraged by it, but I still am.
I mentioned in the entry, above, the need of fact checkers and reporters for a "skeptical immune system" that will raise the alarm when confronted with a claim that is "outrageous" or "ridiculous". Of course, any warning system will have false positives, but a well-calibrated skeptical immune system will prevent most misinformation from entering your brain and taking up permanent residence. Given the epidemic of misinformation, and the failures of those who are supposed to prevent it but instead spread it themselves, we all need such a system.
Notes:
1. A happy exception is the Annenberg fact-checking project, which is one reason why I consider it the best fact checker; see: "Our Process", Fact Check, 8/12/2020.
2. Angie Drobnic Holan, "How we determine Truth-O-Meter ratings", PolitiFact, 10/27/2020.
3. Glenn Kessler, "About the Fact Checker column", accessed: 10/16/2021.
4. This is the second entry in the fact-checking series on what is wrong with professional fact-checking as it is now practiced. For the previous entry, see: Fact-Checkers ≠ Lie-Detectors, 8/27/2021.
5. Salvador Rizzo, "San Francisco DA claims 'vast majority' of Americans have had family behind bars", The Washington Post, 7/30/2021.
6. For video of the interview, see: "San Francisco's Polarizing District Attorney: 'I Refuse to be Distracted'", Amanpour & Co., 7/28/2021. The quote comes at about 15:00. The transcription and punctuation are taken from the fact check; see the previous note.
7. See the short biography included in The Post's fact check; note 5, above.
8. Rachael Bale, "What is Mark Zuckerberg's Fwd.us?", KQED, 4/11/2013.
9. See: "Why People First?", FWD.us, accessed: 10/7/2021. Also, see: Close Encounters with Doublespeak of the Third Kind, 9/8/2019.
10. Peter K. Enns, et al., "What Percentage of Americans Have Ever Had a Family Member Incarcerated?: Evidence from the Family History of Incarceration Survey (FamHIS)", Socius: Sociological Research for a Dynamic World, 3/4/2019, under "Measuring Family Incarceration". Paragraphing suppressed; all-capitals in the original.
11. Ibid., under "Abstract".
12. "CNN/Kaiser Family Foundation Survey of Americans on Race", Kaiser Family Foundation, 11/2015.
13. Ibid., p. 33, question D12; all-capitals in the original.
14. "Outright", Cambridge Dictionary, accessed: 10/16/2021.
15. "Outrageously", Cambridge Dictionary, accessed: 10/18/2021.
16. For an example, see: Mau-Mauing the Fact Checkers, 10/27/2008. I've changed my opinion since I wrote this.