Let the Children Play
& Was Naomi Wolf Always Full of It?
- David Wallace-Wells, "The Kids Are Alright", New York Magazine, 7/12/2021.
The kids are safe. They always have been. It may sound strange, given a year of panic over school closures and reopenings, a year of masking toddlers and closing playgrounds and huddling in pandemic pods, that, according to the CDC, among children the mortality risk from COVID-19 is actually lower than from the flu. The risk of severe disease or hospitalization is about the same. … Most remarkably, it has been known to be true since the very earliest days of the pandemic―indeed it was among the very first things we did know about the disease. The preliminary mortality data from China was very clear: To children, COVID-19 represented only a vanishingly tiny threat of death, hospitalization, or severe disease. Yet for a year and a half we have been largely unwilling to fully believe it.
I don't think most Americans were "unwilling to fully believe it". I certainly wasn't unwilling to believe it, as I knew it over a year ago having read the relevant statistics1. Clearly, Wallace-Wells wasn't unwilling to believe it.
There is much excellent and apparently little-known information in this article, but there's one thing that bothers me about it: the author consistently and obtusely writes throughout of "we", "us", and "our". This is an annoying rhetorical tic, or perhaps I should say "trick". It starts in the last quoted sentence, above.
Who is this mysterious "we"? I'm going to treat this article as a whodunnit. In my comments below, I will point out the clues, and the guilty party will be revealed at the end.
Children now wear masks at little-league games, and at the swimming pool, and when school reopens in the fall they will likely wear masks there, too. But the kids are not at risk themselves, and never were. Now, thanks to vaccines, the vast majority of their parents and grandparents aren't any longer, either. …
Over the course of the pandemic, 49,000 Americans under the age of 18 have died of all causes, according to the CDC. Only 331 of those deaths have been from COVID―less than half as many as have died of pneumonia. … All told, 600,000 Americans have lost their lives to COVID over the course of the pandemic; just 0.05 percent of those were under the age of 18, a population that represents more than 20 percent of the country's population as a whole. …
…[I]n the depths of a pandemic as we were [last summer], individuals are not just individuals but links in a chain of transmission, which is the main reason why, for much of the last 18 months, public-health officials have worried over infections in the young―assuming they would eventually help bring the disease back to those much more vulnerable.
…We have treated the disease almost as a uniform threat as a way of encouraging uniform vigilance. The best way to stop deaths was to stop cases, went the thinking, which dovetailed naturally with every parent's intuitive caution and desire to keep their kids healthy and uninfected―and distrust, perhaps, of anyone who suggested that your child would be fine if she got sick. But whatever we told ourselves in doing so, we didn't pull those kids out of school and put them in masks for their own [sake]. We did it for the sake of others.
There's that annoying "we", again. Who treated the disease that way? This use of a non-royal "we" is a weaselly way of avoiding laying the blame where it belongs.
A survey indicated that Americans believed that the median age of those who died of COVID-19 was 55, meaning that half of those who died were below that age and half above it2. So, this is evidence that Americans actually believed the disease was a "uniform threat", unless you suppose that they intentionally lied to the survey takers. In contrast, here's the truth about "…the still under-appreciated fact of the age skew of COVID-19―even by those who know, vaguely, that the older are more vulnerable":
The important question is: How much more vulnerable? According to the CDC, the mortality risk for those 85 and above is 610 times higher than for 18-29 year olds. The number is so large it is almost hard to process. If a given number of infections among 20-somethings would produce just a single fatality, the same number of infections in 85-year-olds would produce 610. Of all the risk factors and comorbidities we read and heard so much about last spring, from race and class to obesity and COPD, [politically-correct throat-clearing omitted] the effect of age absolutely dwarfs all of them. Somehow, we could barely hear that alarm bell in the panicked pandemic din.
Again, it is supposedly "we" who couldn't hear the alarm bell, but why didn't those charged with ringing it ring it louder and longer, making sure that we did hear it?
…[M]ass vaccination in the United States has utterly changed the landscape of the pandemic: not only by protecting those who have received shots, indeed astonishingly well, but by changing the calculus for all those who haven't, by eliminating almost all of the mortality risk of the population at large. All told, 80 percent of American deaths have been among those 65 and above. According to the White House, 90 percent of American seniors are now fully vaccinated. Which means that while more cases are likely and some amount of hospitalization and death, as well, vaccines have eliminated the overwhelming share of American mortality risk, with the disease now circulating almost exclusively among people who can endure it much, much better―kids especially. The country's whole risk profile has changed. But our intuitions about risk tolerance haven't―at least not yet.
Even if people intuitively assume that diseases are equally dangerous to all ages, why didn't the public health authorities and the news media explain that this isn't true of COVID-19? Wallace-Wells writes as though the problem was that "we" were unwilling to accept the fact that most of the risk of COVID-19 was for the already sickly elderly. While there were some who pointed this out last year―including Wallace-Wells, to his credit―surveys show that people were misinformed3, despite the continuous coverage of the epidemic that drove much other news off the front page.
I've omitted a couple of paragraphs of additional statistics on the age-related effects of COVID-19; if you don't know this information, read the whole thing.
It was often said, in lamentations of American indifference at the outset of the pandemic, that the country would have taken the disease much more seriously if it hadn't spared the very young. In the year that followed, we mostly pretended it didn't.
Who is it that "pretended"? Let's not now pretend that we're all equally guilty.
Did you know that the WHO doesn't even recommend universal mask-wearing for kids younger than 12? None of this is new, and, scientifically, none of it was ever disputed, not even during the bitterest and most intense of last year's fights over pandemic policy. Nobody was debating the risk of severe disease in children―in fact whenever a Republican governor, speaking of school kids, made the comparison to the flu, media organizations would dispatch fact-checkers who invariably returned a verdict of "mostly true." What scientists were debating instead was transmission rates―whether children could catch the disease, or spread it, as readily as adults, especially in the classroom settings that became the focal point of the fight. …
But in my view, the basic disregard for the age skew of the disease looks in retrospect like the bigger oversight, in part because there was no scientific dispute. And still, painfully little was done to address it in policy. "Shouldn't we have been celebrating the fact that it doesn't affect children that much?" [Monica] Gandhi asked me. "Like, shouldn't that be something that we celebrate? I mean, it is kind of weird. You just have to look at the CDC websites to see that kids are not very much at risk."
I first wrote about the subject early last May, in an essay with the headline "COVID-19 Targets the Elderly. Why Don't Our Prevention Efforts?"4 At the time, I was told, by many people who'd know better than me, that the country simply lacked the capacity to meaningfully protect the elderly during the first spring wave….
For a while, at the beginning of the pandemic, the age skew of the disease was treated as a form of COVID-denier, right-wing propaganda―as though the inevitable implication was indifference towards deaths among the very old. But only a sociopath would draw that conclusion, as opposed to its opposite: that a portion of the American public desperately needed support and protection. By and large, they didn't get it. …
So what does this mean for the remainder of the pandemic? First, we should do what we can to actually, finally, process the basic, astounding fact of the pandemic age skew―to try to put aside our reflex to shield children from any threat of infection, to put aside the additional fear we've all felt, all year, because of the simple novelty of this disease, and to instead endeavor to see clearly the real scale of the direct threat to kids, which is and always has been minimal.
Again, Wallace-Wells writes as though this is a failure of all of us to "process" this fact. The political and public health authorities acted as though they failed to process it by keeping schools closed, forcing children to try to learn from home, and requiring them to wear masks. Some are still talking and acting this way in defiance of the scientific evidence. They are the real "COVID-deniers" if anyone is, and they are also de facto denying the efficacy of vaccines. It's unfair to blame the public when those who should know better act as though they don't know better. Wallace-Wells does this despite the fact that, in his excellent article early last year, he wrote:
…[O]ne observation from the early days of the pandemic has been confirmed again and again, in country after country: The lethality of the virus rises sharply with age. In the United States, we have spent much of the last few months enacting and debating uniform, universal public-health measures, which treat each citizen equally for the purposes of applied policy…. Our policy, by and large, has treated every person as equally at risk, but the disease doesn't treat us all equally. As we've known nearly from the start of this pandemic, but have chosen to downplay in our public messaging and public policy, COVID-19 is brutally lethal for the elderly, considerably less so for the middle-aged, and still less so for the young. The disease discriminates by age, in other words…. (Emphasis added.)
Who is the "we" here? Who chose to "downplay" it in "public messaging and public policy"? It wasn't the public itself; it was the public that was misled by the downplaying. Instead, it was those responsible for public messaging and policy, namely, politicians, health authorities, and the major news media.
- Liza Featherstone, "The Madness of Naomi Wolf", The New Republic, 6/10/2021. WARNING: Contains an undeleted expletive.
Several paragraphs of throat-clearing omitted.
[Naomi] Wolf has tweeted that she overheard an Apple employee (who had attended a "top secret demo") describing vaccine technology that can enable time travel. She has posited that vaccinated people's urine and feces should be separated in our sewage system until their contaminating effect on our drinking water has been studied. She fears that while pro-vaccine propaganda has emphasized the danger the unvaccinated pose to the vaccinated, we have overlooked how toxic the vaccinated might be. And as the journalist Eoin Higgins reports, she is headlining an anti-vaccination "Juneteenth" event this month in upstate New York. (Yes, the organizers chose that date to suggest that vaccines are slavery.) … When a public intellectual declines this far, we need to ask: Was she always full of [expletive deleted]?
Yes, she was5.
Revisiting The Beauty Myth, I found it beautifully written, accessible, and righteous. I also found it daft.
One of the elements of the book I remember as most persuasive was all the statistics. It turns out, however, that they're highly questionable. To take just one instance, Wolf gives the reader the impression that eating disorders are an existential threat to the female human. Twice within two pages, she says such disorders have increased "exponentially," but a 2012 review of historical epidemiological data since 1930 found no such thing. A 2004 academic paper demonstrated that more than two-thirds of Wolf's anorexia stats were wrong; the author coined the acronym WOLF to describe her approach: "Wolf's Overdo and Lie Factor." Citing the 2004 paper at The New York Times, Parul Sehgal singled out one harrowing example: Where Wolf placed deaths from eating disorders at 150,000 annually, the actual number at the time was closer to 50 or 60.
It didn't take until 2004 for the absurdity of this statistic to be noticed. Wolf's book was published in 1991, and Christina Hoff Sommers debunked the statistic three years later―a decade prior to 2004―in the preface to her book Who Stole Feminism?6
For comparison, 150,000 was the number of coronavirus deaths in the United States that had occurred by the end of July 2020; 38,000 is the number of car accident deaths annually in the U.S. … It's not hard to say, of course, that even 50 deaths from anorexia are too many. Yet at the best, most respected moment of her career, Wolf was reporting on a genocide that never occurred…. Rather than be shocked at how far afield from reality Wolf has wandered, it's probably time to admit she's always been wrong.
Way past time. It was obvious decades ago that she didn't know what she was talking about.
Wolf's views on vaccines led Twitter to ban her from the platform for peddling misinformation. I don't think anyone should be banned from Twitter for this reason: What counts as "fake news" can be a matter on which reasonable people may disagree, and I'm sure Twitter's view of the world doesn't much resemble mine, either. …
I agree that Nitwitter, and other such platforms, should stay out of the censorship business. The proper way to deal with the Wolfs of this world is to debunk them, not to turn them into martyrs through censorship.
The following lengthy article is well worth reading in full. I've excerpted the most important points below, including the one that I take issue with, but I've had to leave out most of the article.
- I first mentioned the minimal risk to children at the end of the following entry: Mayday! Mayday!, 5/1/2020. If I had realized a year ago that people would still not understand this simple fact, I would have put more emphasis on it.
- See: Sonal Desai, "On My Mind: They Blinded Us From Science", Franklin Templeton, 7/29/2020.
- For previous entries on public ignorance and misunderstanding of the risks of COVID-19, see:
- See: David Wallace-Wells, "COVID-19 Targets the Elderly. Why Don't Our Prevention Efforts?", New York Magazine, 5/13/2020.
- Christina Hoff Sommers, Who Stole Feminism? How Women Have Betrayed Women (1994), pp. 11-12. I discussed this statistic previously here: Be your own fact checker!, 2/15/2012.
Disclaimer: I don't necessarily agree with everything in these articles, but I think they're worth reading as a whole. In abridging them, I have sometimes changed the paragraphing and rearranged the order of the excerpts in order to emphasize points.
Honey, I shrunk my brain!
Most medical research about coffee and health suggests that drinking coffee is good for you, and can even lengthen your life1. Unfortunately, almost all such research is observational rather than experimental, which means that it cannot establish causation. Usually, the researchers simply compare coffee drinkers and abstainers. If people who drink coffee live longer on average than those who don't drink it, or drink less of it, then the study shows an association between coffee drinking and longer life.
However, such a relationship is not necessarily causal: it doesn't show that drinking coffee causes longer life. Any two groups of people differ in many ways, and it may be some other difference between the two groups that accounts for the association. For instance, it may be that some chronically ill people avoid coffee and also tend to die younger than healthy coffee-drinkers.
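The confounding scenario just described can be made concrete with a small simulation. Everything in it is invented for illustration (the illness rate, the coffee-drinking probabilities, and the lifespans are all made-up numbers): in this model, coffee has no effect whatsoever on lifespan, yet coffee drinkers still live longer on average, simply because chronic illness both discourages coffee drinking and shortens life.

```python
import random

random.seed(0)

people = []
for _ in range(100_000):
    ill = random.random() < 0.2          # 20% are chronically ill (invented rate)
    # Illness discourages coffee drinking...
    drinks_coffee = random.random() < (0.2 if ill else 0.6)
    # ...and shortens life; note that coffee itself has no effect here.
    lifespan = random.gauss(70 if ill else 80, 5)
    people.append((drinks_coffee, lifespan))

def mean_lifespan(drinker):
    spans = [life for d, life in people if d == drinker]
    return sum(spans) / len(spans)

print(f"coffee drinkers: {mean_lifespan(True):.1f} years")
print(f"non-drinkers:    {mean_lifespan(False):.1f} years")
```

Run it and the drinkers come out a couple of years ahead, an association produced entirely by the confounder, since coffee does nothing in this model.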
Here's a recent example of the occasional headline-grabbing study that goes in the opposite direction:
Too much coffee can cause your brain to shrink, raises risk of dementia, study finds2
Fortunately, this study is also observational, so it can't establish what the headline claims, namely, that coffee can cause the brain to shrink. Moreover, the study didn't show that anyone's brain actually shrank. The researchers compared groups of people based on how much coffee they reported drinking, and those who drank six or more cups a day had smaller brain volumes than those who drank less. So, for all that we can tell from this study, those who drank more coffee simply had lower-volume brains all along. Perhaps there's something about having a lower-volume brain that leads to higher-volume coffee intake.
In any case, the researchers and the author of the press release3 for the study were careful not to claim anything more than an association between higher coffee consumption and lower brain volume. Also, there's nothing about shrinkage, significant or not, in the paper's abstract4 or the release. The brain shrinkage claim comes from news articles that otherwise just rewrite the press release.
Sadly, this is a typical example of most health and science reporting nowadays. In order not to be misled, the reader must disregard the tabloid-style headline, then read between the lines of the underlying article in order to find out what the reported study actually found.
Update (8/6/2021): Research Check published an article analyzing this study5 that appeared after mine, above. It's a careful and thorough job, and even peer-reviewed! You can check my work by comparing the two.
- "Too much coffee can cause your brain to shrink, raises risk of dementia, study finds", Study Finds, 7/24/2021.
- "Excess coffee: A bitter brew for brain health", University of South Australia, 7/22/2021.
- Kitty Pham, et al., "High coffee consumption, brain volume and risk of dementia and stroke", Taylor & Francis Online, 7/24/2021. The abstract of the study.
- Lachlan Van Schaik, "Could drinking 6 cups of coffee a day shrink your brain and increase dementia risk?", Research Check, 8/3/2021.
Disclaimer: I am not a physician, nor do I play one on television. I do drink coffee, but not over six cups a day. The above entry is offered for information and entertainment purposes only, and not intended as medical advice. If you experience significant brain shrinkage, see your personal physician immediately.
Errors of Unusual Magnitude
As I mentioned earlier this year1, the American Association for Public Opinion Research (AAPOR) created a "task force" to study the performance of last year's general election polls2. Its report is now out3, but I haven't had a chance to read it yet.
We already know that last year's polls were bad4, but just how bad were they? One thing that the new report did is to quantify how poorly the polls performed. The Washington Post's Dan Balz reports: "Public opinion polls in the 2020 presidential election suffered from errors of 'unusual magnitude,' the highest in 40 years for surveys estimating the national popular vote and in at least 20 years for state-level polls, according to a study conducted by [AAPOR].5"
I've mentioned in the past that very many polls are taken in a presidential election year, but I've never known just how many. Now, the AAPOR report tells us that the group examined 529 presidential polls from last year. I bring up the large number of polls not just because I think it's an absurd waste of time, effort, and money―though I do―but because given the usual confidence level of such polls, we can expect that 5% of them will be wrong by greater than the usual margin of error6. So, we could expect around 26 of last year's polls to be off by more than three percentage points.
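The arithmetic behind that estimate is simple enough to sketch: polls are conventionally reported at a 95 percent confidence level, so about 5 percent of well-conducted polls will miss by more than their stated margin of error through sampling chance alone.

```python
# Expected number of polls missing beyond their margin of error purely
# by chance, assuming each of the 529 polls is an independent survey
# reported at the conventional 95% confidence level.
n_polls = 529
confidence = 0.95
expected_outliers = n_polls * (1 - confidence)
print(round(expected_outliers))  # -> 26
```

So roughly 26 "bad" polls are expected even if every pollster does everything right, which is why a handful of outliers is no evidence of systematic error, while the across-the-board skew the report describes is.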
Also, it was clear that the polls over-estimated support for Democratic candidates, but by how much? Based on the Real Clear Politics average of national polls, I calculated last year that they overstated Biden's popular-vote margin by 4.3 percentage points. According to the AAPOR report, the overstatement was only 3.9 points, so my calculation was not too far off.
According to The Post's report, the AAPOR report seems to have ruled out most explanations of the large error except:
One possible explanation is that Republicans who responded to surveys voted differently than Republicans who choose not to respond to pollsters. The task force said this was a reasonable assumption, given declining trust in institutions generally and Trump's repeated characterizations of most polls by mainstream news organizations as fake or faulty. "That the polls overstated Biden's support more in Whiter, more rural, and less densely populated states is suggestive (but not conclusive) that the polling error resulted from too few Trump supporters responding to polls," the report states. "A larger polling error was found in states with more Trump supporters."
The report makes an excellent point which has been an ongoing theme of these "Poll Watch" entries:
The report emphasizes that though often quite accurate, polls are not as precise as sometimes assumed and therefore given to misinterpretation, especially in the most competitive races. "Most pre-election polls lack the precision necessary to predict the outcome of semi-close contests," the report states. "Despite the desire to use polls to determine results in a close race, the precision of polls is often far less than the precision that is assumed by poll consumers."
When I've had a chance to read the whole report I'll update this entry or write a new one if I discover anything else in it worth writing about.
- What biased last year's polls?, 4/27/2021.
- "AAPOR Convenes Task Force to Formally Examine Polling Performance During 2020 Presidential Election", American Association for Public Opinion Research, 2/13/2020.
- Josh Clinton, et al., "Task Force on 2020 Pre-Election Polling: An Evaluation of the 2020 General Election Polls", American Association for Public Opinion Research, 7/19/2021.
- Post Mortem, 11/11/2020.
- Dan Balz, "2020 presidential polls suffered worst performance in decades, report says", The Washington Post, 7/18/2021. Subsequent quotes from this article; paragraphing suppressed.
- For the confidence level and margin of error of polls, see my: How to Read a Poll.
There ain't no such thing as free knowledge.
Quote: "Free knowledge from an encyclopedia―that would be a glorious thing. It is a shame that it is impossible. Knowledge is something that exists in minds, not texts. Reading a text will give you some ground for belief; it will not, by itself, actually give you knowledge. Still, we can speak loosely and say that encyclopedias contain what purports to be knowledge, and that is enough for me to love encyclopedias."1
Title: Essays on Free Knowledge
Comment: As the title of this newish book indicates, this is a collection of essays. I've already read some of them but am interested in reading the remainder.
Subtitle: The Origins of Wikipedia and the New Politics of Knowledge
Comment: I haven't written much about Wikipedia in recent years, but I've been a frequent critic of it on this weblog for over a decade. When it first began, I had some hope that it might turn out well, but was always skeptical of the approach taken to constructing it. Unfortunately, my skepticism appears to have been borne out by its subsequent development. I'll get into what's wrong with the approach later.
Author: Larry Sanger
Comment: Sanger is a philosopher and co-founder of Wikipedia, so he knows where the bodies are buried.
Summary: According to the book's table of contents, like all of Gaul, it is divided into three parts:
- The history and theory of Wikipedia
These are the questions that Sanger addresses in this part:
What makes an open, online collaboration succeed? … Should media, textbooks, and above all reference works aim to be neutral―or should they instead aim at what their editors claim is the objective truth? How should we organize people who are difficult to reconcile, who have different interests and agendas? How do we resolve disputes among anonymous people in open communities?2
Comment: I don't know the answers to any of these questions except the second: assuming that "neutrality" is not just another name for "objectivity", I think that reference works at least should aim for the truth. As far as I'm concerned, all truth is objective; the phrase "subjective truth" is just a fancy way of referring to an opinion or mere belief.
The first essay in this section and, thus, in the book, is one that I've already read: "The Early History of Nupedia and Wikipedia: A Memoir". This essay was written and published in 2005, when Wikipedia was still young, and its tone is more positive than Sanger's more recent writings. Perhaps this is because he was still too close to it to view it objectively, or perhaps time has simply not been kind to it. In a footnote to this essay added to the book, he writes: "By 2019, I had come to the view that Wikipedia is simply 'broken'"3. That doesn't say whether it took until two years ago for Sanger to realize that it was "broken", or whether it wasn't broken until then.
I don't think that its early history is important to understanding what's "broken" about Wikipedia, nor does it show us how to fix it. It shows how we got here, but it doesn't show how to get out of here. For that reason, unless you're specifically interested in its history, I would suggest skipping over this essay, which is not to say that this detailed account isn't interesting.
The last essay in this section, "Why Wikipedia Must Jettison Its Anti-Elitism", is another that I've already read. It deals with the issue in which I'm most interested, namely, expertise. Many of Wikipedia's supporters seem to believe that you don't need experts to produce an encyclopedia: that if you get enough ignoramuses together, they will produce knowledge. This sounds rather like the old probability theory chestnut that if you get enough monkeys typing away, eventually they'll produce the 11th edition of the Encyclopaedia Britannica. That's a sarcastic way of putting it, but the question is serious: how do you get knowledge out of ignorance?
- The politics of internet knowledge
In an age of instant answers from collectively-built databases, should we care about accumulating individual knowledge, or are mere information and collective knowledge good enough? What sort of special role, if any, do experts deserve in declaring "what we all know"? Is individual knowledge, built from books and individual study, somehow outmoded?4
Comment: To address the last question first: I don't see how "collective knowledge" can be constructed without individual knowledge from which to construct it. By an "expert", all I mean is a knowledgeable person, and not necessarily someone with a particular degree. If you know every Pokémon character and its abilities, then you're a Pokémon expert.
The answer to the last question answers the first. As for the second, I'm not sure what it's asking.
- Freer knowledge
In the final part I include three recent essays bemoaning the fact that free knowledge is in dire straits, now that, like social media, Wikipedia has abandoned neutrality and is used as a tool for social manipulation. … I conclude, in a brand new essay, that free information and knowledge on the Internet is under attack, and I ask how we can save it.5
Comment: Apparently, this section is at least partly promoting Sanger's new project: the Encyclosphere. I gather that the goal is to create an alternative internet encyclopedia that lacks the faults of Wikipedia. I wish the project well and would love to have a superior alternative to Wikipedia, which I would gladly use6, but I doubt it will work out. The problem is that Wikipedia has already grown so big that it's sucked all of the air out of the online encyclopedia reading room. Like such platforms as YouTube, Twitter, and Facebook, there's really only room for one such entity on the internet, and for better or worse we're probably stuck with it. I hope I'm wrong about that.
General Comment: As mentioned above, I was somewhat skeptical of Wikipedia as soon as I heard of it, but was willing to give it a chance. My skepticism soon turned into criticism as I began reading it, especially in areas in which I'm an expert.
How can you test a reference work or other source of information for reliability? Choose a topic about which you are already knowledgeable, preferably to the level of expertise, then see what the source has to say about it. So, I read Wikipedia entries on logic, including those on logical fallacies. Not only were some of these entries inaccurate, some didn't even make sense. In addition, many defenders of Wikipedia claim that errors are quickly corrected, whereas some of the mistakes that I noticed persisted for years or are still present7.
Sanger seems to have followed much the same path as I took, though more slowly. However, I get the impression from his more recent essays that he's now passed me in his skepticism, though that may be because I simply haven't paid much attention recently.
Finally, some comments about Sanger's writing: he's a philosopher who doesn't write like a philosopher writing for other philosophers. So, he spends little time referring to the views of famous philosophers or citing the philosophical literature in a way designed to impress other philosophers. This is not to say that his views on knowledge and expertise have been dumbed down, but that he writes about these topics so clearly that I think any intelligent person will be able to understand him.
Publication Date: 2020
Comment: This book is from last year, obviously, but I only found out about it this year.
The Blurbs: There are few blurbs for this book, and those it has are mainly descriptive of who Sanger is.
Disclosure & Disclaimer: I've never met Sanger in the flesh, but belonged to an email discussion list on systematic philosophy he ran back in the '90s. This is a newish book and I haven't read it yet, so can't review or recommend it, but its topic interests me and may also interest Fallacy Files readers.
- P. ix; all page number citations are to the new book.
- P. x.
- P. 6.
- Pp. x-xi.
- P. xi.
- I use the online version of the Encyclopedia Britannica, and only fall back on Wikipedia for information on popular culture that Britannica doesn't cover, and for which accuracy is not so important.
- As an example of an entry that contains novice errors, see: Wikipedia Watch, 10/22/2008. Interestingly, the "Talk" page for the Wikipedia entry discussed contains a good explanation of some of what's wrong with it, but the entry is still uncorrected after over a dozen years.
A Second Meeting of the New Logicians' Club
On this Independence Day, when you're through marching in a parade, eating hot dogs, and watching fireworks, here's a puzzle you can while away some of your free time on.
After attending my first meeting of the New Logicians' Club as a guest1, I decided to join. On the night of my first meeting as a member, the club was again playing the truth-tellers and liars game, which was the game where every member of the club was randomly assigned the role of either a truth-teller or a liar and required to answer every direct question accordingly.
Unfortunately, I arrived late for the meeting and, as a result, all the other members had already received their assignments as either truth-tellers or liars for the evening, so I didn't know who was what. Thankfully, I was assigned the role of truth-teller, and everything I tell you about that evening is true2.
The dinner had already started when I arrived and was seated at a round table with four other members of the club. I asked the member seated directly across from me―whose name was Arnauld, according to a tag on his lapel―whether he was a truth-teller or liar for the evening. He mumbled something inaudible to me because his mouth was full of food. Turning to the member seated next to him, whose name was apparently Bolzano, I asked what Arnauld had said.
"Arnauld said that he's not a liar", Bolzano replied. The member sitting next to me, whose name was Church, whispered to me: "But Arnauld was lying."
The fourth member at my table, named De Morgan, added: "Arnauld and Church are either both truth-tellers or both liars."
Finally, Arnauld swallowed his food and was able to speak audibly: "That's not true!" he blurted, glaring across the table at De Morgan.
How confusing! Can you help me determine which logicians were truth-tellers and which liars?
Extra Credit: What were the first names of those four logicians?
- Arnauld: truth-teller
- Bolzano: truth-teller
- Church: liar
- De Morgan: liar
Explanation: We saw in the first puzzle at the club1 that any member asked directly whether he or she is a truth-teller or a liar will always claim to be a truth-teller, never a liar. Therefore, Bolzano was telling the truth when he said that Arnauld denied being a liar.
We also learned from the previous puzzle that a member of the club can only accuse another member of being a liar if the two members are opposites, that is, one is a truth-teller and one is a liar. Since Church accused Arnauld of lying, this means that they are opposites. However, De Morgan claimed that they were either both truth-tellers or both liars, so De Morgan is a liar. Finally, since Arnauld denied what De Morgan had just said, he must be a truth-teller, which makes Church a liar.
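If you'd rather not reason it out by hand, the deduction can be checked by brute force: there are only sixteen ways to assign the two roles to the four logicians, and exactly one of them makes every statement come out right. The following Python sketch (the encoding of the statements is my own, not part of the puzzle) confirms the solution is unique:

```python
from itertools import product

# Each logician is either a truth-teller (True) or a liar (False).
# A truth-teller's statements are all true; a liar's are all false,
# so a role must equal the truth-value of that member's statement.
solutions = []
for A, B, C, D in product([True, False], repeat=2 * 2):
    # Bolzano: "Arnauld said that he's not a liar." Any member, asked
    # directly, denies being a liar, so Bolzano's report is true.
    bolzano_ok = B == True
    # Church: "Arnauld was lying", i.e., Arnauld is a liar.
    church_ok = C == (not A)
    # De Morgan: "Arnauld and Church are either both truth-tellers
    # or both liars."
    demorgan_ok = D == (A == C)
    # Arnauld: "That's not true!", denying De Morgan's statement.
    arnauld_ok = A == (A != C)
    if bolzano_ok and church_ok and demorgan_ok and arnauld_ok:
        solutions.append((A, B, C, D))

# Exactly one assignment survives: Arnauld and Bolzano are
# truth-tellers; Church and De Morgan are liars.
print(solutions)  # → [(True, True, False, False)]
```

Note that Arnauld's mumbled first statement imposes no constraint at all, since both truth-tellers and liars deny being liars; it is Church's accusation and Arnauld's later outburst that pin everything down.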
Extra Credit Solution: The four logicians were named: Antoine Arnauld, Bernard Bolzano, Alonzo Church, and Augustus De Morgan. If you were able to answer this question correctly, you are definitely New Logicians' Club material.
Disclaimer & Disclosure: Not everything I say is a lie, but this puzzle is. There is no New Logicians' Club, as far as I know, and even if there is I'm not a member.
- For the first meeting of the club, see: A Meeting of the New Logicians' Club, 5/30/2021
- Of course, if I were a liar, I would say the same thing!
Perpetrate or Perpetuate
Spell-checking programs are useful against some types of misspelling, but they won't catch everything. In particular, they won't catch a misspelling that just happens to spell a different word than the one intended. For example, the word "led", which is the past tense of the verb "to lead", is often misspelled as "lead"1. Also, "parity" and "parody" are occasionally confused, and a spell checker probably won't notice2. Nonetheless, a good spell checker will catch most common misspellings, freeing you up to look for the rarer ones.
The words "perpetrate" and "perpetuate" are so similar in spelling, differing by only one letter, that they are difficult to distinguish at a glance. However, their meanings are very different: "to perpetrate" means to commit a crime, or some other bad action3. In contrast, "to perpetuate" means to make something perpetual, that is, to cause it to continue indefinitely4. Since both words are transitive verbs, it's unlikely that even a program that checks grammar will prevent you from confusing them. Given that both words are uncommon, you might expect that confusing them would be even less common, but it's common enough to have been warned against in at least two usage books5.
A little over a year ago, I noticed the following sentence in a professionally published book6: "One can even book a cabin on the annual Conspira-Sea Cruise, which allows passengers to not only heal from all of the conspiracies that have been perpetuated upon them but also watch for alien visitors in the night sky.7" The supposed conspiracies were allegedly perpetrated on the passengers, not perpetuated.
Recently, I came across the following confusing sentence in another book from a different professional publisher: "Guys like Omar helped bring fresh clean skins into the jihadi ranks and inspired those at home who were unable to get to places like Somalia, Yemen, Iraq, or Pakistan but were willing and able to perpetuate violence locally.8" With "perpetrate" substituted for "perpetuate", it's less confusing.
Please don't perpetuate the perpetration of this peccadillo.
Update, 7/19/2021: I was just doing some research on the Tamara Rand hoax of 1981 when I came across the following sentence in a statement by a television host who participated in it: "I have perpetuated a hoax on the public and feel very much ashamed.9" The hoax was perpetrated, not perpetuated, since it was quickly exposed. It's surprising to come across another example of this seemingly rare mistake a little more than two weeks after writing the above entry. Also, in all three of these examples the mistake runs in the same direction, namely, substituting the incorrect "perpetuate" for "perpetrate".
- See: Get the "Lead" Out, 2/5/2007.
- Parity or Parody, 3/18/2021.
- "Perpetrate", Cambridge Dictionary, accessed: 7/1/2021.
- "Perpetuate", Cambridge Dictionary, accessed: 7/1/2021.
- Bill Bryson, Bryson's Dictionary of Troublesome Words: A Writer's Guide to Getting it Right (2002)
- Harry Shaw, Dictionary of Problem Words and Expressions (Revised Edition, 1987)
- Conspiracy Theories: A Complete-Enough Picture, 6/17/2020.
- Joseph E. Uscinski, Conspiracy Theories: A Primer (Rowman & Littlefield, 2020), p. 5.
- Clint Watts, Messing With the Enemy: Surviving in a Social Media World of Hackers, Terrorists, Russians, and Fake News (Harper, 2018), p. 4; emphasis added.
- Myram Borders, "Hollywood psychic Tamara Rand's prediction of the attempted assassination…", United Press International, 4/5/1981.