
WEBLOG
July 5th, 2022 (Permalink)
Crack the Combination III
The combination of a lock is three digits long. The following are some incorrect combinations, each of which has one correct digit though it is in the wrong position:
- 283
- 625
- 032
- 368
Can you determine the correct combination from the above clues?
Solution: 850
Explanation: Both 2 and 3 can be ruled out since each appears in three of the incorrect combinations in each of the three possible positions. Thus, 8 must be in the combination, by clue 1, but it can't be in the middle position or in the last position, by clue 4, so it's in the first position. Hence, 6 can be ruled out, also by clue 4. By a process of elimination, this leaves only 0 and 5. 5 can't be in the last position, by clue 2, so it must be in the middle. This leaves 0 in the last position.
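The elimination argument above can also be double-checked by brute force. The sketch below (in Python, my choice; the constraint encoding is my reading of the clues) tries every three-digit combination and keeps those for which each clue has exactly one digit that appears in the answer, with no clue digit sitting in its correct position:

```python
from itertools import product

# Each incorrect combination has exactly one correct digit,
# and that digit is in the wrong position.
CLUES = ["283", "625", "032", "368"]

def consistent(answer: str, clue: str) -> bool:
    # No clue digit may sit in its correct position...
    if any(a == c for a, c in zip(answer, clue)):
        return False
    # ...and exactly one position of the clue holds a digit
    # that appears somewhere in the answer.
    return sum(1 for c in clue if c in answer) == 1

solutions = ["".join(digits) for digits in product("0123456789", repeat=3)
             if all(consistent("".join(digits), clue) for clue in CLUES)]
print(solutions)  # → ['850']
```

Exhausting all 1,000 possibilities confirms that 850 is the unique combination consistent with the clues.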
WARNING: May cause mind-boggling.
July 2nd, 2022 (Permalink)
A Warning Sign1
Can you see what's wrong with the following sign? It took me a minute or two to spot it, so you might want to take some time to study it before reading on.

What is "visual damage"? Would it be damage to your eyes after getting squirted in the face with boiling water?
According to the sign, the "visual damage" would be to the "tap top assembly", whatever that is, rather than to your eyes. Presumably, what the sign maker meant was "visible" damage, that is, damage to the tap top assembly―or should that be the "top tap assembly"?―that is severe enough for you to see. Apparently, if the damage is not bad enough to see, you just shouldn't worry about it.
Surely, "visible damage" would have been a clearer and more precise wording, but is it actually wrong to use "visual" to mean "visible"? In general, "visual" as an adjective means "of or pertaining to vision"2, and visibility certainly pertains to vision. Some dictionaries give "visible" as one of the possible meanings of "visual"3. Moreover, none of the reference books on common errors that I usually check warns against confusing "visual" and "visible", though that could be because the confusion is uncommon.
One of the few dictionaries that I've found that argues against the "visible" meaning of "visual" is H. W. Fowler's dictionary of usage. Here's the entirety of Fowler's entry:
Visible means capable of being seen; visual means pertaining to seeing. The visual arts are concerned with the production of the beautiful in visible form, visually appreciated. This differentiation is sometimes obscured by the misuse of visual for visible, for which indeed dictionary authority can be found. But the differentiation is worth preserving. For instance the wrong word is used in the descriptive phrase Diagnosis by visual symptoms; the method of diagnosis is visual, but the symptoms are visible.4
Furthermore, while it's difficult to spot the mistake―if it is one―in the warning sign, in other contexts it's misleading to use "visual" to mean "visible". For instance, suppose that you were scalded by the hot water and complained that you couldn't see the sign because something obscured it. Would you say that the sign was "not visual" or that it was "not visible"? Surely, all signs are visual or they wouldn't be signs, but some are not visible because of obstructions.
Similarly, there's a recent movie called The Invisible Man, about a man who could not be seen. Could it have been titled "The Non-Visual Man", instead? That suggests to me a blind man, who cannot see, rather than one who cannot be seen.
Logically, the relation between "visual" and "visible" is that the former is the more general word, and the latter more specific. It may not be, strictly speaking, wrong to call damage to the hot water dispenser "visual", but "visible" is more specific, and thus more informative. So, whether or not it's a mistake to use "visual" to mean "visible", the distinction between the two words is worth observing.
Notes:
- Thanks to Lawrence Mayes for calling this issue to my attention, supplying the photograph of the sign, and for versions of the examples I use.
- "Visual", Cambridge Dictionary, accessed: 7/1/2022.
- For instance, the online Merriam-Webster dictionary gives "visible" as the third meaning of "visual"; see: "Visual", Merriam-Webster Dictionary, accessed: 7/1/2022.
- H. W. Fowler, A Dictionary of Modern English Usage (2nd edition, 1965), revised & edited by Sir Ernest Gowers; under "visible, visual."

June 30th, 2022 (Revised: 7/2/2022) (Permalink)
When More is Less & Who are the Experts?
- Gary Smith, "Believe in Science? Bad Big-Data Studies May Shake Your Faith", Bloomberg, 4/26/2022
The cornerstone of the scientific revolution is the insistence that claims be tested with data, ideally in a randomized controlled trial. … Today, the problem is not the scarcity of data, but the opposite. We have too much data, and it is undermining the credibility of science.
Luck is inherent in random trials. In a medical study, some patients may be healthier. In an agricultural study, some soil may be more fertile. In an educational study, some students may be more motivated. Researchers consequently calculate the probability (the p-value) that the outcomes might happen by chance. A low p-value indicates that the results cannot easily be attributed to the luck of the draw.
How low? In the 1920s, the great British statistician Ronald Fisher said that he considered p-values below 5% to be persuasive and, so, 5% became the hurdle for the “statistically significant” certification needed for publication, funding and fame.
It is not a difficult hurdle. Suppose that a hapless researcher calculates the correlations among hundreds of variables, blissfully unaware that the data are all, in fact, random numbers. On average, one out of 20 correlations will be statistically significant, even though every correlation is nothing more than coincidence.
There's nothing magical about Fisher's 5% p-value for statistical significance. If, for some reason, you are checking so many variables for correlations, you need to use a lower p-value as the threshold for significance, say, 1%―this is known as adjusting for multiple comparisons. See the Multiple Comparisons Fallacy for more.
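A quick simulation makes Smith's point concrete. This sketch (Python; the number of variables, sample size, and the approximate 5% critical value of |r| for n = 50 are my choices) correlates pairs of purely random variables and counts how many clear the conventional significance hurdle:

```python
import random
from math import sqrt

random.seed(42)

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

N_VARS, N_OBS = 20, 50
data = [[random.random() for _ in range(N_OBS)] for _ in range(N_VARS)]

# Approximate two-tailed 5% critical value of |r| for n = 50.
R_CRIT = 0.279

pairs = [(i, j) for i in range(N_VARS) for j in range(i + 1, N_VARS)]
n_sig = sum(1 for i, j in pairs
            if abs(pearson_r(data[i], data[j])) > R_CRIT)
print(f"{n_sig} of {len(pairs)} pairs 'significant' at the 5% level")
```

With 190 pairwise tests of pure noise, roughly 5% come out "significant" by chance alone; a multiple-comparisons adjustment (e.g., Bonferroni: testing each pair at 0.05/190) raises the bar accordingly.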
Real researchers don’t correlate random numbers but, all too often, they correlate what are essentially randomly chosen variables. This haphazard search for statistical significance even has a name: data mining. As with random numbers, the correlation between randomly chosen, unrelated variables has a 5% chance of being fortuitously statistically significant. Data mining can be augmented by manipulating, pruning and otherwise torturing the data to get low p-values.
The phrase "data mining" isn't always a pejorative: often it's used to refer to searching large sets of data for patterns. However, manipulating, pruning, and torturing data is usually called "p-hacking", which is always pejorative.
To find statistical significance, one need merely look sufficiently hard. Thus, the 5% hurdle has had the perverse effect of encouraging researchers to do more tests and report more meaningless results. Thus, silly relationships are published in good journals simply because the results are statistically significant. …
A team led by John Ioannidis looked at attempts to replicate 34 highly respected medical studies and found that only 20 were confirmed. The Reproducibility Project attempted to replicate 97 studies published in leading psychology journals and confirmed only 35. The Experimental Economics Replication Project attempted to replicate 18 experimental studies reported in leading economics journals and confirmed only 11.
A scientific result has no claim to our belief until it has been independently replicated, so beware of those who cite "the latest study" or "the most recent research", since the latest research has probably not been replicated.
I wrote a satirical paper that was intended to demonstrate the folly of data mining. I looked at Donald Trump’s voluminous tweets and found statistically significant correlations between: Trump tweeting the word “president” and the S&P 500 index two days later; Trump tweeting the word “ever” and the temperature in Moscow four days later; Trump tweeting the word “more” and the price of tea in China four days later; and Trump tweeting the word “democrat” and some random numbers I had generated.
I concluded — tongue as firmly in cheek as I could hold it — that I had found “compelling evidence of the value of using data-mining algorithms to discover statistically persuasive, heretofore unknown correlations that can be used to make trustworthy predictions.”
I naively assumed that readers would get the point of this nerd joke: Large data sets can easily be mined and tortured to identify patterns that are utterly useless. I submitted the paper to an academic journal and the reviewer’s comments demonstrate beautifully how deeply embedded is the notion that statistical significance supersedes common sense: “The paper is generally well written and structured. This is an interesting study and the authors have collected unique datasets using cutting-edge methodology.”
It is tempting to believe that more data means more knowledge. However, the explosion in the number of things that are measured and recorded has magnified beyond belief the number of coincidental patterns and bogus statistical relationships waiting to deceive us.
If the number of true relationships yet to be discovered is limited, while the number of coincidental patterns is growing exponentially with the accumulation of more and more data, then the probability that a randomly discovered pattern is real is inevitably approaching zero.
The problem today is not that we have too few data, but that we have too much data, which seduces researchers into ransacking it for patterns that are easy to find, likely to be coincidental, and unlikely to be useful.
Okay, but let's not throw out the "big data" with the p-hacking.
The following article is rather long, but worth reading as a whole, especially if you're interested in the problem of how to evaluate expertise. I have a great deal to say about that subject, and will frequently interrupt the author below to comment. So, you might want to read the whole article before perusing my commentary.
- Oliver Traldi, "With All Due Respect to the Experts", American Compass, 5/20/2022
A few weeks before Donald Trump’s inauguration as President, the New Yorker published a cartoon depicting a mustached, mostly bald man, hand raised high, mouth open in a sort of improbable rhombus, tongue flapping wildly within, saying: “These smug pilots have lost touch with regular passengers like us. Who thinks I should fly the plane?” The tableau surely elicited many a self-satisfied chuckle from readers disgusted by the populist energy and establishment distrust that they perceived in Trump’s supporters.
The cartoon is reminiscent of Plato's parable of the ship of state from The Republic:
Imagine…a ship in which there is a captain who is taller and stronger than any of the crew, but he is a little deaf and has a similar infirmity in sight, and his knowledge of navigation is not much better. The sailors are quarrelling with one another about the steering―every one is of opinion that he has a right to steer, though he has never learned the art of navigation and cannot tell who taught him or when he learned, and will further assert that it cannot be taught, and they are ready to cut in pieces any one who says the contrary. They throng about the captain, begging and praying him to commit the helm to them; and if at any time they do not prevail, but others are preferred to them, they kill the others or throw them overboard, and having first chained up the noble captain's senses with drink or some narcotic drug, they mutiny and take possession of the ship and make free with the stores; thus, eating and drinking, they proceed on their voyage in such a manner as might be expected of them. Him who is their partisan and cleverly aids them in their plot for getting the ship out of the captain's hands into their own whether by force or persuasion, they compliment with the name of sailor, pilot, able seaman, and abuse the other sort of man, whom they call a good-for-nothing; but that the true pilot must pay attention to the year and seasons and sky and stars and winds, and whatever else belongs to his art, if he intends to be really qualified for the command of a ship, and that he must and will be the steerer, whether other people like or not―the possibility of this union of authority with the steerer's art has never seriously entered into their thoughts or been made part of their calling.1
Traldi's criticism of the cartoon―and, presumably, of Plato's parable, as well, if he were to criticize it―is that it involves a weak analogy:
But what exactly is the joke here? Citizens in a democracy are not akin to airline passengers, buckled quietly into their seats and powerless to affect change, their destinations and very lives placed in the hands of professionals guarded by a reinforced door up front. Even brief reflection reveals the cartoonist’s analogy to be comparing like to unlike.
That none of us thinks we know better than a plane’s captain, yet we often think we know better than experts in matters of politics, suggests differences between those domains. And it highlights a vexing problem for modern political discourse and deliberation: We need and value expertise, yet we have no foolproof means for qualifying it. To the contrary, our public square tends to amplify precisely those least worthy of our trust. How should we decide who counts as an expert, what topics their expertise properly addresses, and which claims deserve deference?
We all rely upon experts. When something hurts, we consult a doctor, unless it’s a toothache, in which case we go to a dentist. We trust plumbers, electricians, and roofers to build and repair our homes, and we prefer that our lawyers and accountants be properly accredited. Some people attain expertise through training, others through experience or talent. … In all these cases, our reliance on expertise means suspending our own judgment and placing our trust in another—that is, giving deference. But we defer in different ways and for different reasons. The pilot we choose not to vote out of the cockpit has skill, what philosophers sometimes call “knowledge how.” We need the pilot to do something for us, but if all goes well we need not alter our own beliefs or behaviors on his say so. At the other extreme, a history teacher might do nothing but express claims, the philosopher’s “knowledge that,” which students are meant to adopt as their own beliefs. Within the medical profession, performing surgery is knowledge-how while diagnosing a headache and recommending two aspirin as the treatment is closer to knowledge-that.
"Knowledge how"―or, more familiarly, "know-how"―is knowledge of how to do something, that is, a skill. "Knowledge that" is propositional knowledge: knowing the truth or falsity of a statement. Both types of knowledge are important, but philosophers have tended to ignore know-how in favor of propositional knowledge. Plato may have been the first to do so, since his teacher Socrates often used craftsmen as paradigms of knowledge.
Traldi now asks an important question concerning expertise:
…[H]ow are those without expertise to determine who has it? Generally, we leave that determination to each individual. A free society and the free market allow for widely differing judgments about who to trust about what, with credentialing mechanisms in place to facilitate signaling and legal consequences for outright fraud. …
In the public square, we have historically placed our trust in a similar phenomenon, the wisdom of crowds. In the early 19th century, the Marquis de Condorcet codified the thesis that citizens who know less than experts can together generate a system that knows more. Condorcet’s Jury Theorem2 demonstrates a simple consequence of the law of large numbers: If voters are more likely to vote correctly than incorrectly and their votes are statistically independent of each other, then as the number of voters increases, the probability that voters get the right result approaches 100%. In large enough numbers, thinking for themselves, the vox populi will make the right decisions.
There are a lot of problems with applying this theorem to juries, let alone to democracies as a whole: What is it to "vote correctly" on a jury? What basis do we have for thinking that individual jurors are more likely to vote correctly than incorrectly? Isn't it likely that some jurors are more likely to vote correctly, but others incorrectly? Moreover, the same theorem leads to the conclusion that if the jurors are more likely to vote incorrectly, then as the number of jurors increases, the probability of an incorrect verdict approaches 100%.
If we took the application of this theorem to juries seriously, then we should increase the size of juries, since the larger the jury the more likely that its judgment will be correct―always assuming that each juror is more likely to be correct than incorrect. Contemporary juries usually have twelve members, sometimes half that, but the juries in ancient Athens were much larger: in fact, 500 jurors convicted Socrates and sentenced him to death3.
Also, the assumption that the votes of jurors or citizens of a democracy are statistically independent is certainly wrong, at least as trials and elections are currently conducted. For the votes of jurors to be independent, it would have to be the case that no juror influenced the vote of any other, but we expect that juries will deliberate before voting. If we took the theorem seriously, we would need to keep the jurors separate so that they could not influence one another. Similarly, we expect that citizens of a democracy will discuss and debate before an election, whereas the theorem would indicate that this should be forbidden.
So, Condorcet's theorem, while obviously true as a result in probability theory, simply doesn't apply to juries or electorates made up of fallible human beings. If we want to get along without experts in trials and elections, we'll have to look elsewhere.
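Both directions of the theorem are easy to verify numerically. Under its (unrealistic) independence assumption, the chance that a strict majority of n jurors votes correctly is a binomial tail sum. The sketch below (Python; the juror competence values of 60% and 40% are my choices for illustration) shows that probability climbing toward 1 as the jury grows when each juror is right 60% of the time, and collapsing toward 0 when each is right only 40% of the time:

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a strict majority of n independent jurors,
    each correct with probability p, votes correctly (n odd, no ties)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 13, 101, 501):  # odd panel sizes, so there are no ties
    print(n, round(majority_correct(n, 0.6), 4),
             round(majority_correct(n, 0.4), 4))
```

At 501 jurors who are each right 60% of the time, a correct majority verdict is a near-certainty; at 40%, an incorrect verdict is just as nearly certain―which is exactly the double-edged result discussed above.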
This rational populism has not been sitting well with the expert class, which finds democracy an inconvenient obstacle to technocratic rule. Thus the recent emphasis on “cognitive biases,” which treat the typical citizen as not only a non-expert himself, but also incapable of identifying the real experts or aggregating his opinions with other non-experts to achieve a reasoned result. There is confirmation bias, negativity bias, status quo bias, “tribalism,” and so on. Particularly en vogue are the “implicit biases” of identity and diagnoses of “racial resentment” and the “authoritarian personality”, which were used to explain the results of the 2016 election. Constructs like these provide the backdrop for expert dismissals of disfavored political views. Alongside the technically framed analyses of how people misprocess correct information comes an assumption that they are also cheaply programmable, easily gulled by puppeteer propagandists and “Fake News.” Only the experts, then, can tell us who is truly an expert. Only someone untrustworthy would not trust them.
Many straw men are assaulted in this passage. At the very least, it's a caricature of those of us who take cognitive biases seriously. I, for one, accept the existence of biases, but reject the notion that anyone should just blindly trust experts. For one thing, cognitive biases are human biases, which means that experts suffer from them as much as non-experts. Secondly, no one is expert in everything, so we're all non-experts in most fields. Finally, there are no experts in identifying experts, but suppose that there were. How is the non-expert supposed to identify such an expert? The problem of identifying expertise is a problem for all of us, expert and non-expert alike, and "trust the experts" is no solution. Who are the experts we're supposed to trust? That's the problem!
…[T]he expert critique of common sense and the crowd’s wisdom is wrong. Ironically, the expert critique more effectively serves as a critique of experts. More so than the population, they appear susceptible to motivated reasoning and belief cascades. …[C]ommon sense remains an essential part of the expert’s arsenal. The apparent expert who abandons it may end up worse off than the non-expert.
While experts are human, and thus as susceptible to human biases as anyone, I see no evidence here that they are more so.
Experts are also susceptible to processes that arbitrarily reinforce an unexamined consensus, both ex ante and ex post. … The consensus view is a prerequisite for qualifying as an expert, not a considered consequence of one’s genuine expertise. …
There's a third possibility: that one must accept the consensus view to qualify as an expert, and that it is also an expert conclusion. For instance, I doubt seriously that a person who believes that the earth is flat would be able to get a doctorate in astronomy from an accredited institution.
Once qualified, experts remain susceptible to the related phenomenon of belief cascades as new questions emerge, because they place their standing at risk if they depart from the views of fellow experts. (After all, what could threaten the enterprise more than the revelation that expertise does not dictate a particular conclusion?) When one expert, or a small number of experts, expresses a certain view—especially on a politically charged topic—that view quickly propagates through social networks both formal and informal, public and private, and becomes widely held on the assumption that experts can be trusted. What appears a broad-based consensus among people who have thought about the issue is really only the overamplified view of the few people who have thought about it at all.
This appears to be a description of the phenomenon of group think, but the non-expert population is just as likely to be affected by it. Scarcely a day passes without some faddish delusion spreading like wildfire via "social" media.
Moreover, experts do depart from the views of their fellows, as we have seen the last couple of years. They do run some risk of having their careers damaged, and some experts are probably silent for fear of such risks. This has happened throughout scientific history, and it's a shame that it still happens. Nonetheless, we have seen experts brave those risks, and some have been vindicated in short order.
Yet another problem for our experts is that the source, nature, and relevance of their expertise is often ill-defined. … Even when expertise is genuine, disciplines and professions, along with their practitioners, seem determined to overextend its breadth for purposes of laundering their personal, non-expert opinions under their expert brand. …
I've left out here some examples of experts opining outside of their fields, not because I disagree with them―I don't―but because you can read them for yourself. Also, there's no doubt that some experts tend to over-extend themselves, but this is one of the easiest problems for the non-expert to spot: all that you need to know is what the expert's domain of expertise is, and to what domain the expert's opinion belongs. If the two domains are not the same, then the expert's opinion is no better than any other non-expert's.
…[T]his…illustrates perhaps the modern expert’s biggest problem: To become visible enough that non-experts can find him, he must proffer his views through Twitter, on talk show interviews, and in essays for magazines…. Such platforms select for certain experts, and certain views. Laypeople encountering an expert “in the wild” have no reason to think that he is representative of his discipline and every reason to think the opposite. …
The public will never be able to assess the validity of expertise on a case-by-case basis. Trying yields widely varied conclusions and thus eliminates any common starting point from which to conduct public debates—roughly the situation today. Assessing apparent expertise requires knowledge of a field’s inner workings, something almost no one has the time or inclination to learn. From the outside, it is difficult to infer what dogmas might contaminate a discipline’s standard training or what pressures might distort processes of hiring, promotion, and socialization.
This sounds discouraging, but see the next sentence:
However, some general heuristics and defaults might provide a basis for at least some agreement. First, a simple conflict-of-interest standard would make sense. Look at what people gain from giving their views, and from whom they gain it. Someone who stands to gain more personally from one view than from another should not be entitled to deference when offering the former. That does not mean the view is wrong, only that it must be defended on its merits rather than based on the identity of the speaker. …
Now we're getting somewhere. Enough with the expert-bashing; it may be fun, but it's not advancing us towards a solution of the problem.
I agree that we should watch out for conflicts of interest, but would add that every view must be defended on its merits, and not on the identity of the person who advances it. Even experts―especially experts―should defend the views they advance in this way, and not simply ask that we trust them.
Second, political stances should be inherently suspect. Experts can offer knowledge useful in evaluating the values-laden tradeoffs of politics and public policy, but that expertise does not make their judgment superior to that of any other citizen, and certainly not the democratic determination of a large group of citizens. …
Third, we should be far more skeptical of claims of knowledge-that expertise than of knowledge-how. In the latter case, people’s claims of expertise can be substantiated by their ability to deliver objective results. The surgeon with a track record of successful surgeries is easily distinguishable from the charlatan with none. Knowledge-that experts, by contrast, are laying claim to the truth. Sometimes they have it, and are guiding us as reliably as a pilot. Other times they are simply taking us for a ride.
It's true that one way to check the know-how of supposed experts is by examining their work, but this is not always easy since it may require knowledge to tell the difference between good and bad work. Also, it is often possible to check the propositional knowledge claims of alleged experts, especially when they make predictions. If supposed experts make false predictions, we should mistrust their future predictions. In addition, false predictions call into question the theories that they're based on.
Despite Traldi's pessimism, this article shows that it is possible to make progress in identifying both expertise and when to rely on it. This will always be a difficult task, and mistakes are inevitable, but we are all forced by the complexities of the world to rely on the knowledge of others. Even Traldi admits this when he's not bashing experts. So, there's no alternative to doing the best that we can and, hopefully, we will muddle through.
The first article is by Gary Smith, the author of Standard Deviations, a book I've reviewed and recommend―see the Book Shelf page in the Main Menu to your left.
Notes:
- Plato. The Republic, 488A-489A; Jowett's translation.
- Eric W. Weisstein, "Condorcet's Jury Theorem", Wolfram's MathWorld, accessed: 6/29/2022.
- Douglas O. Linder, "Criminal Procedure in Ancient Greece and the Trial of Socrates", Famous Trials, accessed: 6/29/2022.
Disclaimer: I don't necessarily agree with everything in these articles, but I think they're worth reading as a whole. In abridging them, I may have changed the paragraphing and rearranged the order of the excerpts in order to emphasize points.
June 26th, 2022 (Permalink)
Inflation and "Record-High" Gas Prices
Inflation in America increased to 8.6% last month, a level that we haven't experienced for forty years. An important aspect of that inflation is the price of gasoline, which has risen at a rate much higher than the overall rate of inflation, increasing by nearly 50% over the last year1. You don't need the news media to tell you that the cost of gasoline has gone up a lot this year: just fill up your gas tank and you'll be painfully aware of it. Nonetheless, the news media keep referring to "record" or "record-high" gasoline prices2. In what sense is the price a "record"?
The claim that gas prices are setting new records is based on an average of prices nationwide compiled by the American Automobile Association (AAA). According to the AAA, the current national average price for a gallon of regular unleaded gasoline is $4.903. The "record high" was actually set on the 14th of this month at $5.016, so it's not now at a record-setting price, but only about a dime away.
What the news media don't usually mention is that AAA's average measures nominal gas prices, that is, simply the price on the pump unadjusted for inflation. Inflation is money losing value over time―which it's been doing unusually fast for the last several months―so that a dollar today is not worth what it was yesterday. A dollar this year will buy less gas than it would have last year, let alone ten, fifty, or one-hundred years ago.
Given inflation, comparing prices from many years apart is comparing apples to oranges or, for a less hackneyed and fruity analogy, it's like comparing prices in American dollars with prices in Canadian dollars without taking the exchange rate into consideration. "The past is a foreign country", as L. P. Hartley wrote4. So, to compare today's prices to those many years ago, you should adjust for inflation5.
Adjusting for inflation, the previous record was set in 2008, when prices averaged $4.116, which is $5.49 in today's dollars7. Inflation is running so high currently that it's entirely possible that we'll see inflation-adjusted prices exceed $5.49 in the near future, which would be a real as opposed to a merely nominal record price.
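The adjustment itself is just a ratio of price-index levels. Here's a minimal sketch (Python; the CPI-U index values are approximate figures I've supplied for illustration, so check the Bureau of Labor Statistics for exact ones):

```python
# Approximate CPI-U index levels (illustrative; see the BLS for exact figures).
CPI_2008_PEAK = 218.8   # around the June 2008 gas-price peak
CPI_NOW = 292.3         # recent month

def to_current_dollars(price: float, cpi_then: float, cpi_now: float) -> float:
    """Convert a past nominal price into current dollars."""
    return price * cpi_now / cpi_then

real_2008_peak = to_current_dollars(4.11, CPI_2008_PEAK, CPI_NOW)
print(f"${real_2008_peak:.2f}")  # ≈ $5.49 in today's dollars
```

So the nominal "record" of $5.016 is still well short of the 2008 peak once both prices are expressed in the same dollars.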
Notes:
- Aimee Picchi, "Inflation surged 8.6% over the last year — fastest since 1981", CBS News, 6/10/2022.
- For just one example: Chris Isidore, "Average US gas price hits $5 for first time", CNN, 6/13/2022.
- Gas Prices
June 9th, 2022 (Permalink)
How to Get a Correction or Retraction in Ten Easy Steps
If, in the course of amateur fact-checking, you discover a factual error in a publication, what should you do then? You've received the benefit of not being misled by the error, but what about other readers or viewers who will not make the effort to check the mistaken claim? My suggestion is that you request a correction or retraction from the source that made the mistake, so that unwary people will not be misinformed.
What's the difference between a correction and a retraction? In a correction, the article itself may be edited to remove or correct the error, usually with a note at the top or bottom alerting the reader that a correction has been made. Sometimes, the article itself will not be corrected, but a correction appended at the beginning or end. Other times, the notice of a correction will appear on a separate corrections page, though this is not good internet practice.
A retraction is more drastic than a correction, and you are unlikely to get one for a single mistake unless it undermines the thesis of the article. In a retraction, the entire article will be removed, with perhaps a note replacing it that explains the retraction. Articles are most likely to be retracted for extensive plagiarism or fabricated information rather than easily correctable errors.
- Don't request a correction over a difference of opinion: Only make such a request when a publication has committed a checkable factual error. If you're not sure whether something is a matter of opinion or of fact, then review the previous entries in this series on fact-checking1. If you're not sure whether something is an error, or whether it's a factual one, then don't ask for a correction. Be sure that you're on solid ground before contacting a publication. If it is a difference of opinion, then there are many ways that you can challenge the publication's claims: send a letter to the editor, add a comment to the article, or write and publish your own response. Don't waste your and the publication's time by demanding the correction of an opinion.
- Request a correction first: Have the courtesy to contact a publication and request a correction or retraction before you publicly criticize it for a mistake. Give it a chance to do the right thing. This warning includes adding a public comment to an article if the publication allows such a thing, so don't use such comments as a way to try to get a mistake corrected or an article retracted.
- Be polite: If you want to get a correction or retraction, don't insult the readers of your request or the publication for which they work. Don't call them ignoramuses, fools, or worse―they may actually be ignorant fools, but don't say so. Assume that they want to get it right. Don't use sarcasm to suggest that they are idiots, or that it is unlikely that they will honor your request―there's no better way to get them to ignore you. If you violate this rule, your request will most likely end up in the trash.
- Don't curse: This, of course, is part of politeness, but it may need special emphasis nowadays. If you curse at the person reading your request or the publication the person works for, your request will justifiably go in the trash.
- Be specific: Describe the error you want corrected exactly and precisely. If you just have some vague feeling that an article is mistaken, then you're not going to get a correction anyway, so don't bother asking for one.
- Be able to prove your case: Don't request a correction or a retraction unless and until you can prove the publication committed a mistake beyond a reasonable doubt. This is an unfair standard, but it is likely to be the one that you'll be held to. If there's any way for a publication to wriggle out of the need to correct or retract something, it will usually try to do so. Publications do not like to issue corrections or, especially, retractions. So, you need to have such a solid case that there's no wiggle room. If you can't prove your case, you can still request a correction or retraction, but don't expect one.
- Don't hold your breath: As I mentioned above, publications don't like to make public corrections, let alone retractions. This is true―perhaps especially true―of even the most prestigious and reputable institutions. So, don't be surprised if your request is silently rejected.
- If your request is granted, thank the publication and its representative: This, of course, is also a matter of politeness. However, we need to encourage publications to admit error and correct the public record, so thank them when they do so! Anybody can make a mistake, but they did the right thing despite the likelihood of public embarrassment, so they deserve praise and reward for doing so.
- If your request is denied or ignored, don't demand that your subscription be cancelled: In the lapidary words of William F. Buckley, Jr.: "Cancel your own goddam subscription!"2
- Go public: If you did all of the above, and the publication still does not correct or retract its mistake, publicly embarrass it! There are, of course, many ways that you can lay your case before the public. About the only thing a publication likes less than issuing corrections or retractions is being publicly shamed for getting something wrong. If the publication allows comments to its articles, you can add your correction of it to the comments. You can contact a rival publication, especially one with a different political slant, which may be eager to point out the mistakes of its competitor. Just as we need to reward those who do the right thing, we need to punish those who do not. Let's make it easier and less painful to admit error than not to.
Notes:
- In chronological order:
- Why You Need to be Able to Check Facts, 9/8/2020
- Fact-checking Vs. Nit-picking, 10/20/2020
- Four Types of Misleading Quote, 11/27/2020
- News Sources Vs. Familiar Quotations, 12/4/2020
- Rules of Thumb, 1/2/2021
- A Case Study, 2/4/2021
- Reliable Sources, 3/2/2021
- Fact-Checking: What is a fact?, 4/29/2021
- Sources for Fact-Checking: Primary, Secondary & Tertiary, 5/6/2021
- Fact-Checkers ≠ Lie-Detectors, 8/27/2021
- Fact Checks, Vast Majorities, and Outright Falsehoods, 10/16/2021
- "Everyone is entitled to his own opinion, but not his own facts.", 11/23/2021
- Fact Checking the Future and the Future of Fact Checking, 12/5/2021
- William F. Buckley, Jr., Cancel Your Own Goddam Subscription: Notes and Asides from National Review (2007)

June 7th, 2022 (Permalink)
Crack the Combination II
The combination of a lock is three digits long. Here are some incorrect combinations:
- 054: Two of the digits are correct, one is in the right position, but the other is in the wrong place.
- 754: Two digits are correct but both are in the wrong positions.
- 742: Two of the digits are correct, one is in the right position, but the other is in the wrong place.
Can you determine the correct combination from the above clues?
047
Explanation: 4 must be part of the combination, since it is the only digit that clues 1 & 3 share: if 4 were excluded, those two clues would require 0, 5, 7, and 2 all to be correct, which is four digits for only three positions. By clue 2, every correct digit in 754 is in the wrong position, so 4 is not in the final position. Likewise, 5 cannot be in the middle position, since that is where clue 2 has it. Clue 1 must have exactly one digit in the right position; it cannot be 4 (not last) or 5 (not middle), so it is 0, which is therefore in the first position. Hence 5 is not in the combination, since clue 1's two correct digits are 0 and 4. By clue 2, the second correct digit of 754 must then be 7, and it is misplaced, so it is not in the first position. Since 0 is first and 4 cannot be last, 4 is in the middle, leaving 7 last. Therefore, the full combination is 047.
WARNING: May cause brain-teasing.
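If you'd rather not trust the deduction, the puzzle is small enough to check by brute force. The following Python sketch, which is not part of the original puzzle, enumerates all 1,000 possible combinations and keeps those consistent with every clue:

```python
from itertools import product

# Each clue: (guess, number of correct digits, number in the right position).
clues = [("054", 2, 1), ("754", 2, 0), ("742", 2, 1)]

def fits(combo, guess, n_correct, n_placed):
    # Count guess digits that appear anywhere in the combination, and guess
    # digits that sit in the correct position. (All three guesses here have
    # distinct digits, so simple membership counting is accurate.)
    correct = sum(d in combo for d in guess)
    placed = sum(g == c for g, c in zip(guess, combo))
    return correct == n_correct and placed == n_placed

solutions = ["".join(c) for c in product("0123456789", repeat=3)
             if all(fits("".join(c), g, n, p) for g, n, p in clues)]
print(solutions)  # -> ['047']
```

The search confirms that 047 is the unique combination satisfying all three clues.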
June 4th, 2022 (Permalink)
Cite or Site?
A report in The New York Times from fifty years ago about a water main break contained the following sentence: "A portable toilet unit on the construction cite also fell into the hole in the street."1 The toilet, of course, was on a construction site.
"Cite" is a verb, most commonly occurring in scholarly writing, which means to point to a source of supposed evidence for a claim or a quote2. In this weblog, I often cite sources for the information and quotes that I write about.
In contrast, "site" is usually a noun meaning "place", as in "web site" or "construction site"3. Given that the two words are pronounced identically, they are easy to confuse. Oddly, only two of the books I usually check, and sometimes cite, warn against such confusion4, though it seems to be a common error. In my experience, the most common mistake is to misspell "cite" as "site", though the confusion obviously can go in the opposite direction, witness the Times example.
Since "cite" is a verb and "site" is usually a noun, it's possible that a grammar checking program will catch confusion of one for the other. However, "site" can also be used as a verb meaning "to place", which means that a grammar checker may not catch it. My old copy of Microsoft's Word program flagged "cite" in the above example, and one online program automatically changed it to "site", though another did not. So, I would suggest that you test whatever program you usually use to see whether it would catch this mistake. If not, you can either upgrade your program or add this distinction to your mental software.
Notes:
- Martin Gansberg, "Subway Flooded by a Broken Main", The New York Times, 9/28/1970. I found this example in the following article: "'Cite' vs. 'Site' vs. 'Sight'", Merriam-Webster, accessed: 6/4/2022.
- "Cite"
- Mignon Fogarty, Grammar Girl's 101 Misused Words You'll Never Confuse Again (2011), p. 29
- Robert J. Gula, Precision: A Reference Handbook for Writers (1980), p. 209