The puzzle: Why do scientists typically respond to legitimate scientific criticism in an angry, defensive, closed, non-scientific way? The answer: We’re trained to do this during the process of responding to peer review.

[image of Cantor’s corner]

Here’s the “puzzle,” as we say in social science. Scientific research is all about discovery of the unexpected: to do research, you need to be open to new possibilities, to design experiments to force anomalies, and to learn from them. The sweet spot for any researcher is at Cantor’s corner. (See here for further explanation of the Cantor connection.) Buuuut . . . researchers are also notorious for being stubborn. In particular, here’s a pattern we see a lot:

– Research team publishes surprising result A based on some “p less than .05” empirical results.

– This publication gets positive attention and the researchers and others in their subfield follow up with open-ended “conceptual replications”: related studies that also attain the “p less than .05” threshold.

– Given the surprising nature of result A, it’s unsurprising that other researchers…

The retraction paradox: Once you retract, you implicitly have to defend all the many things you haven’t yet retracted

Mark Palko points to this news article by Beth Skwarecki on Goop, “the Gwyneth Paltrow pseudoscience empire.” Here’s Skwarecki:

When Goop publishes something weird or, worse, harmful, I often find myself wondering what are they thinking? Recently, on Jimmy Kimmel, Gwyneth laughed at some of the newsletter’s weirder recommendations and said “I don’t know what the fuck we talk about.” . . . I [Skwarecki] . . . end up speaking with editorial director Nandita Khanna. “You publish a lot of things that are outside of the mainstream. What are your criteria for determining that something is safe and ethical to recommend?”

Khanna starts by pointing out that they include a disclaimer at the bottom of health articles. This is true. It reads:

The views expressed in this article intend to highlight alternative studies and induce conversation. They are the…

Alzheimer’s Mouse research on the Orient Express

Paul Alper sends along an article from Joy Victory at Health News Review, shooting down a bunch of newspaper headlines (“Extra virgin olive oil staves off Alzheimer’s, preserves memory, new study shows” from USA Today, the only-marginally-better “Can extra-virgin olive oil preserve memory and prevent Alzheimer’s?” from the Atlanta Journal-Constitution, and the better-but-still-misleading “Temple finds olive oil is good for the brain — in mice” from the Philadelphia Inquirer), all of which were based on a university’s misleading press release. That’s a story we’ve heard before. The clickbait also made its way into the traditionally respected outlets Newsweek and Voice of America. And NPR, kinda. Here’s Joy Victory:

It’s pretty great clickbait—a common, devastating disease cured by something many of us already have in our pantries! . . . To deconstruct how this went off the rails, let’s…

It’s . . . spam-tastic!

We’ll celebrate Christmas today with a scam that almost fooled me. OK, not quite: I was about two steps from getting caught. Here’s the email:

Dear Dr. Gelman,

I hope you do not mind me emailing you directly, I thought it would be the easiest way to make first contact. If you have time for a short discussion I was hoping to speak with you about your studies and our interest to feature your work in a special STEM issue of our publication, Scientia. I will run you through this in more detail when we talk. But to give you a very quick insight into Scientia and the style in which we publish, I have attached a few example articles from research groups we have recently worked with. I have attached these as HTML files to reduce the file size,…

The piranha problem in social psychology / behavioral economics: The “take a pill” model of science eats itself

[cat picture]

A fundamental tenet of social psychology and behavioral economics, at least as presented in the news media and as taught and practiced in many business schools, is that small “nudges,” often the sorts of things that we might not think would affect us at all, can have big effects on behavior. Thus the claims that elections are decided by college football games and shark attacks; or that the subliminal flash of a smiley face can cause huge changes in attitudes toward immigration; or that single women were 20% more likely to vote for Barack Obama, or three times more likely to wear red clothing, during certain times of the month; or that standing in a certain position for two minutes can increase your power; or that being subliminally primed with certain words can make you walk faster or…
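The piranha argument, in brief: these large and consistent effects can’t all coexist, because if they did, together they’d produce far more variation in behavior than we actually observe. Here’s a minimal numerical sketch of that point (my illustration, with made-up numbers, not anything from the original post):

```python
# A minimal sketch of the piranha argument: if many independent "nudges"
# each shifted behavior by a large amount, their combined variance would
# swamp the variation we actually observe. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000        # hypothetical people
k = 20             # hypothetical nudges, each with a "large" effect
effect_sd = 0.5    # each nudge shifts the outcome by +/-0.5 sd units

# each person is randomly exposed (or not) to each nudge
exposures = rng.integers(0, 2, size=(n, k))
effects = np.where(exposures == 1, effect_sd, -effect_sd)
outcome = effects.sum(axis=1) + rng.normal(0, 1, size=n)  # plus residual noise

# sd is sqrt(k * 0.25 + 1), about 2.45: the nudges alone would account
# for far more variation than the baseline noise
print(outcome.std())
```

With twenty such effects in play, the nudges would dominate the total variation in the outcome; since we don’t see anything like that in real behavior, most of the claimed effects must be far smaller than advertised, or zero.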

Yes, you can do statistical inference from nonrandom samples. Which is a good thing, considering that nonrandom samples are pretty much all we’ve got.

Luiz Caseiro writes:

1. P-values and confidence intervals are used to draw inferences about a population from a sample. Is that right?

2. As far as I have researched, standard statistical software usually computes confidence intervals (CIs) and p-values assuming that we have a simple random sample. Is that right?

3. If we have another kind of representative sample, different from a simple random sample (i.e., a complex sample), we should take our sample design into account before calculating CIs and p-values. Is that right?

4. If we do not have a representative sample, as is often the case in political science (especially when the sample is a convenience sample, made up of some countries for which data are available), wouldn’t it be irrelevant and even misleading to report CIs and p-values?

This question comes up from time to time…
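To make question 3 concrete, here’s a sketch (mine, not from the post or from Caseiro) contrasting the naive simple-random-sample standard error with a design-based one for the same data. The weights are invented, and the variance formula is the usual with-replacement linearization for a weighted (Hajek) mean:

```python
# Same data, two standard errors: naive SRS vs. design-based with weights.
# Data and weights here are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 500
y = rng.normal(50, 10, size=n)        # hypothetical survey responses
w = rng.uniform(0.5, 3.0, size=n)     # hypothetical sampling weights

# naive analysis: pretend it's a simple random sample
srs_mean = y.mean()
srs_se = y.std(ddof=1) / np.sqrt(n)

# design-based analysis: weighted (Hajek) mean with linearized variance
w_mean = np.sum(w * y) / np.sum(w)
z = w * (y - w_mean) / np.sum(w)      # linearized residuals (sum to zero)
design_se = np.sqrt(n / (n - 1) * np.sum(z**2))

print(f"SRS assumption: {srs_mean:.2f} +/- {1.96 * srs_se:.2f}")
print(f"Design-based:   {w_mean:.2f} +/- {1.96 * design_se:.2f}")
```

Unequal weights typically widen the interval relative to the SRS formula, so ignoring the design understates the uncertainty.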

The “80% power” lie

OK, so this is nothing new. Greg Francis said it, Uri Simonsohn said it, Ulrich Schimmack said it, lots of people have said it. But it’s worth saying again. To get NIH funding, you need to demonstrate (that is, convincingly claim) that your study has 80% power. I hate the term “power” as it’s all tied into the idea of the goal of a study being statistical significance. But let’s set that aside for now and just do the math: with a normal distribution, if you want an 80% probability of your 95% interval excluding zero, then the true effect size has to be at least 2.8 standard errors from zero. All right, then. Suppose we really were running studies with 80% power. In that case, the expected z-score is 2.8, and 95% of the time…
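Where does 2.8 come from? It’s the 97.5th normal percentile plus the 80th: 1.96 + 0.84 ≈ 2.8. Here’s a quick check (my addition, not from the post):

```python
# Check: if the true effect is 2.8 standard errors from zero, a two-sided
# test at the 5% level (i.e., the 95% interval excluding zero) has ~80% power.
from scipy.stats import norm

true_effect = 2.8                    # true effect, in standard-error units
crit = norm.ppf(0.975)               # 1.96, the 95%-interval cutoff

# power = P(z > 1.96) + P(z < -1.96), where z ~ normal(2.8, 1)
power = norm.sf(crit - true_effect) + norm.cdf(-crit - true_effect)
print(power)                         # ~0.80

# and inverting: the effect size needed for exactly 80% power
print(crit + norm.ppf(0.80))         # ~2.80
```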

Popular expert explains why communists can’t win chess championships!

[cat picture] We haven’t run any Ray Keene material for a while but this is just too good to pass up:

Yup, those communists have real trouble pushing to the top when it comes to chess, huh?

P.S. to Chrissy: If you happen to be reading this, my advice to you is to not take stuff from Keene. For one thing, he doesn’t seem to know what he’s talking about; for another, it’s risky to use anything written by a plagiarist, because you can’t be sure who you’re copying from. I recommend you stick to repurposing material you find from quality chess writers such as Tim Krabbé.

The Night Riders

Retraction Watch linked to this paper, “Publication bias and the canonization of false facts,” by Silas Nissen, Tali Magidson, Kevin Gross, and Carl Bergstrom, which is in the Physics and Society section of arXiv. That’s kind of odd, since it has nothing whatsoever to do with physics. Nissen et al. write:

In the process of scientific inquiry, certain claims accumulate enough support to be established as facts. Unfortunately, not every claim accorded the status of fact turns out to be true. In this paper, we model the dynamic process by which claims are canonized as fact through repeated experimental confirmation. . . . In our model, publication bias—in which positive results are published preferentially over negative ones—influences the distribution of published results.

I don’t really have any comments on the paper itself—I’m never sure when these mathematical models…
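For readers who want to poke at the idea, here’s a toy simulation in the spirit of the quoted abstract (my own stripped-down version with invented parameters, not the authors’ actual model): the community updates its belief in a claim from published results only, and negative results are published with some probability less than one.

```python
# Toy version of the canonization-with-publication-bias idea: experiments
# on a claim produce positive results at rate `power` (if true) or `alpha`
# (if false); negative results are published only with probability `beta`;
# the community does a naive Bayes update on each *published* result.
# All parameter values are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)

def final_belief(claim_is_true, alpha=0.05, power=0.8, beta=0.2,
                 prior=0.5, upper=0.99, lower=0.01, max_experiments=1000):
    p_pos = power if claim_is_true else alpha
    belief = prior
    for _ in range(max_experiments):
        positive = rng.random() < p_pos
        if not positive and rng.random() > beta:
            continue  # negative result stays in the file drawer
        if positive:
            like_true, like_false = power, alpha
        else:
            like_true, like_false = 1 - power, 1 - alpha
        belief = (belief * like_true /
                  (belief * like_true + (1 - belief) * like_false))
        if belief > upper or belief < lower:
            break  # canonized as fact, or discarded as false
    return belief

# fraction of *false* claims that end up canonized as "fact"
beliefs = [final_belief(claim_is_true=False) for _ in range(2000)]
print(np.mean([b > 0.99 for b in beliefs]))
```

Lowering `beta` (publishing fewer negative results) pushes that fraction up, which is the basic worry of the paper.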

The time reversal heuristic (priming and voting edition)

Ed Yong writes:

Over the past decade, social psychologists have dazzled us with studies showing that huge social problems can seemingly be rectified through simple tricks. A small grammatical tweak in a survey delivered to people the day before an election greatly increases voter turnout. A 15-minute writing exercise narrows the achievement gap between black and white students—and the benefits last for years. “Each statement may sound outlandish—more science fiction than science,” wrote Gregory Walton from Stanford University in 2014. But they reflect the science of what he calls “wise interventions” . . . They seem to work, if the stream of papers in high-profile scientific journals is to be believed. But as with many branches of psychology, wise interventions are taking a battering. A new wave of studies that attempted to replicate the promising experiments have found discouraging results.…