Statistical behavior at the end of the world: the effect of the publication crisis on U.S. research productivity

Under the heading, “I’m suspicious,” Kevin Lewis points us to this article with abstract: We exploit the timing of the Cuban Missile Crisis and the geographical variation in mortality risks individuals faced across states to analyse reproduction decisions during the crisis. The results of a difference-in-differences approach show evidence that fertility decreased in states that are farther from Cuba and increased in states with more military installations. Our findings suggest that individuals are more likely to engage in reproductive activities when facing high mortality risks, but reduce fertility when facing a high probability of enduring the aftermath of a catastrophe. It’s the usual story: forking paths (nothing in the main effect, followed by a selection among the many many possible two-way and three-way interactions that could be studied), followed by convoluted storytelling (“individuals indulge in reproductive activities when facing high…
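The difference-in-differences design described in the abstract can be sketched on toy numbers. Everything below is made up for illustration, not taken from the paper; "treated" stands in for states near Cuba, "control" for states farther away.

```python
# 2x2 difference-in-differences on hypothetical fertility rates.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD estimator: change in the treated group minus change in the
    control group, which nets out the shared time trend."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical birth rates per 1,000 women, before/after the crisis.
print(did_estimate(treat_pre=118.0, treat_post=121.0,
                   ctrl_pre=117.0, ctrl_post=116.0))  # prints 4.0
```

The forking-paths worry is that this clean 2x2 multiplies into many possible contrasts (distance x installations x region x …), and selecting among them after the fact makes some look "significant" by chance.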

Alzheimer’s Mouse research on the Orient Express

Paul Alper sends along an article from Joy Victory at Health News Review, shooting down a bunch of newspaper headlines (“Extra virgin olive oil staves off Alzheimer’s, preserves memory, new study shows” from USA Today, the only marginally better “Can extra-virgin olive oil preserve memory and prevent Alzheimer’s?” from the Atlanta Journal-Constitution, and the better but still misleading “Temple finds olive oil is good for the brain — in mice” from the Philadelphia Inquirer) which were based on a university’s misleading press release. That’s a story we’ve heard before. The clickbait also made its way into traditionally respected outlets Newsweek and Voice of America. And NPR, kinda. Here’s Joy Victory: It’s pretty great clickbait—a common, devastating disease cured by something many of us already have in our pantries! . . . To deconstruct how this went off the rails, let’s…

“However noble the goal, research findings should be reported accurately. Distortion of results often occurs not in the data presented but . . . in the abstract, discussion, secondary literature and press releases. Such distortion can lead to unsupported beliefs about what works for obesity treatment and prevention. Such unsupported beliefs may in turn adversely affect future research efforts and the decisions of lawmakers, clinicians and public health leaders.”

David Allison points us to this article by Bryan McComb, Alexis Frazier-Wood, John Dawson, and himself, “Drawing conclusions from within-group comparisons and selected subsets of data leads to unsubstantiated conclusions.” It’s a letter to the editor for the Australian and New Zealand Journal of Public Health, and it begins: [In the paper, “School-based systems change for obesity prevention in adolescents: Outcomes of the Australian Capital Territory ‘It’s Your Move!’”] Malakellis et al. conducted an ambitious quasi-experimental evaluation of “multiple initiatives at [the] individual, community, and school policy level to support healthier nutrition and physical activity” among children [1]. In the Abstract they concluded, “There was some evidence of effectiveness of the systems approach to preventing obesity among adolescents” and cited implications for public health as follows: “These findings demonstrate that the use of systems methods can be effective on a small…

How is science like the military? They are politically extreme yet vital to the nation

I was thinking recently about two subcultures in the United States, public or quasi-public institutions that are central to our country’s power, and which politically and socially are distant both from each other and from much of the mainstream of American society. The two institutions I’m thinking of are science and the military, both of which America excels at. We spend the most on science and do the most science in the world, we’ve developed transistors and flying cars and Stan and all sorts of other technologies that derive from the advanced science that we teach and research in the world’s best universities. As for the military, we spend more than the next umpteen countries combined, and our army, navy, and air force are dominant everywhere but on the streets of Mogadishu. Neither institution is perfect—you don’t need me to…

A reporter sent me a Jama paper and asked me what I thought . . .

Posted by Andrew on 10 December 2017, 9:32 am. My reply: Thanks for sending. I can’t be sure about everything they’re doing but the paper looks reasonable to me. I expect there are various ways that the analysis could be improved, but on a quick look I don’t see anything obviously wrong with it, and the authors seem to know what they are doing. The findings seem important, and the results are mapped clearly enough that once the results are out there, others can comment if they see problems. The only thing is that I think it would be better if the authors just posted all their graphs online, including but not limited to the graphs in that published paper. I really don’t like the…

“How to Assess Internet Cures Without Falling for Dangerous Pseudoscience”

Posted by Andrew on 8 December 2017, 9:55 am. Science writer Julie Rehmeyer discusses her own story: Five years ago, against practically anyone’s better judgment, I knowingly abandoned any semblance of medical evidence to follow the bizarre-sounding health advice of strangers on the internet. The treatment was extreme, expensive, and potentially dangerous. If that sounds like a terrible idea to you, imagine how it must have felt to a science journalist like me, trained to value evidence above all. A decade ago, I never would have believed I’d do such a lunatic thing. But I was desperately, desperately ill. . . . So I took a deep dive into the murky world of untested treatments. The incredible thing is, I found something that brought astonishing improvements, even if not quite a…

Orphan drugs and forking paths: I’d prefer a multilevel model but to be honest I’ve never fit such a model for this sort of problem

Amos Elberg writes: I’m writing to let you know about a drug trial you may find interesting from a statistical perspective. As you may know, the relatively recent “orphan drug” laws allow (basically) companies that can prove an off-patent drug treats an otherwise untreatable illness, to obtain intellectual property protection for otherwise generic or dead drugs. This has led to a new business of trying large numbers of combinations of otherwise-unused drugs against a large number of untreatable illnesses, with a large number of success criteria. Charcot-Marie-Tooth (CMT) is a moderately rare genetic degenerative peripheral nerve disease with no known treatment. CMT causes the Schwann cells, which surround the peripheral nerves, to weaken and eventually die, leading to demyelination of the nerves, a loss of nerve conduction velocity, and an eventual loss of nerve efficacy. PXT3003 is a drug currently…

Using Mister P to get population estimates from respondent driven sampling

From one of our exams: A researcher at Columbia University’s School of Social Work wanted to estimate the prevalence of drug abuse problems among American Indians (Native Americans) living in New York City. From the Census, it was estimated that about 30,000 Indians live in the city, and the researcher had a budget to interview 400. She did not have a list of Indians in the city, and she obtained her sample as follows. She started with a list of 300 members of a local American Indian community organization, and took a random sample of 100 from this list. She interviewed these 100 persons and asked each of these to give her the names of other Indians in the city whom they knew. She asked each respondent to characterize him/herself and also the people on the list on a 1-10…
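The poststratification half of Mister P (MRP) can be sketched in a few lines. The cells and numbers below are hypothetical; in a real analysis the cell-level prevalence estimates would come from a multilevel model fit to the 400 interviews, and the cell counts from the Census.

```python
# Poststratification: weight cell-level estimates by population cell counts.

def poststratify(cells):
    """Population estimate: sum_j N_j * theta_j / sum_j N_j,
    where N_j is the cell's population count and theta_j its
    modeled prevalence."""
    total = sum(n for n, _ in cells)
    return sum(n * theta for n, theta in cells) / total

# Hypothetical cells summing to the ~30,000 Indians in the city;
# theta_j would be each cell's modeled prevalence of drug abuse problems.
cells = [
    (12_000, 0.10),
    (10_000, 0.06),
    (8_000, 0.03),
]
print(round(poststratify(cells), 3))  # prints 0.068
```

The point of the exam problem is that the snowball sample is far from a random sample of the 30,000, so the cell estimates must be modeled (the "multilevel regression" half of MRP) before this weighting step can be trusted.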

It’s hard to know what to say about an observational comparison that doesn’t control for key differences between treatment and control groups, chili pepper edition

Posted by Andrew on 3 August 2017, 9:55 am. Jonathan Falk points to this article and writes: Thoughts? I would have liked to have seen the data matched on age, rather than simply using age in a Cox regression, since I suspect that’s what’s really going on here. The non-chili eaters were much older, and I suspect that the failure to interact age, or at least specify the age effect more finely, has a gigantic impact here, especially since the raw inclusion of age raised the hazard ratio dramatically. Having controlled for Blood, Sugar, and Sex, the residual must be Magik. My reply: Yes, also they need to interact age x sex, and smoking is another…
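The age-confounding worry here can be illustrated with a toy stratified comparison. All numbers below are invented, not from the chili-pepper paper: if the non-eaters are much older, a crude rate comparison shows a large "effect" that vanishes entirely within age strata.

```python
# (deaths, person-years) by age stratum; all numbers hypothetical.
eaters     = {"young": (10, 10_000), "old": (40, 2_000)}
non_eaters = {"young": (2, 2_000),   "old": (200, 10_000)}

def crude_rate(group):
    """Pooled death rate, ignoring age."""
    deaths = sum(d for d, _ in group.values())
    pyears = sum(p for _, p in group.values())
    return deaths / pyears

def stratum_rates(group):
    """Death rate within each age stratum."""
    return {age: d / p for age, (d, p) in group.items()}

# Crude rates differ by ~4x only because the eaters skew younger ...
print(crude_rate(eaters), crude_rate(non_eaters))
# ... while within each age stratum the rates are identical.
print(stratum_rates(eaters) == stratum_rates(non_eaters))  # prints True
```

A Cox regression with only a coarse linear age term can leave much of this confounding in place, which is why Falk's suggestions (matching on age, a finer age specification, age x sex interactions) matter.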

“Explaining recent mortality trends among younger and middle-aged White Americans”

Posted by Andrew on 1 August 2017, 9:30 pm. Kevin Lewis sends along this paper by Ryan Masters, Andrea Tilstra, and Daniel Simon, who write: Recent research has suggested that increases in mortality among middle-aged US Whites are being driven by suicides and poisonings from alcohol and drug use. Increases in these ‘despair’ deaths have been argued to reflect a cohort-based epidemic of pain and distress among middle-aged US Whites. We examine trends in all-cause and cause-specific mortality rates among younger and middle-aged US White men and women between 1980 and 2014, using official US mortality data. . . . Trends in middle-aged US White mortality vary considerably by cause and gender. The relative contribution to overall mortality rates from drug-related deaths has increased dramatically since the early 1990s, but the…