How to interpret “p = .06” in situations where you really really want the treatment to work?

We’ve spent a lot of time during the past few years discussing the difficulty of interpreting “p less than .05” results from noisy studies. Standard practice is to just take the point estimate and confidence interval, but this is in general wrong: conditional on statistical significance, the estimate overstates the effect size (type M error) and can even get its direction wrong (type S error). So what about noisy studies where the p-value is more than .05, that is, where the confidence interval includes zero? Standard practice here is to just declare this a null effect, but of course that’s not right either, as the estimate of 0 is surely a negatively biased estimate of the magnitude of the effect. When the confidence interval includes 0, we can typically say that the data are consistent with no effect. But that doesn’t mean the true…
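To make the type M / type S point concrete, here is a minimal simulation sketch (not from the original post; the true effect, standard error, and significance threshold below are illustrative assumptions): a small true effect estimated noisily, looking only at the estimates that cross the p < .05 threshold.

```python
# Sketch: type M (magnitude) and type S (sign) errors in a noisy study.
# All numbers are illustrative assumptions, not from the post.
import numpy as np

rng = np.random.default_rng(1)
true_effect = 0.1      # small true effect (assumed)
se = 1.0               # standard error of the estimate (a noisy study)
n_sims = 100_000

estimates = rng.normal(true_effect, se, n_sims)
significant = np.abs(estimates) > 1.96 * se   # two-sided p < .05

sig_est = estimates[significant]
print("share significant (power):", significant.mean())
print("type S rate (wrong sign | significant):", (sig_est < 0).mean())
print("type M exaggeration: mean |estimate| / true =", np.abs(sig_est).mean() / true_effect)
```

Under these assumptions the significant estimates exaggerate the true effect by an order of magnitude and have the wrong sign a large fraction of the time, which is the sense in which neither “significant” nor “non-significant” results from a noisy study can be taken at face value.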
Original Post: How to interpret “p = .06” in situations where you really really want the treatment to work?

Riddle me this

Posted by Andrew on 8 May 2017, 9:55 am [cat picture] Paul Alper writes: From Susan Perry’s article based on Paul Thacker’s BMJ article: https://www.minnpost.com/second-opinion/2017/04/investigative-report-uncovers-coca-colas-covert-attempts-influence-journalist In 2015, the University of Colorado had to shut down its nonprofit Global Energy Balance Network after the organization was exposed as being essentially a “scientific” front for its funder, Coca-Cola. The University of Colorado School of Medicine returned the $1 million that the beverage company had provided to start the organization. Marion Nestle, professor of nutrition and public health at New York University, told [Paul] Thacker that although the reporters at the obesity conferences were misled, they shouldn’t have been so gullible. They should have known, she said, that the industry was behind the events, given who was speaking at them. Thacker’s BMJ article is at BMJ 2017;357 doi: https://doi.org/10.1136/bmj.j1638 (Published 05…
Original Post: Riddle me this

The statistical crisis in science: How is it relevant to clinical neuropsychology?

Posted by Andrew on 3 May 2017, 9:52 am [cat picture] Hilde Geurts and I write: There is currently increased attention to the statistical (and replication) crisis in science. Biomedicine and social psychology have been at the heart of this crisis, but similar problems are evident in a wide range of fields. We discuss three examples of replication challenges from the field of social psychology and some proposed solutions, and then consider the applicability of these ideas to clinical neuropsychology. In addition to procedural developments such as preregistration and open data and criticism, we recommend that data be collected and analyzed with more recognition that each new study is a part of a learning process. The goal of improving neuropsychological assessment, care, and cure is too important to not…
Original Post: The statistical crisis in science: How is it relevant to clinical neuropsychology?

Fragility index is too fragile

Posted by Andrew on 3 January 2017, 9:53 am Simon Gates writes: Here is an issue that has had a lot of publicity and Twittering in the clinical trials world recently. Many people are promoting the use of the “fragility index” (paper attached) to help interpretation of “significant” results from clinical trials. The idea is that it gives a measure of how robust the results are – how many patients would have to have had a different outcome to render the result “non-significant”. Lots of well-known people seem to be recommending this at the moment; there’s a website too (http://fragilityindex.com/, which calculates p-values to 15 decimal places!). I’m less enthusiastic. It’s good that problems of “statistical significance” are being more widely appreciated, but the fragility index is still all about “significance”, and we really need…
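For readers who haven’t seen it, the calculation Gates describes is simple to sketch. Here is a minimal version (my own illustration, not code from the paper or the website), using Fisher’s exact test and made-up trial counts:

```python
# Fragility index sketch: starting from a "significant" 2x2 trial result,
# convert non-events to events, one patient at a time, in the arm with fewer
# events, until Fisher's exact test crosses back over alpha. A sketch with
# no bounds checking; counts below are made up.
from scipy.stats import fisher_exact

def fragility_index(e1, n1, e2, n2, alpha=0.05):
    _, p = fisher_exact([[e1, n1 - e1], [e2, n2 - e2]])
    if p >= alpha:
        return 0  # not "significant" to begin with
    flips = 0
    while p < alpha:
        if e1 <= e2:
            e1 += 1   # one more event in the low-event arm
        else:
            e2 += 1
        flips += 1
        _, p = fisher_exact([[e1, n1 - e1], [e2, n2 - e2]])
    return flips

# Hypothetical trial: 10/100 events vs 25/100 events.
print(fragility_index(10, 100, 25, 100))
```

Note that Gates’s complaint survives the arithmetic: whatever number this returns is still defined entirely in terms of the 0.05 threshold.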
Original Post: Fragility index is too fragile

Migration explaining observed changes in mortality rate in different geographic areas?

We know that the much-discussed increase in mortality among middle-aged U.S. whites is mostly happening among women in the south. In response to some of that discussion, Tim Worstall wrote: I [Worstall] have a speculative answer. It is absolutely speculative, but it is also checkable to some extent. Really, I’m channelling my usual critique of Michael Marmot’s work on health inequality in the UK. Death stats don’t measure the lifespans of people from places; they measure the lifespans of people who die in places. So, if there’s migration, and selectivity in who migrates where, then it’s not the inequality between places that might explain differential lifespans but that selection in migration. The same may hold in the American case. We know that Appalachia, the Ozarks and the smaller towns of the Midwest are emptying out. But it’s those who graduate high school,…
Original Post: Migration explaining observed changes in mortality rate in different geographic areas?

You’ll have to figure this one out for yourselves.

So. The other day the following email comes in, subject line “Grabbing headlines using poor statistical methods,” from Clifford Anderson-Bergman: Here’s another to file under “How to get mainstream publication by butchering your statistics.” The paper: Comparison of Hospital Mortality and Readmission Rates for Medicare Patients Treated by Male vs Female Physicians. Journal: JAMA. Featured in: NPR, Fox News, Washington Post, Business Insider (I’m sure there are more; these are just the first few that show up in my Google News feed). Estimated differences: Adjusted mortality, 11.07% vs 11.49%; adjusted risk difference, –0.43%; 95% CI, –0.57% to –0.28%; P < .001; number needed to treat to prevent 1 death, 233. Adjusted readmissions, 15.02% vs 15.57%; adjusted risk difference, –0.55%; 95% CI, –0.71% to –0.39%; P < .001; number needed to treat to prevent 1 readmission, 182. Statistical Folly: “We used a multivariable linear probability model (ie, fitting…
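One small piece of the abstract that is easy to verify yourself: the “number needed to treat” figures are just the reciprocals of the adjusted risk differences. A quick check, using the risk differences exactly as quoted above:

```python
# NNT = 1 / |absolute risk difference|, reproducing the two figures
# quoted in the JAMA abstract above.
for label, risk_diff in [("death", 0.0043), ("readmission", 0.0055)]:
    print(f"{label}: NNT = 1 / {risk_diff} = {1 / risk_diff:.0f}")
# death: NNT = 1 / 0.0043 = 233
# readmission: NNT = 1 / 0.0055 = 182
```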
Original Post: You’ll have to figure this one out for yourselves.

“Calm Down. American Life Expectancy Isn’t Falling.”

Ben Hanowell writes: In the middle of December 2016 there were a lot of headlines about the drop in US life expectancy from 2014 to 2015. Most of these articles painted a grim picture of US population health. Many reporters wrote about a “trend” of decreasing life expectancy in America. The trouble is that the drop in US life expectancy last year was the smallest among six drops between 1960 and 2015. What’s more, life expectancy dropped in 2015 by only a little over a month. That’s half the size of the next smallest drop and two-thirds the size of the average among those six drops. Compare that to the standard deviation in year-over-year change in life expectancy, which is nearly three months. In terms of percent change, 2015 life expectancy dropped by 0.15%… but the standard deviation of year-over-year…
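Hanowell’s comparison is between one year’s drop and the typical year-to-year wobble. A sketch of that calculation (the series below is a made-up placeholder, not actual CDC life-expectancy data):

```python
# Compare the latest one-year drop in life expectancy to the standard
# deviation of year-over-year changes. The series is hypothetical.
import numpy as np

life_exp = np.array([78.7, 78.8, 78.8, 79.0, 78.9, 78.8])  # years (made up)
changes = np.diff(life_exp)            # year-over-year change, in years

print(f"latest drop: {-changes[-1] * 12:.1f} months")
print(f"SD of year-over-year change: {changes.std(ddof=1) * 12:.1f} months")
```

When the latest drop is well inside the typical year-over-year variation, calling it a “trend” is premature, which is Hanowell’s point.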
Original Post: “Calm Down. American Life Expectancy Isn’t Falling.”

So little information to evaluate effects of dietary choices

Paul Alper points to this excellent news article by Aaron Carroll, who tells us how little information is available in studies of diet and public health. Here’s Carroll: Just a few weeks ago, a study was published in the Journal of Nutrition that many reports in the news media said proved that honey was no better than sugar as a sweetener, and that high-fructose corn syrup was no worse. . . . Not so fast. A more careful reading of this research would note its methods. The study involved only 55 people, and they were followed for only two weeks on each of the three sweeteners. . . . The truth is that research like this is the norm, not the exception. . . . Readers often ask me how myths about nutrition get perpetuated and why it’s not possible…
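Carroll’s “only 55 people” point can be made quantitative with a quick power simulation. A sketch, assuming a small dietary effect of 0.2 standard deviations and a simple two-group comparison rather than the study’s actual crossover design:

```python
# Rough power of a two-group comparison with ~55 people total, assuming a
# true effect of 0.2 SD. Effect size and design are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
n_per_group, effect, n_sims = 27, 0.2, 5_000

hits = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(effect, 1.0, n_per_group)
    if ttest_ind(control, treated).pvalue < 0.05:
        hits += 1

print("power:", hits / n_sims)   # around 0.1 under these assumptions
```

Roughly 10% power: at that size, a “no difference” finding carries very little information.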
Original Post: So little information to evaluate effects of dietary choices

Interesting epi paper using Stan

Jon Zelner writes: Just thought I’d send along this paper by Justin Lessler et al. Thought it was both clever & useful and a nice ad for using Stan for epidemiological work. Basically, this paper estimates the true prevalence and case fatality ratio of MERS-CoV [Middle East Respiratory Syndrome Coronavirus Infection] using data collected via a…
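For the flavor without the paper: the simplest possible Bayesian version of a case fatality ratio estimate is a conjugate beta-binomial calculation, sketched below with made-up counts. This is emphatically not the Lessler et al. model; as I read the excerpt, their point is to estimate the true prevalence rather than take surveillance counts at face value, which is what their fuller Stan model is for.

```python
# Naive CFR estimate from raw surveillance counts: Beta(1,1) prior on the
# case fatality ratio, binomial likelihood. Counts are hypothetical, and
# this takes reported cases at face value (unlike the paper's model).
from scipy.stats import beta

deaths, cases = 40, 130                      # hypothetical counts
posterior = beta(1 + deaths, 1 + cases - deaths)

lo, hi = posterior.ppf([0.025, 0.975])
print(f"posterior mean CFR: {posterior.mean():.3f}")
print(f"95% interval: ({lo:.3f}, {hi:.3f})")
```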
Original Post: Interesting epi paper using Stan

“Breakfast skipping, extreme commutes, and the sex composition at birth”

Bhash Mazumder sends along a paper (coauthored with Zachary Seeskin) which begins: A growing body of literature has shown that environmental exposures in the period around conception can affect the sex ratio at birth through selective attrition that favors the survival of female conceptuses. Glucose availability is considered a key indicator of the fetal environment, and its absence as a…
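The basic statistical question behind sex-ratio studies like this one is whether the share of male births in an exposed group departs from the usual baseline of roughly 0.512. A minimal sketch of that comparison (counts are hypothetical, and this is not the Mazumder and Seeskin analysis):

```python
# Test whether an observed male-birth share differs from the ~0.512 baseline.
# Requires scipy >= 1.7; counts are hypothetical.
from scipy.stats import binomtest

male, total = 4_900, 10_000              # hypothetical exposed group
result = binomtest(male, total, p=0.512)

print(f"observed male share: {male / total:.3f}")
print(f"two-sided p-value vs 0.512: {result.pvalue:.4f}")
```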
Original Post: “Breakfast skipping, extreme commutes, and the sex composition at birth”