Using Mister P to get population estimates from respondent driven sampling

From one of our exams: A researcher at Columbia University’s School of Social Work wanted to estimate the prevalence of drug abuse problems among American Indians (Native Americans) living in New York City. From the Census, it was estimated that about 30,000 Indians live in the city, and the researcher had a budget to interview 400. She did not have a list of Indians in the city, and she obtained her sample as follows. She started with a list of 300 members of a local American Indian community organization, and took a random sample of 100 from this list. She interviewed these 100 persons and asked each of these to give her the names of other Indians in the city whom they knew. She asked each respondent to characterize him/herself and also the people on the list on a 1-10…

It’s hard to know what to say about an observational comparison that doesn’t control for key differences between treatment and control groups, chili pepper edition

Posted by Andrew on 3 August 2017, 9:55 am. Jonathan Falk points to this article and writes: Thoughts? I would have liked to have seen the data matched on age, rather than simply using age in a Cox regression, since I suspect that’s what’s really going on here. The non-chili eaters were much older, and I suspect that the failure to interact age, or at least specify the age effect more finely, has a gigantic impact here, especially since the raw inclusion of age raised the hazard ratio dramatically. Having controlled for Blood, Sugar, and Sex, the residual must be Magik. My reply: Yes, also they need to interact age x sex, and smoking is another…

“Explaining recent mortality trends among younger and middle-aged White Americans”

Posted by Andrew on 1 August 2017, 9:30 pm. Kevin Lewis sends along this paper by Ryan Masters, Andrea Tilstra, and Daniel Simon, who write: Recent research has suggested that increases in mortality among middle-aged US Whites are being driven by suicides and poisonings from alcohol and drug use. Increases in these ‘despair’ deaths have been argued to reflect a cohort-based epidemic of pain and distress among middle-aged US Whites. We examine trends in all-cause and cause-specific mortality rates among younger and middle-aged US White men and women between 1980 and 2014, using official US mortality data. . . . Trends in middle-aged US White mortality vary considerably by cause and gender. The relative contribution to overall mortality rates from drug-related deaths has increased dramatically since the early 1990s, but the…

How to interpret “p = .06” in situations where you really really want the treatment to work?

We’ve spent a lot of time during the past few years discussing the difficulty of interpreting “p less than .05” results from noisy studies. Standard practice is to just take the point estimate and confidence interval, but this is in general wrong in that it overestimates effect size (type M error) and can get the direction wrong (type S error). So what about noisy studies where the p-value is more than .05, that is, where the confidence interval includes zero? Standard practice here is to just declare this as a null effect, but of course that’s not right either, as the estimate of 0 is surely a negatively biased estimate of the magnitude of the effect. When the confidence interval includes 0, we can typically say that the data are consistent with no effect. But that doesn’t mean the true…
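
The type M / type S framing can be seen in a short simulation: condition a noisy study's estimate on statistical significance and look at what survives the filter. The effect size, standard error, and replication count below are arbitrary illustrative choices, not numbers from any study discussed here.

```python
import random
import statistics

random.seed(42)
true_effect = 1.0    # small true effect, in arbitrary units
se = 5.0             # a noisy study: the standard error dwarfs the effect

# draw 100,000 replicated estimates and keep only the "significant" ones
sig = [est for est in (random.gauss(true_effect, se) for _ in range(100_000))
       if abs(est / se) > 1.96]

# Type M (magnitude) error: conditional on significance, how exaggerated
# is the estimate relative to the true effect?
exaggeration = statistics.fmean(abs(e) for e in sig) / true_effect
# Type S (sign) error: conditional on significance, how often is the
# estimate's sign wrong?
sign_error = sum(e < 0 for e in sig) / len(sig)

print(f"exaggeration ratio: {exaggeration:.1f}")
print(f"Pr(wrong sign | significant): {sign_error:.2f}")
```

With a true effect one-fifth the standard error, the significant estimates exaggerate the truth by roughly a factor of ten and get the sign wrong about a quarter of the time, which is why a "significant" result from a noisy study is not by itself reassuring.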

Riddle me this

Posted by Andrew on 8 May 2017, 9:55 am. [cat picture] Paul Alper writes: From Susan Perry’s article based on Paul Hacker’s BMJ article: https://www.minnpost.com/second-opinion/2017/04/investigative-report-uncovers-coca-colas-covert-attempts-influence-journalist In 2015, the University of Colorado had to shut down its nonprofit Global Energy Balance Network after the organization was exposed as being essentially a “scientific” front for its funder, Coca-Cola. The University of Colorado School of Medicine returned the $1 million that the beverage company had provided to start the organization. Marion Nestle, professor of nutrition and public health at New York University, told [Paul] Hacker that although the reporters at the obesity conferences were misled, they shouldn’t have been so gullible. They should have known, she said, that the industry was behind the events, given who was speaking at them. Hacker’s BMJ article is at BMJ 2017; 357 doi: https://doi.org/10.1136/bmj.j1638 (Published 05…

The statistical crisis in science: How is it relevant to clinical neuropsychology?

Posted by Andrew on 3 May 2017, 9:52 am. [cat picture] Hilde Geurts and I write: There is currently increased attention to the statistical (and replication) crisis in science. Biomedicine and social psychology have been at the heart of this crisis, but similar problems are evident in a wide range of fields. We discuss three examples of replication challenges from the field of social psychology and some proposed solutions, and then consider the applicability of these ideas to clinical neuropsychology. In addition to procedural developments such as preregistration and open data and criticism, we recommend that data be collected and analyzed with more recognition that each new study is a part of a learning process. The goal of improving neuropsychological assessment, care, and cure is too important to not…

Fragility index is too fragile

Posted by Andrew on 3 January 2017, 9:53 am. Simon Gates writes: Here is an issue that has had a lot of publicity and Twittering in the clinical trials world recently. Many people are promoting the use of the “fragility index” (paper attached) to help interpretation of “significant” results from clinical trials. The idea is that it gives a measure of how robust the results are – how many patients would have to have had a different outcome to render the result “non-significant”. Lots of well-known people seem to be recommending this at the moment; there’s a website too (http://fragilityindex.com/ , which calculates p-values to 15 decimal places!). I’m less enthusiastic. It’s good that problems of “statistical significance” are being more widely appreciated, but the fragility index is still all about “significance”, and we really need…
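
For readers who haven't met it, the fragility index can be computed in a few lines: keep flipping outcomes (non-event to event) in the arm with fewer events until the result is no longer "significant." This is a minimal sketch with made-up trial counts and a hand-rolled two-sided Fisher exact test; it follows the general recipe Gates describes, not the exact implementation of the paper or website he mentions.

```python
from math import comb

def fisher_p(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more likely than the observed table."""
    n = a + b + c + d
    r1, c1 = a + b, a + c                      # row-1 and column-1 totals
    def prob(x):
        return comb(c1, x) * comb(n - c1, r1 - x) / comb(n, r1)
    p_obs = prob(a)
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """How many patients in the smaller-event arm would need their
    outcome flipped before the p-value rises to alpha or above."""
    flips = 0
    while fisher_p(events_a, n_a - events_a,
                   events_b, n_b - events_b) < alpha:
        if events_a <= events_b:               # flip in the arm with fewer events
            events_a += 1
        else:
            events_b += 1
        flips += 1
    return flips

# hypothetical trial: 10/100 vs 25/100 events, comfortably "significant"
print(fragility_index(10, 100, 25, 100))
```

For this hypothetical trial a handful of changed outcomes is enough to cross back over the .05 line, which is exactly the kind of statement Gates objects to: it quantifies fragility only with respect to the significance threshold itself.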

Migration explaining observed changes in mortality rate in different geographic areas?

We know that the much-discussed increase in mortality among middle-aged U.S. whites is mostly happening among women in the south. In response to some of that discussion, Tim Worstall wrote: I [Worstall] have a speculative answer. It is absolutely speculative: but it is also checkable to some extent. Really, I’m channelling my usual critique of Michael Marmot’s work on health inequality in the UK. Death stats don’t measure lifespans of people from places, they measure lifespans of people who die in places. So, if there’s migration, and selectivity in who migrates where, then it’s not the inequality between places that might explain differential lifespans but that selection in migration. Similarly, here in the American case. We know that Appalachia, the Ozarks and the smaller towns of the Midwest are emptying out. But it’s those who graduate high school,…

You’ll have to figure this one out for yourselves.

So. The other day the following email comes in, subject line “Grabbing headlines using poor statistical methods,” from Clifford Anderson-Bergman: Here’s another to file under “How to get mainstream publication by butchering your statistics”. The paper: Comparison of Hospital Mortality and Readmission Rates for Medicare Patients Treated by Male vs Female Physicians. Journal: JAMA. Featured in: NPR, Fox News, Washington Post, Business Insider (I’m sure more, these are just the first few that show up in my Google News feed). Estimated differences: Adjusted mortality: 11.07% vs 11.49%; adjusted risk difference, –0.43%; 95% CI, –0.57% to –0.28%; P < .001; number needed to treat to prevent 1 death, 233. Adjusted readmissions: 15.02% vs 15.57%; adjusted risk difference, –0.55%; 95% CI, –0.71% to –0.39%; P < .001; number needed to treat to prevent 1 readmission, 182. Statistical Folly: “We used a multivariable linear probability model (ie, fitting…
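
The number-needed-to-treat figures in the quoted abstract are just reciprocals of the adjusted risk differences, rounded up to a whole patient. A quick check of the arithmetic:

```python
import math

def nnt(risk_difference):
    """Number needed to treat: the reciprocal of the absolute risk
    difference, conventionally rounded up to a whole patient."""
    return math.ceil(1 / abs(risk_difference))

print(nnt(-0.0043))   # mortality: adjusted risk difference of -0.43%
print(nnt(-0.0055))   # readmission: adjusted risk difference of -0.55%
```

1/0.0043 ≈ 232.6, which rounds up to the quoted 233; 1/0.0055 ≈ 181.8, which rounds up to the quoted 182.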

“Calm Down. American Life Expectancy Isn’t Falling.”

Ben Hanowell writes: In the middle of December 2016 there were a lot of headlines about the drop in US life expectancy from 2014 to 2015. Most of these articles painted a grim picture of US population health. Many reporters wrote about a “trend” of decreasing life expectancy in America. The trouble is that the drop in US life expectancy last year was the smallest among six drops between 1960 and 2015. What’s more, life expectancy dropped in 2015 by only a little over a month. That’s half the size of the next smallest drop and two-thirds the size of the average among those six drops. Compare that to the standard deviation in year-over-year change in life expectancy, which is nearly three months. In terms of percent change, 2015 life expectancy dropped by 1.5%… but the standard deviation of year-over-year…
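
Hanowell's comparison, a one-month drop set against a "nearly three months" standard deviation of year-over-year changes, can be sketched numerically. The series below is made up for illustration; only its rough spread is chosen to resemble the figures he quotes, and it is not the official NCHS life-expectancy series.

```python
import statistics

# Illustrative, made-up year-over-year changes in life expectancy (in years),
# stand-ins for the real series, with a spread near "nearly three months".
yoy_changes = [0.2, 0.3, -0.1, 0.4, 0.1, -0.2, 0.3, 0.0, 0.2, -0.1]
drop_2015 = -0.1  # "a little over a month", expressed in years

sd = statistics.stdev(yoy_changes)
print(f"SD of year-over-year change: {sd:.2f} years")
print(f"2015 drop, in SD units: {abs(drop_2015) / sd:.2f}")
```

On these illustrative numbers the 2015 drop lands at about half a standard deviation: well within ordinary year-to-year noise, which is Hanowell's point that one small drop does not make a trend.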