Mark Palko writes:

> The following needs to be an immutable law of journalism: when someone with no track record comes into a field claiming to be able to do a job many times better for a fraction of the cost, the burden of proof needs to shift quickly and decisively onto the one making the claim. The reporter simply has to assume the claim is false until substantial evidence is presented to the contrary.

Yup. This is related to advice I give to young researchers giving presentations or writing research papers:

1. Describe the problem you have that existing methods can’t solve.
2. Show how your new method solves the problem.
3. Explain how your method works.
4. Explain why, if your idea is so great, the people who came before you weren’t already doing it.…



## Alzheimer’s Mouse research on the Orient Express

Paul Alper sends along an article from Joy Victory at Health News Review, shooting down a bunch of newspaper headlines (“Extra virgin olive oil staves off Alzheimer’s, preserves memory, new study shows” from USA Today, the only marginally better “Can extra-virgin olive oil preserve memory and prevent Alzheimer’s?” from the Atlanta Journal-Constitution, and the better but still misleading “Temple finds olive oil is good for the brain — in mice” from the Philadelphia Inquirer) which were based on a university’s misleading press release. That’s a story we’ve heard before. The clickbait also made its way into traditionally respected outlets Newsweek and Voice of America. And NPR, kinda.

Here’s Joy Victory:

> It’s pretty great clickbait—a common, devastating disease cured by something many of us already have in our pantries! . . . To deconstruct how this went off the rails, let’s…


## Forking paths plus lack of theory = No reason to believe any of this.

[image of a cat with a fork] Kevin Lewis points us to this paper which begins:

> We use a regression discontinuity design to estimate the causal effect of election to political office on natural lifespan. In contrast to previous findings of shortened lifespan among US presidents and other heads of state, we find that US governors and other political office holders live over one year longer than losers of close elections. The positive effects of election appear in the mid-1800s, and grow notably larger when we restrict the sample to later years. We also analyze heterogeneity in exposure to stress, the proposed mechanism in the previous literature. We find no evidence of a role for stress in explaining differences in life expectancy. Those who win by large margins have shorter life expectancy than either close winners or losers, a fact…
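The regression discontinuity logic the abstract describes can be sketched on simulated data: fit separate lines on either side of the zero vote-margin cutoff and take the jump at zero as the treatment-effect estimate. Everything below (effect size, bandwidth, noise level) is hypothetical, a sketch of the design rather than the paper’s actual analysis:

```python
import numpy as np

# Hedged sketch of a regression discontinuity: close winners vs. close losers,
# with separate local linear fits on each side of the 0% vote-margin cutoff.
# All data are simulated; tau_true is a made-up +1 year effect of winning.
rng = np.random.default_rng(3)
n = 5000
margin = rng.uniform(-0.2, 0.2, n)           # vote margin; winner if > 0
tau_true = 1.0
lifespan = 72 + 5 * margin + tau_true * (margin > 0) + rng.normal(0, 3, n)

h = 0.1                                      # arbitrary bandwidth around the cutoff
left = (margin < 0) & (margin > -h)
right = (margin > 0) & (margin < h)
b_left = np.polyfit(margin[left], lifespan[left], 1)
b_right = np.polyfit(margin[right], lifespan[right], 1)
tau_hat = np.polyval(b_right, 0.0) - np.polyval(b_left, 0.0)  # jump at the cutoff
print(tau_hat)  # should be near the simulated effect of 1.0
```

The forking-paths critique is precisely that many such choices (bandwidth, sample window, functional form) were available to the original analysts.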


## Stupid-ass statisticians don’t know what a goddam confidence interval is

Posted by Andrew on 28 December 2017, 9:12 am

From page 20 in a well-known applied statistics textbook:

> The hypothesis of whether a parameter is positive is directly assessed via its confidence interval. If both ends of the 95% confidence interval exceed zero, then we are at least 95% sure (under the assumptions of the model) that the parameter is positive.

Huh? Who says this sort of thing? Only a complete fool. Or, to be charitable, maybe someone who didn’t carefully think through everything he was writing and let some sloppy thinking slip in. Just to explain in detail, the above quotation has two errors. First, under the usual assumptions of the classical model, you can’t make any probability statement about the parameter value; all you can do is make an…
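The distinction can be checked by simulation: under the classical model, the 95% describes the procedure across repeated samples, not any single realized interval. A minimal sketch (the true parameter, sample size, and number of replications are all arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.3          # fixed true parameter; not a random quantity in the classical model
n, sims = 100, 10_000
covered = 0
for _ in range(sims):
    x = rng.normal(theta, 1.0, size=n)
    se = 1.0 / np.sqrt(n)                # known sigma = 1 for simplicity
    lo, hi = x.mean() - 1.96 * se, x.mean() + 1.96 * se
    covered += (lo <= theta <= hi)
print(covered / sims)  # close to 0.95: a property of the procedure, not of one interval
```

Any one realized interval either contains theta or it doesn’t; the classical framework licenses no probability statement about that particular interval.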


## Interactive visualizations of sampling and GP regression

You really don’t want to miss Chi Feng‘s absolutely wonderful interactive demos.

(1) Markov chain Monte Carlo sampling

I believe this is exactly what Andrew was asking for a few Stan meetings ago: This tool lets you explore a range of sampling algorithms including random-walk Metropolis, Hamiltonian Monte Carlo, and NUTS operating over a range of two-dimensional distributions (standard normal, banana, donut, multimodal, and one squiggly one). You can control both the settings of the algorithms and the settings of the visualizations. As you run it, it even collects the draws into a sample which it summarizes as marginal histograms.

Source code: The demo is implemented in Javascript, with the source code on Chi Feng’s GitHub organization.

Wish list:
- 3D (glasses or virtual reality headset)
- multiple chains in parallel
- scatterplot breadcrumbs
- Gibbs sampler…
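For readers who want the simplest of these algorithms in code, here is a minimal random-walk Metropolis sampler targeting a two-dimensional standard normal, the easiest of the demo’s target distributions. The step size and iteration count are arbitrary choices of ours, not taken from Chi Feng’s implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # log density of N(0, I) in 2-D, up to an additive constant
    return -0.5 * np.dot(x, x)

x = np.zeros(2)
draws = []
for _ in range(20_000):
    proposal = x + rng.normal(scale=0.8, size=2)  # symmetric Gaussian proposal
    # accept with probability min(1, target(proposal) / target(x))
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        x = proposal
    draws.append(x)
draws = np.array(draws)
print(draws.mean(axis=0), draws.std(axis=0))  # should be near [0, 0] and [1, 1]
```

Swapping in a banana- or donut-shaped `log_target` is exactly how the demo exposes the algorithm’s weaknesses.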


## Looking for data on speed and traffic accidents—and other examples of data that can be fit by nonlinear models

Posted by Andrew on 2 November 2017, 9:17 am

[cat picture] For the chapter in Regression and Other Stories that includes nonlinear regression, I’d like a couple homework problems where the kids have to construct and fit models to real data. So I need some examples. We already have the success of golf putts as a function of distance from the hole, and I’d like some others. One thing that came to mind today, because I happened to see a safety warning poster on the bus reminding people not to drive too fast, is data on speed and traffic accidents. But I’m interested in other examples too. Just about anything interesting with data on x and y where there’s no simple linear…
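As a sketch of what such a homework problem might look like, here is a made-up speed-and-accidents example where the accident rate follows a power law in speed; taking logs linearizes the model so it can be fit by ordinary least squares. The functional form and all the numbers are hypothetical, chosen only to illustrate the fitting step:

```python
import numpy as np

# Hypothetical nonlinear relation: accident rate y = a * speed^b,
# with multiplicative noise. Made-up parameters for illustration only.
rng = np.random.default_rng(42)
speed = np.linspace(30, 120, 40)
a_true, b_true = 0.001, 2.0
y = a_true * speed**b_true * np.exp(rng.normal(0, 0.1, size=speed.size))

# Taking logs linearizes the model: log y = log a + b * log speed
b_hat, log_a_hat = np.polyfit(np.log(speed), np.log(y), 1)
print(b_hat, np.exp(log_a_hat))  # should be near 2.0 and 0.001
```

With real data the exercise would also involve choosing among candidate nonlinear forms, which is where it gets interesting.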


## Advice for science writers!

I spoke today at a meeting of science journalists, in a session organized by Betsy Mason, also featuring Kristin Sainani, Christie Aschwanden, and Tom Siegfried. My talk was on statistical paradoxes of science and science journalism, and I mentioned the Ted Talk paradox, Who watches the watchmen, the Eureka bias, the “What does not kill my statistical significance makes it stronger” fallacy, the unbiasedness fallacy, selection bias in what gets reported, the Australia hypothesis, and how we can do better. Sainani gave some examples illustrating that journalists with no particular statistical or subject-matter expertise should be able to see through some of the claims made in published papers, where scientists misinterpret their own data or go far beyond what was implied by their data. Aschwanden and Siegfried talked about the confusions surrounding p-values and recommended that reporters pretty much forget…


## My favorite definition of statistical significance

Posted by Andrew on 28 October 2017, 1:08 pm

From my 2009 paper with Weakliem:

> Throughout, we use the term statistically significant in the conventional way, to mean that an estimate is at least two standard errors away from some “null hypothesis” or prespecified value that would indicate no effect present. An estimate is statistically insignificant if the observed value could reasonably be explained by simple chance variation, much in the way that a sequence of 20 coin tosses might happen to come up 8 heads and 12 tails; we would say that this result is not statistically significantly different from chance. More precisely, the observed proportion of heads is 40 percent but with a standard error of 11 percent—thus, the data are less than two standard errors away from the null hypothesis of 50…
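The coin-toss arithmetic can be verified directly: with 8 heads in 20 tosses the observed proportion is 0.40, the binomial standard error works out to about 11 percent, and the data sit less than one standard error from the null of 50 percent:

```python
import math

heads, n = 8, 20
p_hat = heads / n                         # observed proportion: 0.40
se = math.sqrt(p_hat * (1 - p_hat) / n)   # binomial standard error, about 0.11
z = abs(p_hat - 0.5) / se                 # distance from the null in standard errors
print(round(se, 2), round(z, 2))          # 0.11 0.91
```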


## The Real World Interactive Learning Tutorial

Alekh and I have been polishing the Real World Interactive Learning tutorial for ICML 2017 on Sunday. This tutorial should be of pretty wide interest. For data scientists, we are crossing a threshold into easy use of interactive learning, while for researchers interactive learning is plausibly the most important frontier of understanding. Great progress on both the theory and especially on practical systems has been made since an earlier NIPS 2013 tutorial. Please join us if you are interested.


## It’s hard to know what to say about an observational comparison that doesn’t control for key differences between treatment and control groups, chili pepper edition

Posted by Andrew on 3 August 2017, 9:55 am

Jonathan Falk points to this article and writes:

> Thoughts? I would have liked to have seen the data matched on age, rather than simply using age in a Cox regression, since I suspect that’s what’s really going on here. The non-chili eaters were much older, and I suspect that the failure to interact age, or at least specify the age effect more finely, has a gigantic impact here, especially since the raw inclusion of age raised the hazard ratio dramatically. Having controlled for Blood, Sugar, and Sex, the residual must be Magik.

My reply: Yes, also they need to interact age x sex, and smoking is another…
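The age-confounding worry can be illustrated with a toy simulation: if chili eating has no effect whatsoever on remaining lifespan but is more common among the young, a naive comparison makes chili look protective, while comparing within an age group removes the apparent benefit. All numbers below are invented for illustration and have nothing to do with the actual study:

```python
import numpy as np

# Toy confounding sketch: chili has ZERO effect on years of life remaining,
# but young people eat it more often, so naive comparison flatters chili.
rng = np.random.default_rng(7)
n = 50_000
age = rng.uniform(20, 80, n)
young = age < 50
p_chili = np.where(young, 0.8, 0.2)             # made-up eating rates by age group
chili = rng.uniform(size=n) < p_chili
years_left = np.maximum(85 - age + rng.normal(0, 5, n), 0)  # no chili term at all

naive_gap = years_left[chili].mean() - years_left[~chili].mean()
within_gap = years_left[chili & young].mean() - years_left[~chili & young].mean()
print(naive_gap)    # large apparent "benefit," driven entirely by age
print(within_gap)   # near zero once the comparison is within the young group
```

Matching or finer stratification on age, as Falk suggests, is the direct fix for this kind of artifact.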
