“The following needs to be an immutable law of journalism: when someone with no track record comes into a field claiming to be able to do a job many times better for a fraction of the cost, the burden of proof needs to shift quickly and decisively onto the one making the claim. The reporter simply has to assume the claim is false until substantial evidence is presented to the contrary.”

Mark Palko writes: The following needs to be an immutable law of journalism: when someone with no track record comes into a field claiming to be able to do a job many times better for a fraction of the cost, the burden of proof needs to shift quickly and decisively onto the one making the claim. The reporter simply has to assume the claim is false until substantial evidence is presented to the contrary. Yup. This is related to advice I give to young researchers giving presentations or writing research papers:
1. Describe the problem you have that existing methods can’t solve.
2. Show how your new method solves the problem.
3. Explain how your method works.
4. Explain why, if your idea is so great, all the people who came before you were not already doing it. …

StanCon 2018 Helsinki, 29-31 August 2018

[Photo (c) Visit Helsinki / Jussi Hellsten]

StanCon 2018 Asilomar was so much fun that we are organizing StanCon 2018 Helsinki, August 29-31, 2018, at Aalto University, Helsinki, Finland (location chosen using antithetic sampling). Full information is available at the StanCon 2018 Helsinki website.

Summary of the information
What: One day of tutorials and two days of talks, open discussions, and statistical modeling in beautiful Helsinki.
When: August 29-31, 2018
Where: Aalto University, Helsinki, Finland

Invited speakers
Richard McElreath, Max Planck Institute for Evolutionary Anthropology
Maggie Lieu, European Space Astronomy Centre
Sarah Heaps, Newcastle University
Daniel Simpson, University of Toronto

Call for contributed talks
StanCon’s version of conference proceedings is a collection of contributed talks based on interactive, self-contained notebooks (e.g., knitr, R Markdown, Jupyter, etc.). For example, you might demonstrate a novel modeling technique,…

Static sensitivity analysis: Computing robustness of Bayesian inferences to the choice of hyperparameters

Ryan Giordano wrote: Last year at StanCon we talked about how you can differentiate under the integral to automatically calculate quantitative hyperparameter robustness for Bayesian posteriors. Since then, I’ve packaged the idea up into an R library that plays nice with Stan. You can install it from this github repo. I’m sure you’ll be pretty busy at StanCon, but I’ll be there presenting a poster about exactly this work, and if you have a moment to chat I’d be very interested to hear what you think! I’ve started applying this package to some of the Stan examples, and it’s already uncovered some (in my opinion) serious problems, like this one from chapter 13.5 of the ARM book. It’s easy to accidentally make a non-robust model, and I think a tool like this could be very useful to Stan users! As…
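To give a sense of what “differentiating under the integral” buys you here: the local sensitivity of a posterior expectation E[g(theta) | y, eps] to a hyperparameter eps can be estimated from a single set of posterior draws as the posterior covariance between g(theta) and the derivative of the log prior with respect to eps, with no refitting. Below is a minimal numpy sketch of that identity with a conjugate-normal toy check; the function name and the toy example are mine, not the interface of Giordano’s package.

import numpy as np

def local_sensitivity(g_draws, dlogprior_draws):
    # Estimate d/d eps E[g(theta) | y, eps] from posterior draws via the identity
    #   d/d eps E[g] = Cov_post( g(theta), d log p(theta | eps) / d eps ),
    # i.e. differentiating under the integral, so no refitting is needed.
    g = np.asarray(g_draws, dtype=float)
    s = np.asarray(dlogprior_draws, dtype=float)
    return np.cov(g, s, ddof=1)[0, 1]

# Toy check where the answer is known exactly: y_i ~ N(theta, 1), theta ~ N(mu0, tau0^2).
# The sensitivity of the posterior mean of theta to the prior mean mu0 is Var_post(theta) / tau0^2.
rng = np.random.default_rng(0)
mu0, tau0 = 0.0, 2.0
y = rng.normal(1.0, 1.0, size=20)

post_var = 1.0 / (1.0 / tau0**2 + len(y))
post_mean = post_var * (mu0 / tau0**2 + y.sum())
draws = rng.normal(post_mean, np.sqrt(post_var), size=200_000)

print(local_sensitivity(draws, (draws - mu0) / tau0**2))  # Monte Carlo estimate
print(post_var / tau0**2)                                 # exact value

In practice the draws would come from Stan and g(theta) would be whatever posterior summary you report; a large covariance flags a model whose conclusions move a lot when a hyperparameter moves a little.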

Statistical behavior at the end of the world: the effect of the publication crisis on U.S. research productivity

Under the heading, “I’m suspicious,” Kevin Lewis points us to this article with abstract: We exploit the timing of the Cuban Missile Crisis and the geographical variation in mortality risks individuals faced across states to analyse reproduction decisions during the crisis. The results of a difference-in-differences approach show evidence that fertility decreased in states that are farther from Cuba and increased in states with more military installations. Our findings suggest that individuals are more likely to engage in reproductive activities when facing high mortality risks, but reduce fertility when facing a high probability of enduring the aftermath of a catastrophe. It’s the usual story: forking paths (nothing in the main effect, followed by a selection among the many many possible two-way and three-way interactions that could be studied), followed by convoluted storytelling (“individuals indulge in reproductive activities when facing high…

Hey, here’s a new reason for a journal to reject a paper: it’s “annoying” that it’s already on a preprint server

Alex Gamma writes: I’m interested in publishing in journal X. So I inquire about X’s preprint policy. X’s editor informs me that [Journal X] does not prohibit placing submitted manuscripts on preprint servers. Some reviewers may notice the server version of the article, however, and they may find the lack of anonymity so annoying that it affects their recommendations about the paper. This is interesting in part because it highlights the different roles of scientific journals. Traditionally, a journal is a way to “publish” a paper, that is, to print the article so that other people can read it. In this case, it’s already on the preprint server, so the main…

Dear Thomas Frank

It’s a funny thing: academics are all easily reachable by email, but non-academics can be harder to track down. Someone pointed me today to a newspaper article by political analyst Thomas Frank that briefly mentioned my work. I had a question for Frank, but the only correspondence I had with him was from ten years ago, and my email bounced. So I’ll send it here: Dear Thomas: Someone pointed out a newspaper article in which you linked to something I’d written. Here’s what you wrote: Krugman said that the shift of working-class people to the Republican party was a myth and that it was not happening outside the south. . . . Here are some examples: a blog post from 2007; a column in the Times in 2008 (“Nor…

The puzzle: Why do scientists typically respond to legitimate scientific criticism in an angry, defensive, closed, non-scientific way? The answer: We’re trained to do this during the process of responding to peer review.

[image of Cantor’s corner] Here’s the “puzzle,” as we say in social science. Scientific research is all about discovery of the unexpected: to do research, you need to be open to new possibilities, to design experiments to force anomalies, and to learn from them. The sweet spot for any researcher is at Cantor’s corner. (See here for further explanation of the Cantor connection.) Buuuut . . . researchers are also notorious for being stubborn. In particular, here’s a pattern we see a lot:
– Research team publishes surprising result A based on some “p less than .05” empirical results.
– This publication gets positive attention and the researchers and others in their subfield follow up with open-ended “conceptual replications”: related studies that also attain the “p less than .05” threshold.
– Given the surprising nature of result A, it’s unsurprising that other researchers…

The retraction paradox: Once you retract, you implicitly have to defend all the many things you haven’t yet retracted

Mark Palko points to this news article by Beth Skwarecki on Goop, “the Gwyneth Paltrow pseudoscience empire.” Here’s Skwarecki: When Goop publishes something weird or, worse, harmful, I often find myself wondering what are they thinking? Recently, on Jimmy Kimmel, Gwyneth laughed at some of the newsletter’s weirder recommendations and said “I don’t know what the fuck we talk about.” . . . I [Skwarecki] . . . end up speaking with editorial director Nandita Khanna. “You publish a lot of things that are outside of the mainstream. What are your criteria for determining that something is safe and ethical to recommend?” Khanna starts by pointing out that they include a disclaimer at the bottom of health articles. This is true. It reads: The views expressed in this article intend to highlight alternative studies and induce conversation. They are the…

Why are these explanations so popular?

David Weakliem writes: According to exit polls, Donald Trump got 67% of the vote among whites without a college degree in 2016, which may be the best-ever performance by a Republican (Reagan got 66% of that group in 1984). Weakliem first rejects one possibility that’s been going around: One popular idea is that he cared about them, or at least gave them the impression that he cared. The popularity of this account has puzzled me, because it’s not even superficially plausible. Every other presidential candidate I can remember tried to show empathy by talking about people they had met on the campaign trail, or tough times they had encountered in their past, or how their parents taught them to treat everyone equally. Trump didn’t do any of that—he boasted about how smart and how rich he was. And, indeed, Weakliem…

A Python program for multivariate missing-data imputation that works on large datasets!?

Alex Stenlake and Ranjit Lall write about a program they wrote for imputing missing data: Strategies for analyzing missing data have become increasingly sophisticated in recent years, most notably with the growing popularity of the best-practice technique of multiple imputation. However, existing algorithms for implementing multiple imputation suffer from limited computational efficiency, scalability, and capacity to exploit complex interactions among large numbers of variables. These shortcomings render them poorly suited to the emerging era of “Big Data” in the social and natural sciences. Drawing on new advances in machine learning, we have developed an easy-to-use Python program – MIDAS (Multiple Imputation with Denoising Autoencoders) – that leverages principles of Bayesian nonparametrics to deliver a fast, scalable, and high-performance implementation of multiple imputation. MIDAS employs a class of unsupervised neural networks known as denoising autoencoders, which are capable of producing complex,…
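To make the denoising-autoencoder idea concrete, here is a rough, self-contained sketch in PyTorch. It is not the MIDAS code or API; the network size, the function names, and the crude corruption-at-prediction-time trick for producing several completed datasets are placeholders of mine. It just illustrates the general pattern the abstract describes: randomly corrupt observed entries, train the network to reconstruct them, then read off imputations for the missing cells from the network’s output.

import torch
import torch.nn as nn

def fit_dae_imputer(X, mask, hidden=32, epochs=500, corrupt_p=0.5, lr=1e-2):
    # X: (n, d) float tensor with missing entries set to 0.
    # mask: (n, d) float tensor, 1 where observed, 0 where missing.
    d = X.shape[1]
    net = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, d))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        # Denoising step: randomly drop observed entries so the net must
        # predict them from the rest of the row.
        drop = (torch.rand_like(X) > corrupt_p).float()
        recon = net(X * mask * drop)
        loss = (((recon - X) ** 2) * mask).sum() / mask.sum()  # loss on observed cells only
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net

def impute(net, X, mask, m=5, corrupt_p=0.5):
    # Return m completed datasets; re-corrupting at prediction time supplies the
    # between-imputation variability (a crude stand-in for proper multiple imputation).
    completed = []
    with torch.no_grad():
        for _ in range(m):
            drop = (torch.rand_like(X) > corrupt_p).float()
            recon = net(X * mask * drop)
            completed.append(X * mask + recon * (1 - mask))
    return completed

# Usage on synthetic data with roughly 30% of entries missing at random.
torch.manual_seed(0)
full = torch.randn(200, 5) @ torch.randn(5, 5)   # correlated columns
mask = (torch.rand(200, 5) > 0.3).float()
X = full * mask
datasets = impute(fit_dae_imputer(X, mask), X, mask)
print(len(datasets), datasets[0].shape)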