Hey, here’s a new reason for a journal to reject a paper: it’s “annoying” that it’s already on a preprint server

Posted by Andrew on 15 January 2018, 9:55 am
Alex Gamma writes: I’m interested in publishing in journal X. So I inquire about X’s preprint policy. X’s editor informs me that [Journal X] does not prohibit placing submitted manuscripts on preprint servers. Some reviewers may notice the server version of the article, however, and they may find the lack of anonymity so annoying that it affects their recommendations about the paper. This is interesting in part because it highlights the different roles of scientific journals. Traditionally, a journal is a way to “publish” a paper, that is, to print the article so that other people can read it. In this case, it’s already on the preprint server, so the main…

The puzzle: Why do scientists typically respond to legitimate scientific criticism in an angry, defensive, closed, non-scientific way? The answer: We’re trained to do this during the process of responding to peer review.

[image of Cantor’s corner] Here’s the “puzzle,” as we say in social science. Scientific research is all about discovery of the unexpected: to do research, you need to be open to new possibilities, to design experiments to force anomalies, and to learn from them. The sweet spot for any researcher is at Cantor’s corner. (See here for further explanation of the Cantor connection.) Buuuut . . . researchers are also notorious for being stubborn. In particular, here’s a pattern we see a lot:
– Research team publishes surprising result A based on some “p less than .05” empirical results.
– This publication gets positive attention and the researchers and others in their subfield follow up with open-ended “conceptual replications”: related studies that also attain the “p less than .05” threshold.
– Given the surprising nature of result A, it’s unsurprising that other researchers…

Why are these explanations so popular?

David Weakliem writes: According to exit polls, Donald Trump got 67% of the vote among whites without a college degree in 2016, which may be the best-ever performance by a Republican (Reagan got 66% of that group in 1984). Weakliem first rejects one possibility that’s been going around: One popular idea is that he cared about them, or at least gave them the impression that he cared. The popularity of this account has puzzled me, because it’s not even superficially plausible. Every other presidential candidate I can remember tried to show empathy by talking about people they had met on the campaign trail, or tough times they had encountered in their past, or how their parents taught them to treat everyone equally. Trump didn’t do any of that—he boasted about how smart and how rich he was. And, indeed, Weakliem…

“However noble the goal, research findings should be reported accurately. Distortion of results often occurs not in the data presented but . . . in the abstract, discussion, secondary literature and press releases. Such distortion can lead to unsupported beliefs about what works for obesity treatment and prevention. Such unsupported beliefs may in turn adversely affect future research efforts and the decisions of lawmakers, clinicians and public health leaders.”

David Allison points us to this article by Bryan McComb, Alexis Frazier-Wood, John Dawson, and himself, “Drawing conclusions from within-group comparisons and selected subsets of data leads to unsubstantiated conclusions.” It’s a letter to the editor of the Australian and New Zealand Journal of Public Health, and it begins: [In the paper, “School-based systems change for obesity prevention in adolescents: Outcomes of the Australian Capital Territory ‘It’s Your Move!’”] Malakellis et al. conducted an ambitious quasi-experimental evaluation of “multiple initiatives at [the] individual, community, and school policy level to support healthier nutrition and physical activity” among children.1 In the Abstract they concluded, “There was some evidence of effectiveness of the systems approach to preventing obesity among adolescents” and cited implications for public health as follows: “These findings demonstrate that the use of systems methods can be effective on a small…

How does probabilistic computation differ in physics and statistics?

[image of Schrodinger’s cat, of course] Stan collaborator Michael Betancourt wrote an article, “The Convergence of Markov chain Monte Carlo Methods: From the Metropolis method to Hamiltonian Monte Carlo,” discussing how various ideas of computational probability moved from physics to statistics. Three things I wanted to add to Betancourt’s story: 1. My paper with Rubin on R-hat, that measure of mixing for iterative simulation, came in part from my reading of old papers in the computational physics literature, in particular Fosdick (1959), which proposed a multiple-chain approach to monitoring convergence. What we added in our 1992 paper was the within-chain comparison: instead of simply comparing multiple chains to each other, we compared their variance to the within-chain variance. This enabled the diagnostic to be much more automatic. 2. Related to point 1 above: It’s my impression that computational physics is…
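To make the within/between comparison concrete, here is a minimal sketch of the basic (non-split) 1992-style R-hat in plain Python with NumPy; the function name and the toy chains are invented for illustration, and it omits the split-chain and rank-normalization refinements recommended in current practice.

```python
import numpy as np

def r_hat(chains):
    """Basic (non-split) Gelman-Rubin R-hat for one scalar quantity.

    chains: array of shape (m, n) -- m chains with n draws each.
    Compares the between-chain variance to the mean within-chain
    variance; values near 1 suggest the chains have mixed.
    """
    chains = np.asarray(chains, dtype=float)
    n = chains.shape[1]
    within = chains.var(axis=1, ddof=1).mean()        # W: mean within-chain variance
    between = n * chains.mean(axis=1).var(ddof=1)     # B: scaled variance of the chain means
    var_pooled = (n - 1) / n * within + between / n   # pooled estimate of the target variance
    return np.sqrt(var_pooled / within)

# Toy illustration: well-mixed chains give R-hat near 1;
# chains stuck in different places give R-hat well above 1.
rng = np.random.default_rng(1)
mixed = rng.normal(size=(4, 1000))
stuck = np.stack([rng.normal(mu, 1.0, 1000) for mu in (0, 0, 3, 3)])
print(round(r_hat(mixed), 3), round(r_hat(stuck), 3))
```

On the toy data, the stuck chains have large between-chain variance relative to within-chain variance, so R-hat comes out well above 1, which is the signal that the simulations have not mixed.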

How is science like the military? They are politically extreme yet vital to the nation

I was thinking recently about two subcultures in the United States, public or quasi-public institutions that are central to our country’s power, and which politically and socially are distant both from each other and from much of the mainstream of American society. The two institutions I’m thinking of are science and the military, both of which America excels at. We spend the most on science and do the most science in the world; we’ve developed transistors and flying cars and Stan and all sorts of other technologies that derive from the advanced science that we teach and research in the world’s best universities. As for the military, we spend more than the next umpteen countries combined, and our army, navy, and air force are dominant everywhere but on the streets of Mogadishu. Neither institution is perfect—you don’t need me to…

Now, Andy did you hear about this one?

We drank a toast to innocence, we drank a toast to now. We tried to reach beyond the emptiness but neither one knew how.
– Kiki and Herb
Well I hope you all ended your 2017 with a bang. Mine went out on a long-haul flight crying so hard at a French AIDS drama that the flight attendant delivering my meal had to ask if I was ok. (Gay culture is keeping a running ranking of French AIDS dramas, so I can tell you that BPM was my second favourite.) And I hope you spent your New Year’s Day well. Mine went on jet lag and watching I, Tonya in the cinema. (Gay culture is a lot to do with Tonya Harding especially after Sufjan Stevens chased his songs in Call Me By Your Name with the same song…

Can’t keep up with the flood of gobbledygook

Posted by Andrew on 23 December 2017, 5:02 pm
Jonathan Falk points me to a paper published in one of the tabloids; he had skepticism about its broad claims. I took a look at the paper, noticed a few goofy things about it (for example, “Our data also indicate a shift toward more complex societies over time in a manner that lends support to the idea of a driving force behind the evolution of increasing complexity”), and wrote back to him: “I’m too exhausted to even bother mocking this on the blog.” Falk replied: Well, you’d need about 30 co-bloggers to even up with the number of authors. Actually 53, but who’s counting? The funny thing is, it’s not like this paper is so horrible. It’s a million times better than the ovulation-and-voting…

“The Billy Beane of murder”?

Posted by Andrew on 18 December 2017, 9:47 am
John Hall points to this news article in Businessweek by Robert Kolker, “Serial Killers Should Fear This Algorithm,” and writes: I couldn’t help but think that you should get some grad students working on the data set mentioned in the article below. Meanwhile this story got picked up by the New Yorker, although without any reference to the earlier Businessweek article. This seems weird to me—wouldn’t the New Yorker writer have done a Google search on Thomas Hargrove (the subject of the article) and found this earlier story? In academic writing you’re always supposed to cite what came before and discuss how your work advances the field. I guess journalism is different: there it’s standard practice not to acknowledge related articles published elsewhere. Anyway, I’d forgotten…

Stranger than fiction

Someone pointed me to a long discussion, which he preferred not to share publicly, of his perspective on a scientific controversy in his field of research. He characterized a particular claim as “impossible to be true, i.e., false, and therefore, by definition, fiction.” But my impression of a lot of research misconduct is that the researchers in question believe they are acting in the service of a larger truth, and that when they misrepresent data or exaggerate conclusions, they feel they’re just anticipating the findings that they already know are correct. This is inappropriate from a scientific perspective, but it doesn’t quite feel like lying either. Again, having not read any of the details, I am not saying that any aspects of this apply to this person’s particular story; I’m just speaking in general. It would be fair to characterize…