Giving feedback indirectly by invoking a hypothetical reviewer

Ethan Bolker points us to this discussion on “How can I avoid being ‘the negative one’ when giving feedback on statistics?”, which begins: Results get sent around a group of biological collaborators for feedback. Comments come back from the senior members of the group about the implications of the results, possible extensions, etc. I look at the results and I tend not to be as good at the “big picture” stuff (I’m a relatively junior member of the team), but I’m reasonably good with statistics (and that’s my main role), so I look at the details. Sometimes I think to myself “I don’t think those conclusions are remotely justified by the data”. How can I give honest feedback in a way that doesn’t come…

“Explaining recent mortality trends among younger and middle-aged White Americans”

Kevin Lewis sends along this paper by Ryan Masters, Andrea Tilstra, and Daniel Simon, who write: Recent research has suggested that increases in mortality among middle-aged US Whites are being driven by suicides and poisonings from alcohol and drug use. Increases in these ‘despair’ deaths have been argued to reflect a cohort-based epidemic of pain and distress among middle-aged US Whites. We examine trends in all-cause and cause-specific mortality rates among younger and middle-aged US White men and women between 1980 and 2014, using official US mortality data. . . . Trends in middle-aged US White mortality vary considerably by cause and gender. The relative contribution to overall mortality rates from drug-related deaths has increased dramatically since the early 1990s, but the…

Letter to the Editor of Perspectives on Psychological Science

[relevant cat picture] tl;dr: Himmicane in a teacup. Back in the day, the New Yorker magazine did not have a Letters to the Editors column, and so the great Spy magazine (the Gawker of its time) ran its own feature, Letters to the Editor of the New Yorker, where they posted the letters you otherwise would never see. Here on this blog we can start a new feature, Letters to the Editor of Perspectives on Psychological Science, which will feature corrections that this journal refuses to print. Here’s our first entry: “In the article, ‘Going in Many Right Directions, All at Once,’ published in this journal, the author wrote, ‘some critics go beyond scientific argument and counterargument to imply that the entire field is inept and misguided (e.g., Gelman, 2014; Shimmack [sic], 2014).’ However, this article provided no evidence that…

It’s not “lying” exactly . . . What do you call it when someone deliberately refuses to correct an untruth?

New York Times columnist Bret Stephens tells the story. First the background: On Thursday I interviewed Central Intelligence Agency Director Mike Pompeo on a public stage . . . There was one sour moment. Midway through the interview, Pompeo abruptly slammed The New York Times for publishing the name last month of a senior covert C.I.A. officer, calling the disclosure “unconscionable.” The line was met with audience applause. I said, “You’re talking about Phil Agee,” and then repeated the name. . . . My startled rejoinder was not a reference to the covert C.I.A. officer unmasked by The Times, but rather a fumbled attempt to refer to the law governing such disclosures. Philip Agee, as Pompeo and everyone in the audience knew, was the infamous C.I.A. officer who went rogue in the 1970s, wrote a tell-all memoir, and publicly identified…

Iceland education gene trend kangaroo

Someone who works in genetics writes: You may have seen the recent study in PNAS about genetic prediction of educational attainment in Iceland. The authors report in a very concerned fashion that educational attainment as predicted from genetics decreases by 0.1 standard deviations every generation. This sounds bad. But consider that the University of Iceland was founded in 1911, right at the beginning of the period 1910-1990 studied by the authors (!). So there is a many-thousand-percent increase in actual educational attainment at the same time as there is an ominous 0.005 SD/year decrease in ‘genetic’ educational attainment. Over this period educational attainment in the developed world seems to have exploded beyond a reasonable doubt, as the paper’s appendix shows for Iceland as well. This genetic effect seems like a kangaroo feather to me. My reply: I’m…
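A quick back-of-the-envelope check (my arithmetic, not anything from the paper) shows how the two quoted figures relate: taken together they imply a generation length of about 20 years, and the per-year rate, applied over the 1910-1990 study window, cumulates to roughly 0.4 SD. A minimal sketch:

```python
# Back-of-the-envelope reconciliation of the two figures quoted above.
# The only inputs are those quoted numbers plus the 1910-1990 study window.
sd_per_generation = 0.1   # quoted decline in genetically predicted attainment, per generation
sd_per_year = 0.005       # quoted decline per year

implied_generation_length = sd_per_generation / sd_per_year
print(implied_generation_length)   # 20.0 -- years per generation implied by the two figures

study_years = 1990 - 1910
print(study_years * sd_per_year)   # 0.4 -- cumulative 'genetic' decline, in SD, over the window
```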

How to think scientifically about scientists’ proposals for fixing science

I wrote this article for a sociology journal: Science is in crisis. Any doubt about this status has surely been dispelled by the loud assurances to the contrary by various authority figures who are deeply invested in the current system and have written things such as, “Psychology is not in crisis, contrary to popular rumor . . . Crisis or no crisis, the field develops consensus about the most valuable insights . . . National panels will convene and caution scientists, reviewers, and editors to uphold standards.” (Fiske, Schacter, and Taylor, 2016). When leaders go to that much trouble to insist there is no problem, it’s only natural for outsiders to worry . . . When I say that the replication crisis is also an opportunity, this is more than a fortune-cookie cliche; it is also a recognition that…

My review of Duncan Watts’s book, “Everything is Obvious (once you know the answer)”

We had some recent discussion of this book in the comments and so I thought I’d point you to my review from a few years ago. Lots to chew on in the book, and in the review.

“P-hacking” and the intention-to-cheat effect

I’m a big fan of the work of Uri Simonsohn and his collaborators, but I don’t like the term “p-hacking” because it can be taken to imply an intention to cheat. The image of p-hacking is of a researcher trying test after test on the data until reaching the magic “p less than .05.” But, as Eric Loken and I discuss in our paper on the garden of forking paths, multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time. I worry that the widespread use of the term “p-hacking” gives two wrong impressions: First, it implies that the many researchers who use p-values incorrectly are cheating or “hacking,” even though I suspect they’re mostly…
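To make the forking-paths point concrete, here is a minimal simulation sketch (mine, not from the paper; the subgroup story and variable names are invented for illustration). The data are pure noise and only one test is ever run on each dataset, but the choice of which comparison to test is made after looking at the data, and that alone pushes the false-positive rate above the nominal 5%.

```python
# Forking paths without "fishing": one test per dataset, but the choice of
# test is data-dependent, which inflates the false-positive rate under the null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 5000, 100
false_positives = 0

for _ in range(n_sims):
    treat = rng.integers(0, 2, n)                  # 0 = control, 1 = treatment
    female = rng.integers(0, 2, n).astype(bool)    # hypothetical subgroup variable
    y = rng.normal(size=n)                         # outcome unrelated to anything

    # The "fork": eyeball whether the overall difference or the within-subgroup
    # difference looks larger, then run only that one t-test.
    diff_all = y[treat == 1].mean() - y[treat == 0].mean()
    diff_sub = y[female & (treat == 1)].mean() - y[female & (treat == 0)].mean()

    if abs(diff_sub) > abs(diff_all):
        a, b = y[female & (treat == 1)], y[female & (treat == 0)]
    else:
        a, b = y[treat == 1], y[treat == 0]

    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(f"false-positive rate: {false_positives / n_sims:.3f}  (nominal level 0.05)")
```

No test is run “until significance” here; the inflation comes entirely from the data-dependent choice of which single analysis to perform.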

“Everybody Lies” by Seth Stephens-Davidowitz

Seth Stephens-Davidowitz sent me his new book on learning from data. As is just about always the case for this sort of book, I’m a natural reviewer but I’m not really the intended audience. That’s why I gave Dan Ariely’s book to Juli Simon Thomas to review; I thought her perspective would be more relevant than mine for the potential reader. I took the new book by Stephens-Davidowitz and passed it along to someone else, a demanding reader who I thought might like it, and he did: he kept coming to me with new thought-provoking bits that he’d found in it. So that’s a pretty solid endorsement. I couldn’t convince him to write a review so you’ll have to take my word that he liked it. The thing I found most appealing about the book was that, in addition to…

Honesty and transparency are not enough

[cat picture] From a recent article, “Honesty and transparency are not enough”: This point . . . is important for two reasons. First, consider the practical consequences for a researcher who eagerly accepts the message of the ethical and practical value of sharing and openness, but does not learn about the importance of data quality. He or she could then just be driving very carefully and very efficiently into a brick wall, conducting transparent experiment after transparent experiment and continuing to produce and publish noise. The openness of the work may make it easier for later researchers to attempt—and fail—to replicate the resulting published claims, but little if any useful empirical science will be done by anyone concerned. I don’t think we’re doing anybody any favors by having them work more openly using data that are inadequate to the task. The…
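As a rough illustration of the brick wall, here is a small simulation sketch (my own stylized numbers, not anything from the article): a perfectly honest, fully transparent research program that runs one preregistered comparison per experiment, but on a small true effect measured with a noisy outcome. Nothing questionable is done, yet statistically significant results are rare, and the ones that do appear overstate the true effect several times over.

```python
# Honest, transparent, one-comparison-per-experiment research on data that are
# inadequate to the task: low power, and significant estimates that exaggerate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect, sd, n = 0.1, 1.0, 50     # hypothetical small effect, noisy outcome, modest sample
n_experiments = 5000

estimates, pvals = [], []
for _ in range(n_experiments):
    control = rng.normal(0.0, sd, n)
    treated = rng.normal(true_effect, sd, n)
    estimates.append(treated.mean() - control.mean())
    pvals.append(stats.ttest_ind(treated, control).pvalue)

estimates, pvals = np.array(estimates), np.array(pvals)
sig = pvals < 0.05
print(f"share of experiments reaching p < 0.05: {sig.mean():.2f}")
print(f"average |estimate| among significant results: {np.abs(estimates[sig]).mean():.2f}"
      f"  vs. true effect {true_effect}")
```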