A small, underpowered treasure trove?

Posted by Andrew on 12 January 2017, 9:48 am. Benjamin Kirkup writes: As you sometimes comment on such things, I’m forwarding you a journal editorial (in a society journal) that presents “lessons learned” from an associated research study. What caught my attention was the comment on the “notorious” design, the lack of “significant” results, and the “interesting data on nonsignificant associations.” Apparently, the work “does not serve to inform the regulatory decision-making process with respect to antimicrobial compounds” but is “still valuable and can be informative.” Given that a lessons-learned editorial was commissioned, how do you think the scientific publishing community should handle manuscripts presenting work with problematic designs and naturally uninformative outcomes? The editorial in question, “Lessons Learned from Probing for Impacts of Triclosan and Triclocarban on Human Microbiomes,” is by Rolf Halden, and…
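
To make concrete what “underpowered” means here, a back-of-the-envelope power calculation helps; the effect size and group sizes below are illustrative assumptions, not numbers from the Halden editorial or the underlying microbiome study.

```python
# Rough power sketch: how likely is a two-sample t-test to reach p < 0.05
# for a modest true effect at various (hypothetical) sample sizes?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_group in (15, 30, 100):
    power = analysis.power(effect_size=0.3, nobs1=n_per_group, alpha=0.05, ratio=1.0)
    print(f"n = {n_per_group:>3} per group, d = 0.3: power = {power:.2f}")
# With small groups and a modest true effect, a nonsignificant result is the
# expected outcome whether or not the effect is real -- which is one way a study
# can be "still valuable" (e.g., for later meta-analysis) yet uninformative for
# regulatory decisions.
```
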
Original Post: A small, underpowered treasure trove?

When do stories work, Process tracing, and Connections between qualitative and quantitative research

Posted by Andrew on 11 January 2017, 9:45 am. Jonathan Stray writes: I read your “when do stories work” paper (with Thomas Basbøll) with interest; as a journalist, stories are of course central to my field. I wondered if you had encountered the “process tracing” literature in political science? It attempts to make sense of stories as “case studies,” and there’s a nice logic of selection and falsification that has grown up around it. This article by David Collier is a good overview of process tracing, with a neat typology of story-based theory tests. Section 6 of this paper by James Mahoney and Gary Goertz (a good paper generally) discusses why you want non-random case/story selection in certain types of qualitative research. This paper by Jack Levy…
Original Post: When do stories work, Process tracing, and Connections between qualitative and quantitative research

We fiddle while Rome burns: p-value edition

Raghu Parthasarathy presents a wonderfully clear example of disastrous p-value-based reasoning that he saw in a conference presentation. Here’s Raghu: Consider, for example, some tumorous cells that we can treat with drugs 1 and 2, either alone or in combination. We can make measurements of growth under our various drug treatment conditions. Suppose our measurements give us the following graph: . . . from which we tell the following story: When administered on their own, drugs 1 and 2 are ineffective: tumor growth isn’t statistically different from that of the control cells (p > 0.05, 2-sample t-test). However, when the drugs are administered together, they clearly affect the cancer (p < 0.05); in fact, the p-value is very small (0.002!). This indicates a clear synergy between the two drugs: together they have a much stronger effect than each alone does…
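
To see how easily this “synergy” story can be manufactured by thresholding, here is a small simulation sketch (my own, not from Raghu’s talk): two drugs with the same modest true effect and a strictly additive combination, with sample and effect sizes chosen only for illustration.

```python
# Simulate Raghu's scenario with NO interaction in the data-generating process,
# then count how often per-condition t-tests tell the false "synergy" story.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2017)
n, effect, sigma, sims = 10, 1.0, 1.5, 5000   # assumed values, for illustration

false_synergy = 0
for _ in range(sims):
    control = rng.normal(10.0, sigma, n)
    drug1 = rng.normal(10.0 - effect, sigma, n)
    drug2 = rng.normal(10.0 - effect, sigma, n)
    both = rng.normal(10.0 - 2 * effect, sigma, n)   # purely additive combination
    p1 = stats.ttest_ind(control, drug1).pvalue
    p2 = stats.ttest_ind(control, drug2).pvalue
    p12 = stats.ttest_ind(control, both).pvalue
    # The fallacious story: each drug "ineffective" (p > 0.05), combination "synergistic" (p < 0.05).
    false_synergy += (p1 > 0.05) and (p2 > 0.05) and (p12 < 0.05)

print(f"fraction of simulations telling the false synergy story: {false_synergy / sims:.2f}")
```

The point of the exercise is that “significant vs. not significant” comparisons across conditions are not themselves a test of any interaction.
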
Original Post: We fiddle while Rome burns: p-value edition

“Which curve fitting model should I use?”

Oswaldo Melo writes: I have learned many curve-fitting models in the past, including their technical and mathematical details. Now I am working on real-world problems and I face a great shortcoming: which method to use. As an example, I have to predict the demand for a product. I have a time series collected over the last 8 years: a simple set of (x, y) data giving the demand for a product in a given week. I have this for 9 products. And to continue the study, I must predict the demand for each product over the coming years. Looks easy enough, right? Since I do not have the probability distribution of the data, just use a non-parametric curve-fitting algorithm. But which one? Kernel smoothing? B-splines? Wavelets? Symbolic regression? What about Fourier analysis? Neural networks?…
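
One pragmatic way to approach the question, sketched below, is to fit each candidate smoother to the first seven years and score it on a held-out eighth year. Everything here is an assumption for illustration (a synthetic weekly demand series, an arbitrary spline smoothing parameter, a hand-picked kernel bandwidth); it is not Andrew’s answer, just one way to compare curve-fitting methods by out-of-sample error.

```python
# Compare two of the candidate smoothers on a held-out final year of a
# synthetic weekly demand series (trend + seasonality + noise).
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
weeks = np.arange(8 * 52)
demand = 100 + 0.1 * weeks + 15 * np.sin(2 * np.pi * weeks / 52) + rng.normal(0, 5, weeks.size)

train, test = weeks < 7 * 52, weeks >= 7 * 52   # hold out the final year

def nadaraya_watson(x_train, y_train, x_new, bandwidth=6.0):
    """Gaussian-kernel smoother (Nadaraya-Watson) evaluated at x_new."""
    w = np.exp(-0.5 * ((x_new[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

spline = UnivariateSpline(weeks[train], demand[train], s=train.sum() * 25)

for name, pred in [
    ("cubic smoothing spline", spline(weeks[test])),
    ("kernel smoother", nadaraya_watson(weeks[train], demand[train], weeks[test])),
]:
    rmse = np.sqrt(np.mean((demand[test] - pred) ** 2))
    print(f"{name:>22}: held-out RMSE = {rmse:.1f}")
# Neither generic smoother extrapolates the trend-plus-seasonality pattern well,
# which is the larger lesson: the choice of basis usually matters less than
# encoding the structure (trend, seasonality) you already know is there.
```
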
Original Post: “Which curve fitting model should I use?”

When you add a predictor the model changes so it makes sense that the coefficients change too.

Posted by Andrew on 4 January 2017, 9:54 am. Shane Littrell writes: I’ve recently graduated with my Master of Science in Research Psychology, but I’m currently trying to get better at my stats knowledge (in psychology, we tend to learn a dumbed-down, “Stats for Dummies” version of things). I’ve been reading about “suppressor effects” in regression recently, and it got me wondering about some curious results from my thesis data. I ran a multiple regression analysis on several predictors of academic procrastination and noticed that two of my predictors showed some odd behavior (to me). One of them (“entitlement”) was very nonsignificant (β = -.05, p = .339) until I added “boredom” as a predictor, and it changed to (β = –…
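
A small simulation shows why this behavior is expected rather than odd: the two regressions estimate different things. The data below are simulated, not Shane’s thesis; the variable names echo his example, but the coefficients and the correlation between predictors are assumptions.

```python
# Simulated "suppressor"-style example: the coefficient on entitlement changes
# once the correlated predictor boredom enters the model, because the estimand changes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 300

boredom = rng.normal(size=n)
entitlement = 0.6 * boredom + rng.normal(scale=0.8, size=n)           # correlated predictors
procrastination = 0.7 * boredom - 0.3 * entitlement + rng.normal(size=n)

# Model 1: entitlement alone -- its coefficient absorbs part of boredom's effect.
m1 = sm.OLS(procrastination, sm.add_constant(entitlement)).fit()

# Model 2: entitlement and boredom together -- a different model, so a different
# (here more clearly negative) coefficient is no surprise.
X2 = sm.add_constant(np.column_stack([entitlement, boredom]))
m2 = sm.OLS(procrastination, X2).fit()

print("entitlement alone:        b =", round(m1.params[1], 2), " p =", round(m1.pvalues[1], 3))
print("entitlement with boredom: b =", round(m2.params[1], 2), " p =", round(m2.pvalues[1], 3))
```
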
Original Post: When you add a predictor the model changes so it makes sense that the coefficients change too.

Field Experiments and Their Critics

Seven years ago I was contacted by Dawn Teele, who was then a graduate student and is now a professor of political science, and asked for my comments on an edited book she was preparing on social science experiments and their critics. I responded as follows: This is a great idea for a project. My impression was that Angus Deaton is in favor of observational rather than experimental analysis; is this not so? If you want someone technical, you could ask Ed Vytlacil; he’s at Yale, isn’t he? I think the strongest arguments in favor of observational rather than experimental data are: (a) Realism in causal inference. Experiments (even natural experiments) are necessarily artificial, and there are problems in generalizing beyond them to the real world. This is a point that James Heckman has made. (b) Realism in research practice. Experimental data…
Original Post: Field Experiments and Their Critics

Fragility index is too fragile

Posted by Andrew on 3 January 2017, 9:53 am. Simon Gates writes: Here is an issue that has had a lot of publicity and Twittering in the clinical trials world recently. Many people are promoting the use of the “fragility index” (paper attached) to help interpretation of “significant” results from clinical trials. The idea is that it gives a measure of how robust the results are: how many patients would have to have had a different outcome to render the result “non-significant”. Lots of well-known people seem to be recommending this at the moment; there’s a website too (http://fragilityindex.com/, which calculates p-values to 15 decimal places!). I’m less enthusiastic. It’s good that problems of “statistical significance” are being more widely appreciated, but the fragility index is still all about “significance”, and we really need…
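
For readers who have not seen the index, here is a minimal sketch of the calculation as Simon describes it: starting from a “significant” 2×2 trial result, flip outcomes one patient at a time (in the arm with the lower event rate) until Fisher’s exact p-value crosses 0.05. The trial counts below are hypothetical.

```python
# Minimal fragility-index sketch for a two-arm trial with binary outcomes.
from scipy.stats import fisher_exact

def fragility_index(events_a, n_a, events_b, n_b, alpha=0.05):
    """Number of outcome flips needed to push Fisher's exact p-value above alpha."""
    e_a, e_b, flips = events_a, events_b, 0
    while fisher_exact([[e_a, n_a - e_a], [e_b, n_b - e_b]])[1] < alpha:
        # Flip one non-event to an event in the arm with the lower event rate.
        if e_a / n_a < e_b / n_b:
            e_a += 1
        else:
            e_b += 1
        flips += 1
    return flips

# Hypothetical trial: 10/100 events on treatment vs. 25/100 on control.
# The result is comfortably "significant," yet only a handful of flipped
# outcomes away from "non-significance."
print(fragility_index(10, 100, 25, 100))
```

As the excerpt notes, the calculation is still organized entirely around the 0.05 threshold, which is exactly Simon’s objection.
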
Original Post: Fragility index is too fragile

Two unrelated topics in one post: (1) Teaching useful algebra classes, and (2) doing more careful psychological measurements

Kevin Lewis and Paul Alper send me so much material that I think they need their own blogs. In the meantime, I keep posting the stuff they send me, as part of my desperate effort to empty my inbox. 1. From Lewis: “Should Students Assessed as Needing Remedial Mathematics Take College-Level Quantitative Courses Instead? A Randomized Controlled Trial,” by A. W. Logue, Mari Watanabe-Rose, and Daniel Douglas, which begins: Many college students never take, or do not pass, required remedial mathematics courses theorized to increase college-level performance. Some colleges and states are therefore instituting policies allowing students to take college-level courses without first taking remedial courses. However, no experiments have compared the effectiveness of these approaches, and other data are mixed. We randomly assigned 907 students to (a) remedial elementary algebra, (b) that course with workshops, or (c) college-level statistics with…
Original Post: Two unrelated topics in one post: (1) Teaching useful algebra classes, and (2) doing more careful psychological measurements

“The Pitfall of Experimenting on the Web: How Unattended Selective Attrition Leads to Surprising (Yet False) Research Conclusions”

Posted by Andrew on 29 December 2016, 9:55 am. Kevin Lewis points us to this paper by Haotian Zhou and Ayelet Fishbach, which begins: The authors find that experimental studies using online samples (e.g., MTurk) often violate the assumption of random assignment, because participant attrition (quitting a study before completing it and getting paid) is not only prevalent, but also varies systematically across experimental conditions. Using standard social psychology paradigms (e.g., ego-depletion, construal level), they observed attrition rates ranging from 30% to 50% (Study 1). The authors show that failing to attend to attrition rates in online panels has grave consequences. By introducing experimental confounds, unattended attrition misled them to draw mind-boggling yet false conclusions: that recalling a few happy events is considerably more effortful…
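
A toy simulation illustrates the mechanism (this is my own sketch, not Zhou and Fishbach’s data; the attrition rates and the latent-trait model are assumptions): a treatment with zero true effect looks effective among completers when quitting depends on a trait that also drives the outcome.

```python
# Selective attrition with a null treatment effect: completers in the demanding
# condition look better only because low-persistence participants quit it.
import numpy as np

rng = np.random.default_rng(7)
n = 2000

persistence = rng.normal(size=n)                         # latent trait
outcome = persistence + rng.normal(scale=0.5, size=n)    # true treatment effect is zero
condition = rng.integers(0, 2, size=n)                   # 0 = easy, 1 = demanding

# Assumed attrition process: low-persistence participants are much more likely
# to quit the demanding condition; quitting in the easy condition is rarer.
p_quit = np.where(condition == 1, np.where(persistence < 0, 0.6, 0.1), 0.1)
completed = rng.random(n) > p_quit

for c, label in [(0, "easy"), (1, "demanding")]:
    assigned = condition == c
    kept = completed & assigned
    print(f"{label:>9}: attrition {1 - kept.sum() / assigned.sum():.0%}, "
          f"mean outcome among completers {outcome[kept].mean():+.2f}")
# Random assignment held at sign-up, but not in the analyzed (completer) sample,
# so the between-condition comparison is confounded.
```
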
Original Post: “The Pitfall of Experimenting on the Web: How Unattended Selective Attrition Leads to Surprising (Yet False) Research Conclusions”

Ethics and statistics

elin says: Link? I was thinking about how I find the attention to measurement in statistics education pretty impressive compared to some other fields. My social science department uses the LOCUS as a before-and-after assessment in our quantitative analysis course and it’s been really helpful, and I think the ARTIST items are also quite good overall. I’m always surprised to hear people aren’t taking advantage of them. Just digging through patterns of error in the pretest has helped us rethink some things. I think that, along with physics, statistics has some of the best research on how to teach and how students learn, perhaps because of the math education people. Causeweb and JSE are both quite good as well. I think you have a nice post that aligns with what the research on teaching and learning statistics supports:…
Original Post: Ethics and statistics