I’ve said it before and I’ll say it again

Posted by Andrew on 10 January 2017, 4:02 pm. Ryan Giordano, Tamara Broderick, and Michael Jordan write: In Bayesian analysis, the posterior follows from the data and a choice of a prior and a likelihood. One hopes that the posterior is robust to reasonable variation in the choice of prior, since this choice is made by the modeler and is often somewhat subjective. A different, equally subjectively plausible choice of prior may result in a substantially different posterior, and so different conclusions drawn from the data. . . . To which I say: ,s/choice of prior/choice of prior and data model/g Yes, the choice of data model (from which comes the likelihood) is made by the modeler and is often somewhat subjective. In those cases where the data model is not chosen…
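For the grep-averse: the ex-style command above is just a global substitution. A minimal Python sketch of the same edit (the input sentence is paraphrased from the quoted abstract):

```python
import re

# The ",s/old/new/g" edit is a global substitution; here is the same edit
# via re.sub, applied to a sentence paraphrased from the quoted abstract.
sentence = ("One hopes that the posterior is robust to reasonable "
            "variation in the choice of prior.")
edited = re.sub(r"choice of prior", "choice of prior and data model", sentence)
print(edited)
# -> One hopes that the posterior is robust to reasonable variation
#    in the choice of prior and data model.
```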
Original Post: I’ve said it before and I’ll say it again

Nooooooo, just make it stop, please!

Dan Kahan wrote: You should do a blog on this. I replied: I don’t like this article but I don’t really see the point in blogging on it. Why bother? Kahan: BECAUSE YOU REALLY NEVER HAVE EXPLAINED WHY. The Gelman-Rubin critique of BIC is not responsive; you have something in mind—tell us what, pls! Inquiring minds want to know. Me: Wait, are you saying it’s not clear to you why I should hate that paper?? Kahan: YES!!!!!!! Certainly what they say about the “model selection” aspects of BIC in Gelman-Rubin doesn’t apply. Me: OK, OK. . . . The paper is called Bayesian Benefits for the Pragmatic Researcher, and it’s by some authors whom I like and respect, but I don’t like what they’re doing. Here’s their abstract: The practical advantages of Bayesian inference are demonstrated here through two concrete examples. In the…
Original Post: Nooooooo, just make it stop, please!

Steve Fienberg

I did not know Steve Fienberg well, but I met him several times and encountered his work on various occasions, which makes sense considering his research area was statistical modeling as applied to social science. Fienberg’s most influential work must have been his books on the analysis of categorical data, work that was ahead of its time in its focus on models rather than hypothesis tests. He also wrote, with William Mason, the definitive paper on identification in age-period-cohort models, and he worked on lots of applied problems including census adjustment, disclosure limitation, and statistics in legal settings. The common theme in all this work is the combination of information from multiple sources, and the challenges involved in using such combined inferences to make decisions in new settings. These ideas of integration and partial pooling are…
Original Post: Steve Fienberg

Designing an animal-like brain: black-box “deep learning algorithms” to solve problems, with an (approximately) Bayesian “consciousness” or “executive functioning organ” that attempts to make sense of all these inferences

The journal Behavioral and Brain Sciences will be publishing this paper, “Building Machines That Learn and Think Like People,” by Brenden Lake, Tomer Ullman, Joshua Tenenbaum, and Samuel Gershman. Here’s the abstract: Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from using deep neural networks trained end-to-end in tasks such as object recognition, video games, and board games, achieving performance that equals or even beats humans in some respects. Despite their biological inspiration and performance achievements, these systems differ from human intelligence in crucial ways. We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends in both what they learn, and how they learn it. Specifically, we argue that these machines should (a) build…
Original Post: Designing an animal-like brain: black-box “deep learning algorithms” to solve problems, with an (approximately) Bayesian “consciousness” or “executive functioning organ” that attempts to make sense of all these inferences

Bayesian statistics: What’s it all about?

Kevin Gray sent me a bunch of questions on Bayesian statistics and I responded. The interview is here at KDnuggets news. For some reason the KDnuggets editors gave it the horrible, horrible title, “Bayesian Basics, Explained.” I guess they don’t waste their data mining and analytics skills on writing blog post titles! That said, I like a lot of the things I wrote, so I’ll repeat the material (with some slight reorganization) here: What is Bayesian statistics? Bayesian statistics uses the mathematical rules of probability to combine data with prior information to yield inferences which (if the model being used is correct) are more precise than would be obtained by either source of information alone. In contrast, classical statistical methods avoid prior distributions. In classical statistics, you might include in your model a predictor (for example), or you might exclude…
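That "combine data with prior information" claim can be made concrete with the simplest conjugate case, the normal model with known variance; a minimal sketch (all numbers here are illustrative, not from the interview):

```python
# Conjugate normal update with known variance: precisions (inverse variances)
# of the prior and the data add, so the posterior is more precise than either
# source of information alone. Numbers are illustrative.
prior_mean, prior_var = 0.0, 4.0   # prior belief: N(0, 4)
data_mean, data_var = 1.0, 1.0     # data summary: ybar ~ N(theta, 1)

prior_prec = 1.0 / prior_var
data_prec = 1.0 / data_var

post_prec = prior_prec + data_prec                  # precisions add
post_var = 1.0 / post_prec
post_mean = (prior_prec * prior_mean + data_prec * data_mean) / post_prec

print(post_mean, post_var)  # -> 0.8 0.8
```

The posterior variance (0.8) is smaller than either the prior variance (4) or the data variance (1): the "more precise than would be obtained by either source of information alone" point in miniature, assuming the model is correct.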
Original Post: Bayesian statistics: What’s it all about?

Avoiding only the shadow knowing the motivating problem of a post.

Given that I am starting to make some posts to this blog (again), I was pleased to run across a YouTube video of Xiao-Li Meng being interviewed on the same topic by Suzanne Smith, the Director of the Center for Writing and Communicating Ideas. One thing I picked up was to make the problem being addressed in any communication very clear: there should be a motivating problem, and the challenges of problem recognising and problem defining should not be overlooked. The other thing was that the motivating problem should be located in the sub-field(s) of statistics that address such problems. The second is easier, as my motivating problems mostly involve ways to better grasp insight(s) from theoretical statistics in order to better apply statistics in applications – so the sub-fields are theory and application, going primarily from theory…
Original Post: Avoiding only the shadow knowing the motivating problem of a post.

Avoiding selection bias by analyzing all possible forking paths

Posted by Andrew on 12 December 2016, 9:34 am. Ivan Zupic points me to this online discussion of the article, Dwork et al. 2015, The reusable holdout: Preserving validity in adaptive data analysis. The discussants are all talking about the connection between adaptive data analysis and the garden of forking paths; for example, this from one commenter: The idea of adaptive data analysis is that you alter your plan for analyzing the data as you learn more about it. . . . adaptive data analysis is typically how many researchers actually conduct their analyses, much to the dismay of statisticians. As such, if one could do this in a statistically valid manner, it would revolutionize statistical practice. Just about every data analysis I’ve ever done is adaptive, and I do think most…
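For reference, the mechanism Dwork et al. propose (Thresholdout) fits in a few lines; this is a sketch, and the parameter values below are arbitrary placeholders, not the paper's recommendations:

```python
import random

def thresholdout(train_val, holdout_val, threshold=0.04, sigma=0.01, rng=random):
    # Sketch of the Thresholdout mechanism of Dwork et al. 2015: the analyst
    # queries a statistic; if the training-set and holdout-set estimates agree
    # to within a noisy threshold, only the training estimate is released, so
    # the holdout leaks no information. Otherwise a noised holdout value is
    # returned. Threshold and noise scale here are placeholders.
    if abs(train_val - holdout_val) > threshold + rng.gauss(0, sigma):
        return holdout_val + rng.gauss(0, sigma)  # answer from the holdout, noised
    return train_val                              # estimates agree: release train value

print(thresholdout(0.5, 0.52, sigma=0.0))  # -> 0.5 (estimates agree)
```

The point of the noise is that the holdout can then be reused across many adaptively chosen queries without the usual selection-bias blowup.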
Original Post: Avoiding selection bias by analyzing all possible forking paths

“The Fundamental Incompatibility of Scalable Hamiltonian Monte Carlo and Naive Data Subsampling”

Posted by Andrew on 9 December 2016, 7:37 pm. Here’s Michael Betancourt writing in 2015: Leveraging the coherent exploration of Hamiltonian flow, Hamiltonian Monte Carlo produces computationally efficient Monte Carlo estimators, even with respect to complex and high-dimensional target distributions. When confronted with data-intensive applications, however, the algorithm may be too expensive to implement, leaving us to consider the utility of approximations such as data subsampling. In this paper I demonstrate how data subsampling fundamentally compromises the scalability of Hamiltonian Monte Carlo. But then here’s Jost Springenberg, Aaron Klein, Stefan Falkner, and Frank Hutter in 2016: Despite its successes, the prototypical Bayesian optimization approach – using Gaussian process models – does not scale well to either many hyperparameters or many function evaluations. Attacking this lack of scalability and flexibility…
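A toy illustration of the tension (mine, not Betancourt's): the log-posterior gradient that drives the Hamiltonian flow is a sum over data points, and a subsampled gradient is only a noisy estimate of that sum, so each leapfrog step using it wanders off the exact flow.

```python
import random

# For a normal model with known variance, the gradient of
# sum_i log N(y_i | theta, 1) with respect to theta is sum_i (y_i - theta).
# A minibatch version rescales a small random batch; it is unbiased but noisy,
# which is exactly the noise that breaks the coherent Hamiltonian exploration.
random.seed(1)
data = [random.gauss(2.0, 1.0) for _ in range(1000)]

def full_grad(theta):
    return sum(y - theta for y in data)

def subsampled_grad(theta, m=10):
    batch = random.sample(data, m)
    return (len(data) / m) * sum(y - theta for y in batch)

print(full_grad(0.0))        # exact: the same every call
print(subsampled_grad(0.0))  # noisy: varies batch to batch
```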
Original Post: “The Fundamental Incompatibility of Scalable Hamiltonian Monte Carlo and Naive Data Subsampling”

Using Stan in an agent-based model: Simulation suggests that a market could be useful for building public consensus on climate change

Jonathan Gilligan writes: I’m writing to let you know about a preprint that uses Stan in what I think is a novel manner: Two graduate students and I developed an agent-based simulation of a prediction market for climate, in which traders buy and sell securities that are essentially bets on what the global average temperature will be at some future time. We use Stan as part of the model: at every time step, simulated traders acquire new information and use this information to update their statistical models of climate processes and generate predictions about the future. J.J. Nay, M. Van der Linden, and J.M. Gilligan, Betting and Belief: Prediction Markets and Attribution of Climate Change, (code here). ABSTRACT: Despite much scientific evidence, a large fraction of the American public doubts that greenhouse gases are causing global warming. We present a…
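A heavily simplified sketch of one simulated trader's update step (my own toy version; the paper's traders fit richer time-series models in Stan): the trader holds a normal belief about the warming trend, updates it on each new observation, and prices a security that pays 1 if the trend is positive.

```python
import math

# Hypothetical single-trader step: conjugate normal belief update plus a
# binary-security price. Observation noise and the trend data are made up.
def update(mean, var, obs, obs_var=0.1):
    prec, obs_prec = 1.0 / var, 1.0 / obs_var
    post_var = 1.0 / (prec + obs_prec)
    post_mean = post_var * (prec * mean + obs_prec * obs)
    return post_mean, post_var

def price_positive_trend(mean, var):
    # P(trend > 0) under the trader's current normal belief
    return 0.5 * (1.0 + math.erf(mean / math.sqrt(2.0 * var)))

mean, var = 0.0, 1.0
for obs in [0.02, 0.03, 0.01]:  # hypothetical decadal trend estimates
    mean, var = update(mean, var, obs)
print(round(price_positive_trend(mean, var), 3))
```

In the actual model the market aggregates many such traders' beliefs into a price, which is what makes the prediction market a consensus-building device.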
Original Post: Using Stan in an agent-based model: Simulation suggests that a market could be useful for building public consensus on climate change

Interesting epi paper using Stan

Jon Zelner writes: Just thought I’d send along this paper by Justin Lessler et al. Thought it was both clever & useful and a nice ad for using Stan for epidemiological work. Basically, what this paper is about is estimating the true prevalence and case fatality ratio of MERS-CoV [Middle East Respiratory Syndrome Coronavirus Infection] using data collected via a…
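To give a flavor of the estimation problem (this back-of-the-envelope version and all of its numbers are mine, not Lessler et al.'s): severe cases are more likely to be detected than mild ones, so the naive case fatality ratio among reported cases overstates the true one.

```python
# Hypothetical surveillance counts and an assumed detection probability for
# mild infections; none of these numbers come from the paper.
deaths, reported = 40, 100
p_detect_mild = 0.25  # assumption: only 25% of mild infections get reported

naive_cfr = deaths / reported                 # CFR among reported cases
mild_reported = reported - deaths
infections = deaths + mild_reported / p_detect_mild  # back out undetected mild cases
adjusted_cfr = deaths / infections            # CFR among all infections

print(round(naive_cfr, 3), round(adjusted_cfr, 3))  # -> 0.4 0.143
```

The paper does this properly, with a full probability model for ascertainment fit in Stan, rather than plugging in a single assumed detection probability.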
Original Post: Interesting epi paper using Stan