Inferential Models Book

R. Martin and C. Liu (2015). Inferential Models: Reasoning with Uncertainty. Monographs in Statistics and Applied Probability Series, Chapman & Hall/CRC Press.

Problems in data science involve data, models, prior information, and unknowns that are believed to be relevant to the phenomenon under investigation. The data scientist's goal is to convert this input into a meaningful quantification of uncertainty about those unknowns. An inferential model, or IM, is a mapping from these inputs to a quantification of uncertainty about the unknowns. The book is mainly about how to construct this mapping so that the output is guaranteed to be valid in a certain sense. Despite the efforts of Bayes, Fisher, and others to quantify uncertainty using ordinary probability, our notion of validity is incompatible with additive probability. Therefore, the IM's output must be an imprecise probability and, in particular, random sets play an important role in our developments.

Purpose of this site

To supplement the book, I'm posting some relevant materials here, including things done after the book was completed. Beyond just a supplement, there are still many questions to be answered. In particular, while I am mostly satisfied with the answers we gave in the book to the questions "what is validity?" and "how do we construct a valid IM?", I am unsatisfied with the answers we gave to questions like "why validity?" When these ideas were being developed, it was clear to me why the validity property is important and worth giving up the comfort of additive probability for. In discussing these ideas with others, however, I found that it was not so clear to them. So, in recent years, I have mostly focused on this question of "why?", and here I want to collect those developments to help complete the IM story.

This site is intended to be a work in progress. I'll post new developments here as they become available and make updates or revisions when I think there are better ways to do or explain things. When possible, I'll also mention relevant unanswered questions or open problems.


Recent developments

False Confidence Theorem. It is known that Fisher's fiducial distributions only assign meaningful probabilities to certain assertions about the unknowns. The same is true for other data-dependent distributions, including generalized fiducial distributions, confidence distributions, and default-prior Bayesian posterior distributions. However, I'm not aware of any explanation of what's causing the problem. The recently established false confidence theorem states that, for any data-dependent probability distribution, there always exist assertions that are false (i.e., do not contain the true value of the unknown) and yet tend to be assigned high probability. Since assigning high probability to a false assertion can lead to erroneous inference, this false confidence phenomenon is problematic. Interestingly, the problem is not the result of a poor choice of prior or the like; the culprit is additivity itself. Therefore, to avoid false confidence, uncertainty must be quantified using imprecise probability, e.g., through a valid IM.
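As a toy illustration of the phenomenon (deliberately extreme, and not the theorem's general construction), consider a single observation X ~ N(θ, 1) with fiducial/confidence distribution N(x, 1), and the false assertion consisting of everything outside a small neighborhood of the true θ. The names and numbers below are my own choices:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

theta0 = 0.0          # true value of the unknown mean
eps = 0.05            # half-width of a small neighborhood of theta0
n_rep = 10_000        # repeated-sampling experiments

# Data: one X ~ N(theta0, 1); fiducial/confidence distribution: N(x, 1)
x = rng.normal(theta0, 1.0, size=n_rep)

# False assertion A = {theta : |theta - theta0| > eps}; it excludes theta0
# Fiducial probability assigned to A by N(x, 1):
prob_A = 1.0 - (norm.cdf(theta0 + eps - x) - norm.cdf(theta0 - eps - x))

# Fraction of samples in which the false assertion gets probability > 0.9:
print(np.mean(prob_A > 0.9))   # 1.0 -- the false assertion is always "confident"
```

This A is a trivially easy case, of course; the content of the theorem is that such troublesome assertions exist quite generally, not just for contrived neighborhoods.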

Alternative notions of validity. In the context of prediction, since there are no (non-trivial) assertions about a "next observation" that are necessarily true or false, a modified definition of validity is required. In Cella and M. below, we develop a new notion of validity that is both intuitive and appropriate for the prediction setting. Moreover, it can be applied to inference problems as well, generalizing the original notion of validity, showing when (generalized) Bayes can be valid, and making connections to other concepts, such as coherence in the sense of de Finetti and Walley. I think this is a promising new direction to pursue.
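One familiar device with this flavor is a conformal-style plausibility for the next observation, whose values are sub-uniform under exchangeability. The sketch below is my own toy construction, not the specific proposal in Cella and M.; it checks by simulation the predictive validity property that the plausibility assigned to the actual next observation falls below α at most 100α% of the time:

```python
import numpy as np

rng = np.random.default_rng(1)

def plausibility(x, y):
    """Conformal-style plausibility of candidate next value y, using
    distance to the median of the augmented data as the score."""
    z = np.append(x, y)
    scores = np.abs(z - np.median(z))
    # proportion of the n+1 scores at least as extreme as the candidate's
    return np.mean(scores >= scores[-1])

# Monte Carlo check of P(plausibility(Y_next) <= alpha) <= alpha
n, n_rep, alpha = 20, 2000, 0.1
hits = 0
for _ in range(n_rep):
    x = rng.standard_normal(n)
    y_next = rng.standard_normal()
    if plausibility(x, y_next) <= alpha:
        hits += 1
print(hits / n_rep)   # at most about alpha = 0.1
```

The guarantee here comes purely from exchangeability of the augmented sample, which is the kind of assumption-light validity that makes the prediction setting attractive.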

Connections to imprecise probability. When the IM developments were being made, with the exception of Dempster's and Shafer's work, I was mostly unaware of what had been done in the imprecise probability community. I've recently made this connection and I think there are some promising developments on the horizon. For example, the properties of so-called possibility measures seem especially suited to statistical inference tasks, and the paper below highlights the connection between IMs and possibility measures. More developments in this direction are expected.
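To make the connection concrete: the default random-set IM for a normal mean yields a plausibility contour, and the corresponding upper probability is maxitive, i.e., a possibility measure, with the plausibility of an assertion equal to the supremum of the contour over it. A minimal numerical sketch (the grid and numbers are my own illustrative choices):

```python
import numpy as np
from scipy.stats import norm

# Plausibility contour for a normal mean theta given one observation x,
# from the default random-set IM for X ~ N(theta, 1):
#   pl_x(theta) = 2 * (1 - Phi(|x - theta|))
def contour(theta, x):
    return 2.0 * norm.sf(np.abs(x - theta))

x = 1.3
grid = np.linspace(-5, 5, 20001)
pl = contour(grid, x)

def plausibility(A):             # A: boolean mask over the grid
    return pl[A].max()           # sup of the contour over the assertion

# Maxitivity: Pl(A U B) = max(Pl(A), Pl(B)), unlike an additive probability
A = grid < 0
B = grid > 3
print(plausibility(A | B), max(plausibility(A), plausibility(B)))
```

Maxitivity is what lets the whole uncertainty quantification be carried by a single contour function, which is part of why possibility measures look so well-suited to inference.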

Handling partial prior information. An apparent advantage of the Bayesian formulation is that it can incorporate available prior information into the analysis in a relatively straightforward way. To take advantage of this, one needs a precise prior probability distribution for every unknown, but that's just not possible in every application. Simply plugging in a default prior in place of a real one is not the answer, since an "incorrectly specified" prior can lead to invalid inference. What's needed is a framework that can incorporate genuine prior information about the quantities for which it's available and leave the priors unspecified everywhere else. Imprecise probability theory is specifically designed for such cases, so I expect this to be an advantage for IMs. Some relevant references are below.
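A generic robust-Bayes sketch of the idea (not the IM construction itself): when prior information only pins down a set of priors rather than a single one, one can report lower and upper posterior probabilities over that set. All the ranges and numbers below are assumptions for illustration:

```python
import numpy as np
from scipy.stats import beta

# Partial prior information: the success probability theta is known only to
# have a Beta(a, b) prior for some (a, b) in given ranges -- a set of priors,
# not a single one. Data: 7 successes in 10 Bernoulli trials.
successes, failures = 7, 3

a_range = np.linspace(1, 5, 41)
b_range = np.linspace(1, 5, 41)

# Posterior under a Beta(a, b) prior is Beta(a + successes, b + failures);
# posterior probability of the assertion {theta > 0.5}:
probs = [beta.sf(0.5, a + successes, b + failures)
         for a in a_range for b in b_range]

# Lower and upper posterior probabilities over the prior set:
print(round(min(probs), 3), round(max(probs), 3))
```

The width of the resulting interval is an honest reflection of how little the partial prior information actually pins down, which a single default prior would hide.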

Previous developments

Under construction...