Inferential Models Book

Purpose of this site
To supplement the book, I'm posting some relevant materials here, including work done after the book was completed. But this site is more than a supplement: there are still many questions to be answered. In particular, while I am mostly satisfied with the answers the book gives to the questions "what is validity?" and "how does one construct a valid IM?", I am unsatisfied with our answers to questions like "why validity?" When these ideas were being developed, it was clear to me, for example, why the validity property is important and worth giving up the comfort of additive probability for. In discussing these ideas with others, however, I found that it was not so clear to them. So, in recent years, I have mostly focused on this question of "why?", and here I want to collect those developments to help complete the IM story.
This site is intended to be a work in progress. I'll post new developments here as they become available and make updates or revisions when I think there are better ways to do or explain things. When possible, I'll also mention relevant unanswered questions and open problems.
News
02/04/2021: I'm organizing a virtual workshop on Bayes, Fiducial, and Frequentist (BFF) Inference, taking place on Friday, February 5th, 2021. For more information, please go to https://researchers.one/conferences/bff65. Slides for my talk on imprecise probability and frequentist inference are here.

01/26/2021: I will be giving a (virtual) short course on the IM developments at the Society of Imprecise Probability Theory and Applications (SIPTA) online school being organized by the Institute for Risk and Uncertainty at the University of Liverpool, UK. The conference website is here and my talks are scheduled for January 27th and 28th, 2021. Slides for my lectures are here.

11/17/2020: Robin Gong and Xiao-Li Meng's Judicious judgment meets unsettling updating: Dilation, sure loss, and Simpson's paradox is scheduled to appear as a discussion paper in Statistical Science. Chuanhai Liu and I were invited as discussants and our comments are here. We make use of the recently developed connections between validity and coherence (see below) to show how the IM framework can help to settle some of the unsettling phenomena Gong and Meng identified.

11/17/2020: An updated version of my paper (with Leo Cella) on validity, conformal prediction, and IMs is available here. We make some interesting connections between validity, which is a frequentist performance-based criterion, and coherence, which is a purely subjective notion, critical to the imprecise probability developments of Walley and others. To my knowledge, these are the first such connections between validity and coherence, but there's still much more to do.
Recent developments
False Confidence Theorem. It is known that Fisher's fiducial distributions only assign meaningful probabilities to certain assertions about the unknowns. The same is true for other data-dependent distributions, including generalized fiducial distributions, confidence distributions, and default-prior Bayesian posterior distributions. However, I'm not aware of any explanation of what's causing the problem. The recently established false confidence theorem states that, for any data-dependent probability distribution, there always exist assertions that are false (i.e., don't contain the true value of the unknown) and yet tend to be assigned high probability. Since high probability assigned to a false assertion can lead to erroneous inference, this false confidence phenomenon is problematic. Interestingly, the problem is not the result of a poor choice of prior, etc.; the culprit is additivity itself. Therefore, to avoid false confidence, uncertainty must be quantified using imprecise probability, e.g., through a valid IM.
 R. Martin (2019). False confidence, non-additive beliefs, and valid statistical inference. International Journal of Approximate Reasoning. [arXiv] [researchers.one]
 M. S. Balch, R. Martin, and S. Ferson (2019). Satellite conjunction analysis and the false confidence theorem. Proceedings of the Royal Society, Series A. [arXiv]
 C. Cunen, N. Hjort, and T. Schweder (2020). Comment: Confidence in confidence distributions!
 R. Martin, M. Balch, and S. Ferson (202x). Response to the comment. [researchers.one]
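As a concrete illustration of the phenomenon (my own toy sketch, not an example from the papers above): take X ~ N(θ, 1) with a flat prior, so the posterior for θ is N(x, 1), and consider the false assertion that θ lies outside a tiny neighborhood of the true value. The simulation below checks how often that false assertion receives posterior probability at least 0.95; the function and parameter names are mine.

```python
import math
import random

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def posterior_prob_outside(x, theta0, eps):
    # Flat-prior posterior for theta, given X = x, is N(x, 1).
    # Probability assigned to the assertion A = {theta : |theta - theta0| > eps},
    # which is false because the true theta equals theta0.
    inside = norm_cdf(theta0 + eps - x) - norm_cdf(theta0 - eps - x)
    return 1.0 - inside

random.seed(42)
theta0, eps, reps = 0.0, 0.01, 10_000
high = sum(
    posterior_prob_outside(random.gauss(theta0, 1.0), theta0, eps) >= 0.95
    for _ in range(reps)
)
# Fraction of datasets assigning belief >= 0.95 to a false assertion
print(high / reps)
```

Since the posterior can place at most about 0.008 mass inside a radius-0.01 interval, the false assertion gets belief above 0.95 on essentially every dataset; no choice of (additive) prior fixes this.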
Alternative notions of validity. In the context of prediction, since there are no (non-trivial) assertions about a "next observation" that are necessarily true or false, a modified definition of validity is required. In Cella and Martin below, we develop a new notion of validity that is both intuitive and appropriate for the prediction setting. Moreover, it can be applied to inference problems as well, generalizing the original notion of validity, showing when (generalized) Bayes can be valid, and making connections to other concepts, such as coherence in the sense of de Finetti and Walley. I think this is a promising new direction to pursue.
 C. Liu and R. Martin (2020). Comment on Gong and Meng's "Judicious judgment meets unsettling updating: Dilation, sure loss, and Simpson's paradox." [pdf]
 L. Cella and R. Martin (2020). Strong validity, consonance, and conformal prediction. [researchers.one] [arXiv]
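To make the prediction-validity idea concrete, here is a minimal split conformal prediction sketch (my own toy example; the constructions in the papers above are more general). For exchangeable data, the interval built from calibration nonconformity scores covers the next observation with probability at least 1 − α, with no distributional assumptions beyond exchangeability.

```python
import math
import random

def split_conformal(data, alpha):
    # Split conformal prediction: fit on the first half, calibrate on the second.
    n = len(data)
    fit, cal = data[: n // 2], data[n // 2 :]
    mu = sum(fit) / len(fit)                   # point predictor: training mean
    scores = sorted(abs(z - mu) for z in cal)  # calibration nonconformity scores
    m = len(scores)
    # Conservative (1 - alpha) quantile of the m calibration scores
    k = min(m - 1, math.ceil((m + 1) * (1 - alpha)) - 1)
    q = scores[k]
    return mu - q, mu + q

random.seed(1)
alpha, reps = 0.1, 2000
hits = 0
for _ in range(reps):
    data = [random.gauss(0, 1) for _ in range(100)]
    lo, hi = split_conformal(data, alpha)
    y_next = random.gauss(0, 1)                # future observation
    hits += lo <= y_next <= hi
print(hits / reps)  # empirical coverage, roughly 0.9, i.e., >= 1 - alpha
```

The coverage guarantee here is marginal, over both the data and the future observation; the strong-validity notion in the paper above asks for more than this basic property.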
Connections to imprecise probability. When the IM developments were being made, with the exception of Dempster's and Shafer's work, I was mostly unaware of what had been done in the imprecise probability community. I've recently made this connection, and I think there are some promising developments on the horizon. For example, the properties of so-called possibility measures seem especially suited to statistical inference tasks, and the paper below highlights the connection between IMs and possibility measures. More developments in this direction are expected.
 C. Liu and R. Martin (2020). Inferential models and possibility measures. [researchers.one] [arXiv]
 R. Martin (2019). False confidence, non-additive beliefs, and valid statistical inference. International Journal of Approximate Reasoning. [arXiv] [researchers.one]
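A small illustration of the possibility-measure connection, assuming the standard normal-mean example: a consonant plausibility contour pl_x(θ) = 1 − |2Φ(x − θ) − 1| determines a possibility measure (the plausibility of an assertion is the supremum of the contour over it), and validity amounts to the contour being uniformly distributed at the true θ. The sketch below (my own, with parameter choices mine) checks that property by simulation.

```python
import math
import random

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def plausibility(theta, x):
    # Consonant plausibility contour for the mean of a N(theta, 1) model,
    # given X = x: pl_x(theta) = 1 - |2 Phi(x - theta) - 1|.
    # The possibility measure of an assertion A is sup over theta in A.
    return 1.0 - abs(2.0 * norm_cdf(x - theta) - 1.0)

# Validity check: at the true theta, pl_X(theta) = 1 - |2U - 1| with
# U ~ Unif(0,1), hence Uniform(0,1), so P{pl_X(theta) <= alpha} = alpha,
# i.e., the true value is rarely assigned small plausibility.
random.seed(7)
theta, alpha, reps = 1.5, 0.05, 20_000
small = sum(
    plausibility(theta, random.gauss(theta, 1.0)) <= alpha for _ in range(reps)
)
print(small / reps)  # close to alpha = 0.05
```

Consonance is what makes this a possibility measure rather than a general imprecise probability: the entire uncertainty quantification is encoded in the single contour function.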
Handling partial prior information. An apparent advantage of the Bayesian formulation is that it can incorporate available prior information into the analysis in a relatively straightforward way. To take full advantage of this, one needs a precise prior probability distribution for every unknown, but that's just not possible in every application. Simply plugging in a default prior in place of a real one is not the answer, since an "incorrectly specified" prior can lead to invalid inference. What's needed is a framework that can incorporate real prior information about the quantities for which it's available and leave the priors unspecified everywhere else. Imprecise probability theory is designed precisely for such cases, so I expect this to be an advantage for IMs. Some relevant references are below:
 L. Cella and R. Martin (2019). Incorporating expert opinion in an inferential model while retaining validity. Proceedings of the 11th International Symposium on Imprecise Probabilities: Theories and Applications.
 R. Martin (2019). On valid uncertainty quantification about a model. Proceedings of the 11th International Symposium on Imprecise Probabilities: Theories and Applications. [researchers.one]
 Y. Qiu, L. Zhang, and C. Liu (2018). Exact and efficient inference for partial Bayes problems. Electronic Journal of Statistics.
 R. Martin, H. Xu, Z. Zhang, and C. Liu (2016). Valid uncertainty quantification about the model in linear regression. [arXiv]
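As a toy sketch of what partial prior information looks like in imprecise-probability terms (my own illustration, not the constructions in the papers above): suppose X ~ N(θ, 1) and all that's known about the prior is that it is N(m, 1) with mean m somewhere in an interval. Optimizing the posterior probability of an assertion over that class of priors yields lower and upper probabilities instead of a single number.

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def posterior_prob_positive(x, m):
    # X ~ N(theta, 1) with prior theta ~ N(m, 1): posterior is N((x + m)/2, 1/2).
    # Posterior probability of the assertion {theta > 0}.
    mean, sd = (x + m) / 2.0, math.sqrt(0.5)
    return 1.0 - norm_cdf(-mean / sd)

def posterior_bounds(x, m_lo, m_hi, grid=200):
    # Partial prior info: only the prior mean m in [m_lo, m_hi] is known.
    # Optimize over the prior class to get lower/upper posterior probabilities.
    ms = [m_lo + (m_hi - m_lo) * i / grid for i in range(grid + 1)]
    probs = [posterior_prob_positive(x, m) for m in ms]
    return min(probs), max(probs)

lo, hi = posterior_bounds(x=0.8, m_lo=-1.0, m_hi=1.0)
print(round(lo, 3), round(hi, 3))  # lower/upper probability of {theta > 0}
```

The gap between the lower and upper probabilities reflects exactly how much prior information is missing; a precise prior would collapse it to a point, while a vacuous prior class would widen it further.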
Previous developments
Under construction...