Suppose we randomly pull two agents from a population and ask them to observe an unfolding, infinite sequence of zeros and ones. If each agent starts with a prior belief about the true sequence and updates this belief as successive observations are revealed, what is the chance that the two agents will come to agree on the likelihood that the next draw is a one? In this paper we show that there is no chance. More formally, we show that under a very unrestrictive definition of what it means to draw priors “randomly,” the probability that two priors have any chance of weakly merging is zero. Indeed, almost surely, the two measures will be mutually singular: one prior will assign probability one to a set of sequences that the other deems impossible, and vice versa. Our result is meant as a critique of the “rational learning” literature, which seeks positive convergence results on infinite product spaces by augmenting the process of Bayesian updating with seemingly mild regularity conditions, variously labeled “consistency” or “compatibility” assumptions. Our object is to investigate just how regular these assumptions and results are when considered in the space of all possible prior distributions. Our results on the genericity of nowhere weak merging and of singularity speak not just to the specific assumptions and results that appear in the literature, but to the “rational learning” approach generally. We call instead for a different approach to learning, one that recognizes the necessity of genuine, substantive restrictions on beliefs and proposes “extra-rational” restrictions explicitly grounded in our best understanding of human behavior, ideally gleaned from experimental data.

Available for download at http://ssrn.com/author=2205
Miller, Ronald I. and Sanchirico, Chris William, "Almost Everybody Disagrees Almost All the Time: The Genericity of Weakly Merging Nowhere" (1997). Faculty Scholarship at Penn Law. 2.
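The failure of weak merging between mutually singular priors can be illustrated with a minimal sketch, not drawn from the paper itself: two “dogmatic” agents whose point-mass priors (on illustrative Bernoulli parameters 0.3 and 0.7, chosen here for concreteness) never update, so their predictions about the next draw disagree forever. By the strong law of large numbers, each such prior assigns probability one to a set of sequences the other deems impossible, which is exactly the singularity the abstract describes.

```python
import random

def predictive(prior_p, observations):
    """Posterior predictive P(next draw = 1) under a point-mass prior.

    A point-mass prior admits no updating: whatever the data, the
    posterior equals the prior, so the prediction is constant.
    """
    return prior_p

# Agent A is certain the sequence is i.i.d. Bernoulli(0.3);
# agent B is certain it is i.i.d. Bernoulli(0.7).
random.seed(0)
truth = 0.3  # data happen to favor agent A's model
data = [1 if random.random() < truth else 0 for _ in range(10_000)]

p_a = predictive(0.3, data)
p_b = predictive(0.7, data)

# The disagreement about the next draw never shrinks, no matter
# how much evidence accumulates.
print(abs(p_a - p_b))
```

This is of course the most extreme case; the paper's point is the stronger one that, under a generic draw of priors, such singularity is the rule rather than a pathology.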