Abstract
I present an account of deterministic chance which builds upon the physico-mathematical approach to theorizing about deterministic chance known as the method of arbitrary functions. This approach promisingly yields deterministic probabilities which align with what we take the chances to be—it tells us that there is approximately a 1/2 probability of a spun roulette wheel stopping on black, and approximately a 1/2 probability of a flipped coin landing heads up—but it requires some probabilistic materials to work with. I contend that the right probabilistic materials are found in reasonable initial credence distributions. I note that, with some rather weak normative assumptions, the resulting account entails that deterministic chances obey a variant of Lewis’s ‘principal principle’. I additionally argue that deterministic chances, so understood, are capable of explaining long-run frequencies.
Notes
Strictly speaking, a compatibilist needn’t say this. They could say that the chance is 2/3, or \(\pi \)/4, or even 0 or 1. All it takes to be a compatibilist is to assign some proposition a non-trivial chance in a deterministic world.
See, for instance, Schaffer (2007).
Following philosophical tradition, I reserve the word ‘chance’ for objective probabilities. So, when I call the probabilistic features of deterministic systems ‘chances’, I mean to place those probabilities on the objective side of an objective-subjective dichotomy. It may be that the reader disagrees with me about whether there are deterministic chances because they and I disagree about where to draw the line between objective and subjective. This is not first and foremost a disagreement about the nature of the probabilistic features of deterministic systems. It is rather first and foremost a disagreement about how to use our terms. Others, like Schaffer (2007) and Bradley (2017), will think that chance is whatever plays (well enough) certain theoretical roles like constraining rational credence and explaining frequencies. Part of my goal here is to demonstrate that the probabilistic features ascribed to deterministic systems are capable of playing these kinds of theoretical roles.
I take the existence of this machine to decisively settle, in the negative, Lewis’s (1986, p. 119) question of whether quantum mechanical chance will infect the tossing of a coin to a degree sufficient to render the tychistic chance of heads 1/2.
Laplace (1814, p. 6).
When I say that the die is ‘unfair’, I mean that it is not the case that the chance that one side land up is the same as the chance that any other side land up.
A note on the presentation: the method of arbitrary functions has a long and storied history. My goal in this section is not to give anything like an adequate introduction to the historical development of these ideas, but rather to simply present them in the form that is currently fashionable. See Strevens (2003, §2.A) and von Plato (1983) for accessible introductions to this historical development. See Engel (1992) for a more technical historical introduction.
That is to say: \(\lim _{x \rightarrow \infty } \int _\mathbf {B} f(V - x) \, dV = 1/2\).
See Keller (1986), in which he determines the probability of heads as the initial upwards and angular velocities go to infinity, supposing that the initial distribution is absolutely continuous.
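The limiting behaviour described in this note can be illustrated numerically. The sketch below is not Keller’s model: the dynamics are an invented toy (‘heads’ iff \(\lfloor V^2 \rfloor\) is even), chosen only because, like Keller’s, its outcome bands become arbitrarily fine at high velocity, so that shifting any fixed smooth initial density toward high velocities drives the probability of heads to 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_heads(x, n=200_000):
    """Estimated probability of 'heads' when the initial density is shifted by x."""
    # A fixed smooth initial density, translated by x: V ~ Normal(x, 1).
    v = rng.normal(loc=x, scale=1.0, size=n)
    # Invented toy dynamics: 'heads' iff floor(V^2) is even. The velocities
    # leading to heads form bands of width ~ 1/(2V), arbitrarily fine at high V.
    return float(np.mean(np.floor(v ** 2) % 2 == 0))

for x in (1, 5, 50):
    print(x, round(p_heads(x), 3))
```

At \(x = 50\) the estimate sits very close to 1/2, whatever (reasonably smooth) density one starts from; at low \(x\) the bands are coarse relative to the density, and the probability can stray from 1/2.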
See Strevens (2011, §3).
Following Keller, I set the coin’s initial height equal to its radius in order to simplify the math. Nothing significant changes if we vary the coin’s initial height.
For a discussion of the application of the method of arbitrary functions to several other chance processes, see Engel (1992) and Strevens (2003, 2013). For a development of Keller’s analysis which incorporates precession and shows surprisingly that a flipped coin is slightly more likely to land on whichever face was up initially (\(\approx \) 51%), see Diaconis et al. (2007).
I’m using ‘\(:=\)’, rather than ‘\(=\)’, to emphasize that the value of O is determined by the values of \(C_1 \ldots C_N\), and not vice versa. ‘\(:=\)’ is therefore not a symmetric relation, unlike ‘\(=\)’.
‘\(a ~\mathrm {mod} ~b\)’ is the remainder when a is divided by b.
A few words on notation. Throughout, I will use ‘\(\phi _O\)’ to stand both for the function on the right-hand side of a dynamical equation \(O := \phi _O(C_1, C_2, \ldots , C_N)\) and for the entire dynamical equation itself. Also, I will use ‘f’ indiscriminately to denote a probability density function (if the variables over which it is defined are continuous), a probability mass function (if the variables over which it is defined are discrete), and the probability function determined by such pdfs or pmfs.
I will be assuming throughout that rational credences are probabilities.
As I will be understanding the term ‘microconstant’, the proportion here is calculated with the Lebesgue measure, corresponding to the intuitive length of a set of points in \(\mathbb {R}\), the intuitive area of a set of points in \(\mathbb {R}^2\), the intuitive volume of a set of points in \(\mathbb {R}^3\), etc. For a more careful definition of Lebesgue measure, see Bartle (1966).
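The role the Lebesgue measure plays in microconstancy can be made concrete with a small computation. The evolution function below is a hypothetical toy (not one of Strevens’s examples): we partition a stretch of the initial-condition space into small cells and check that the Lebesgue proportion of each cell leading to a given outcome, approximated by a fine uniform grid, is (approximately) the same in every cell.

```python
import numpy as np

# Hypothetical toy evolution function: outcome 'heads' iff floor(v^2) is even.
def heads(v):
    return np.floor(v ** 2) % 2 == 0

# Partition a stretch of the initial-condition space into 10 small cells and
# estimate, for each cell, the Lebesgue proportion (approximated by a fine
# uniform grid) of initial conditions leading to 'heads'.
edges = np.linspace(1000.0, 1001.0, 11)        # 10 cells of width 0.1
proportions = []
for lo, hi in zip(edges[:-1], edges[1:]):
    grid = np.linspace(lo, hi, 100_001)
    proportions.append(float(heads(grid).mean()))

# Microconstancy: every cell yields (approximately) the same proportion.
print([round(p, 3) for p in proportions])
```

At these velocities the heads-bands have width on the order of 1/(2v) ≈ 0.0005, so each 0.1-wide cell contains hundreds of bands and every cell’s proportion lands near 1/2.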
Of course, Strevens need not, and does not, claim that they are. maf provides a sufficient, and not a necessary, condition for the existence of a deterministic chance.
The proportion here is calculated with the Lebesgue measure. See footnote 25.
See, for instance, Woodward (2016).
From the perspective of fundamental physics, the variables used to describe many social and biological systems will appear quite unnatural. Might we then expect some proportion-altering transformation \(\zeta \) to deliver variables just as natural as \(C_1, \ldots , C_N\) themselves? Perhaps, though I’m inclined to think not; for I don’t think that the naturalness of high-level variables is to be judged by reducing such variables to the quantities of fundamental physics. So I’m inclined to think that the standard variables used in higher-level sciences are themselves rather natural, and the proportion-altering transformations of them rather unnatural.
A function like \(\phi _P\) will only be microconstant if we choose an appropriate a and b. To persuade yourself that this will work out for some choice of a and b, I recommend playing around in Mathematica. For instance, to get a sense of how functions like this behave, you can set \(a=9245.8698\) and \(b=6.282\). Then, define the function f[x_] := Mod[a*x + b, 1], which says what \(S_{n+1}\) will be, given the input \(S_n\). To see what this function produces when iterated 300 times, define g[x_] := Nest[f,x,300]. Then, to see whether the output is between 0.222 and 0.223, define h[x_] := If[x \(\ge \) 0.222&& x \(\le \) 0.223, 1, 0]. For a nice visual representation of which initial seeds lead to a payout on the 300th pull, use DiscretePlot[h[g[x]], \(\{\texttt {x, 0, 1, 0.00001}\}\)] to sample 100,000 real numbers between 0 and 1, at steps of 0.00001, and see whether they lead to a payout on the 300th pull. You’ll see that about 100 of them do, and those 100 are randomly distributed over the interval between 0 and 1.
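The same experiment is easy to reproduce outside Mathematica. Here is a rough Python translation (not from the paper): floating-point rounding makes the individual trajectories chaotic and implementation-dependent, but the statistics come out the same — roughly 100 of the 100,000 sampled seeds lead to a payout on the 300th pull, scattered over the unit interval.

```python
import numpy as np

a, b = 9245.8698, 6.282

# One session: the seed update S_{n+1} = (a*S_n + b) mod 1, iterated 300 times.
def pull(seeds, n=300):
    s = np.asarray(seeds, dtype=float)
    for _ in range(n):
        s = (a * s + b) % 1.0
    return s

# Sample 100,000 initial seeds in [0, 1) at steps of 0.00001 and count how
# many lead to a payout (a final seed in [0.222, 0.223]) on the 300th pull.
seeds = np.arange(0.0, 1.0, 0.00001)
final = pull(seeds)
payouts = np.count_nonzero((final >= 0.222) & (final <= 0.223))
print(payouts, "payouts out of", seeds.size)
```

Because the map is strongly expanding, the 300th-pull outcomes are effectively equidistributed over [0, 1), so the payout count hovers around 0.001 of the sample.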
See Rosenthal (2012, p. 231), who suggests (but does not ultimately endorse—for reasons unrelated to those in the body above) that, when we model cases like slot machine with proximate dynamical equations like \(\phi _{P}\), we have “overlooked a nomological factor relevant to the appearance of initial states and thus (indirectly) for the experimental outcomes. If we step further back and look how the initial states themselves come about, we should be able to discover this additional factor and re-model the experimental situation, this time explicitly paying attention to the neglected nomological influence...at some point, when we had taken all nomological factors relevant to the experimental result into account, we would finally arrive at a space in application to which [maf yields] the correct outcome probabilities.”
Savage (1971, pp. 420–21, with slight notational changes).
“The conclusion does not apply at all to a person who feels quite sure of the second decimal of V.” (Savage 1971, p. 421), with notational changes.
See von Plato (1983, p. 42).
Why would they be unreasonable? I’m inclined to treat this as a datum for epistemology (pace radical subjectivist Bayesians), but we could justify it by appealing to a general normative principle like the following: your credences shouldn’t strongly discriminate between very similar possibilities unless you have evidence which discriminates between these possibilities. (This is a strictly weaker principle than the principle of indifference (POI), and one that doesn’t succumb to the usual objections to POI).
Here, I am using ‘indeterminate’ in the sense that has become common in the literature on vagueness. You could be an epistemicist about this kind of indeterminacy, in which case you would understand me as saying: there is some one rational credence to adopt in any proposition in the absence of evidence—though nobody can know what it is. Or you may think that this kind of indeterminacy is due to an unsettledness in the way we use language, or that there’s something genuinely unsettled about normative reality. See Williamson (1994) for more on different theories of indeterminacy.
There is a common framework for representing indeterminate probabilities like these (see van Fraassen (1990, 2006), Levi (1974), Walley (1991), Joyce (2010), and White (2009)). In this framework, we would take all the admissible candidates for f and gather them into a set, call it ‘\(\mathcal {F}\)’. We would then use \(\mathcal {F}\) to represent a reasonable initial doxastic state. The probabilities included in \(\mathcal {F}\) are akin to the admissible precisifications in supervaluationist theories of vagueness (see Fine (1975) and Keefe (2000)). While a supervaluationist keeps these admissible precisifications in their metalinguistic interpretation of a theory, the imprecise probabilist uses them in their first-order theorizing. From my perspective, it is better to handle indeterminacy with respect to reasonable credence in the same way that other indeterminacy is handled, and to keep the admissible precisifications in the meta-language.
To appreciate the distinction in scope here, consider the sorites argument. If you accept that classical logic is determinately true, then you’ll accept that it is determinately the case that there is an n such that n is the least number of grains that makes a heap. However, if you think that it’s indeterminate when some grains go from a heap to a non-heap, then you’ll deny that there is any number n which is determinately the least number of grains which makes a heap. Similarly, I am suggesting that it’s determinately the case that there is one and only one rational credence to adopt in the absence of evidence; though it’s indeterminate which credence it is, so there is no credence which is determinately the one and only rational credence to adopt in the absence of evidence.
A comment on notation: I will denote the conjunction of propositions p and q with both ‘\(p \wedge q\)’ and ‘pq’.
When principles like enkratic principle show up in the literature—e.g., in Elga (2013)—they are often formulated with a proposition like \(\llbracket f = \mathrm {f} \rrbracket \), which says that \(\mathrm {f}\) is a reasonable initial credence function. Then, authors state the enkratic requirement as follows: \(f(A \mid \llbracket f = \mathrm {f} \rrbracket ) = \mathrm {f}(A)\). The enkratic principle in the body is strictly weaker than this principle.
See Titelbaum (2014), who argues for this view from a principle slightly stronger than what I have named enkratic principle in the body. Throughout, by ‘a priori requirements of rationality’, I just mean the constraints which rationality places on our doxastic states in the absence of evidence.
Admissibility is defined relative to bodies of total evidence; for, given this definition of admissibility, admissibility need not agglomerate—simply because \(E_1\) is admissible and \(E_2\) is admissible, this needn’t mean that \(E_1 \wedge E_2\) is admissible.
Or, rather, an “almost sufficient” condition—a qualification Lewis included due to worries about news from the future.
Lewis (1994, p. 484).
See Lewis (1980, pp. 285–287).
Of course, in order to come to know that the deterministic chance of a coin landing heads is 1/2, we must antecedently know that there is a deterministic chance that the coin lands heads.
An interesting case to consider arises when we are not uncertain about the dynamics, but we are uncertain about the requirements of rationality. If we take such cases to be possible then we could acquire a posteriori confirmation of normative propositions about the requirements of rationality by observing frequency data.
This only holds if you know that exactly one of the pre-selected seeds leads to a payout on the 300th pull. Without this knowledge, you will not know about the 1/6th chance determined by \(\llbracket \phi _P \circ \phi _S \rrbracket \). In that case, the particular deterministic principal principle will tell you that you should have a credence of 1/1000 that the machine pays out.
Cf. Carroll (1895).
Loewer is considering an account predicated on the principle of indifference, but his objections apply with equal force to the subjectivist account.
Lewis (1994, p. 483).
Shouldn’t we still want there to be some connection between chance and frequency? Of course we should. And there is: as the number of independent trials gets larger, so too does the chance that the frequency of an outcome is close to the chance of that outcome. This is the law of large numbers (stated roughly). It is the most that we should ever want an account of chance to say about the connection between frequency and chance; for it is the most that is true about the connection between chance and frequency. And the subjectivist account says it.
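The rough statement of the law of large numbers in this note is easy to check numerically. The sketch below (mine, not the paper’s) lets a fair Bernoulli trial stand in for the coin: as the number of independent trials grows, so does the chance that the observed frequency lies within a fixed tolerance of the chance.

```python
import numpy as np

rng = np.random.default_rng(1)
p, eps, reps = 0.5, 0.02, 2_000   # chance, tolerance, repeated experiments

# For each number of trials n, estimate the chance that the observed
# frequency of heads lies within eps of the chance p.
close = {}
for n in (100, 1_000, 10_000):
    freqs = rng.binomial(n, p, size=reps) / n
    close[n] = float(np.mean(np.abs(freqs - p) <= eps))
    print(n, close[n])
```

With 100 trials, the frequency falls within 0.02 of the chance only a minority of the time; with 10,000 trials, it does so almost always.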
References
Abrams, M. (2012). Mechanistic probability. Synthese, 187(2), 343–375.
Albert, D. Z. (2000). Time and chance. Cambridge: Harvard University Press.
Albert, D. Z. (2015). After physics. Cambridge: Harvard University Press.
Bartle, R. G. (1966). The elements of integration and Lebesgue measure. New York: Wiley. Wiley Classics Library Edition.
Beisbart, C. (2016). A Humean guide to Spielraum probabilities. Journal for General Philosophy of Science, 47, 189–216.
Bradley, S. (2017). Are objective chances compatible with determinism? Philosophy Compass, 12(8), e12430.
Butterfield, J. (2011). Less is different: Emergence and reduction reconciled. Foundations of Physics, 41(6), 1065–1135.
Carroll, L. (1895). What the tortoise said to Achilles. Mind, 4(14), 278–280.
Clark, P. (1987). Determinism and probability in physics. Proceedings of the Aristotelian Society, Supplementary, 61, 185–210.
de Finetti, B. (1974). Theory of probability (Vol. 1). New York: Wiley.
Diaconis, P., Holmes, S., & Montgomery, R. (2007). Dynamical bias in the coin toss. SIAM Review, 49(2), 211–235.
Elga, A. (2013). The puzzle of the unmarked clock and the new rational reflection principle. Philosophical Studies, 164(1), 127–139.
Engel, E. (1992). A road to randomness in physical systems (Vol. 71). Berlin: Springer. Lecture notes in statistics.
Fine, K. (1975). Vagueness, truth, and logic. Synthese, 30, 265–300.
Glynn, L. (2010). Deterministic chance. The British Journal for the Philosophy of Science, 61(1), 51–80.
Greco, D. (2014). A puzzle about epistemic akrasia. Philosophical Studies, 167, 201–219.
Hempel, C. (1965). Aspects of scientific explanation and other essays in the philosophy of science. New York: Free Press.
Hoefer, C. (2007). The third way of objective probability: A skeptic’s guide to objective chance. Mind, 463, 549–596.
Hopf, E. (1934). On causality, statistics, and probability. Journal of Mathematics and Physics, 13, 51–102.
Horowitz, S. (2014). Epistemic akrasia. Noûs, 48(4), 718–744.
Ismael, J. (2009). Probability in deterministic physics. Journal of Philosophy, 106, 89–109.
Joyce, J. M. (2010). A defense of imprecise credences in inference and decision making. Philosophical Perspectives, 24(1), 281–323.
Keefe, R. (2000). Theories of vagueness. Cambridge: Cambridge University Press.
Keller, J. B. (1986). The probability of heads. The American Mathematical Monthly, 93(3), 191–197.
Laplace, M. D. (1814). A philosophical essay on probabilities (Translated from the 6th French Edition by F. W. Truscott, & F. L. Emory). New York: Dover Publications.
Lasonen-Aarnio, M. (2014). Higher-order evidence and the limits of defeat. Philosophy and Phenomenological Research, 88(2), 314–345.
Lasonen-Aarnio, M. (2015). New rational reflection and internalism about rationality. Oxford studies in epistemology (Vol. 5, pp. 145–171). Oxford: Oxford University Press.
Lasonen-Aarnio, M. (forthcoming). Enkrasia or evidentialism? Learning to love mismatch. Philosophical Studies.
Levi, I. (1974). On indeterminate probabilities. Journal of Philosophy, 71, 391–418.
Lewis, D. K. (1980). A subjectivist’s guide to objective chance. In R. C. Jeffrey (Ed.), Studies in inductive logic and probability (Vol. II, pp. 263–293). Berkeley: University of California Press.
Lewis, D. K. (1986). A subjectivist’s guide to objective chance. In: Philosophical papers (Vol. II). Oxford: Oxford University Press.
Lewis, D. K. (1994). Humean supervenience debugged. Mind, 103(412), 473–490.
Loewer, B. (2001). Determinism and chance. Studies in the History and Philosophy of Modern Physics, 32(4), 609–620.
Loewer, B. (2004). David Lewis’s Humean theory of objective chance. Philosophy of Science, 71, 1115–1125.
Loewer, B. (2007). Counterfactuals and the second law. In H. Price & R. Corry (Eds.), Causation, physics, and the constitution of reality: Russell’s republic revisited, Chap 11 (pp. 293–326). Oxford: Oxford University Press.
Myrvold, W. C. (2012). Deterministic laws and epistemic chances. In Y. Ben-Menahem & M. Hemmo (Eds.), Probability in physics (pp. 73–85). Berlin: Springer.
Poincaré, H. (1905). Science and hypothesis, Chap 11: The calculus of probabilities. New York: The Walter Scott Publishing Company.
Poincaré, H. (1912). Calcul des probabilités (2nd ed.). Paris: Gauthier-Villars.
Popper, K. (1982). Quantum theory and the schism in physics. New Jersey: Rowman and Littlefield.
Ramsey, F. P. (1931). Truth and probability. In R. Braithwaite (Ed.), Foundations of mathematics and other logical essays, chap VII (pp. 156–198). London: Kegan, Paul, Trench, Trubner & Co., Ltd.
Reichenbach, H. (1971). The theory of probability: An Inquiry into the logical and mathematical foundations of the calculus of probabilities (2nd ed.). Berkeley: University of California Press.
Roberts, J. T. (2016). The range conception of probability and the input problem. Journal for General Philosophy of Science, 47, 171–188.
Rosenthal, J. (2010). The natural-range conception of probability. In G. Ernst & A. Hüttemann (Eds.), Time, chance, and reduction: Philosophical aspects of statistical mechanics (pp. 71–91). Cambridge: Cambridge University Press.
Rosenthal, J. (2012). Probabilities as ratios of ranges in initial-state spaces. Journal of Logic, Language and Information, 21, 217–236.
Rosenthal, J. (2016). Johannes von Kries’s range conception, the method of arbitrary functions, and related modern approaches to probability. Journal for General Philosophy of Science, 47, 151–170.
Savage, L. J. (1954). The foundations of statistics (2nd ed.). New York: Dover Publications.
Savage, L. J. (1971). Probability in science: A personalistic account. In P. Suppes, L. Henkin, A. Joja, & G. C. Moisil (Eds.), Logic, methodology, and philosophy of science (Vol. 4, pp. 417–428). Amsterdam: North-Holland Publishing Company.
Schaffer, J. (2007). Deterministic chance? The British Journal for the Philosophy of Science, 58, 113–140.
Sober, E. (2010). Evolutionary theory and the reality of macro probabilities. In E. Eells & J. H. Fetzer (Eds.), The place of probability in science (pp. 133–161). Dordrecht: Springer.
Strevens, M. (2000). Do large probabilities explain better? Philosophy of Science, 67, 366–390.
Strevens, M. (2003). Bigger than chaos: Understanding complexity through probability. Cambridge: Harvard University Press.
Strevens, M. (2011). Probability out of determinism. In C. Beisbart & S. Hartmann (Eds.), Probability in physics (pp. 339–364). Oxford: Oxford University Press.
Strevens, M. (2013). Tychomancy. Cambridge: Harvard University Press.
Titelbaum, M. G. (2014). Rationality’s fixed point (or: In defense of right reason). Oxford Studies in Epistemology, 5, 253–294.
van Fraassen, B. C. (1990). Figures in a probability landscape. In J. M. Dunn & A. Gupta (Eds.), Truth or consequences: Essays in honor of Nuel Belnap (pp. 345–356). Dordrecht: Kluwer Academic Publishers.
van Fraassen, B. C. (2006). Vague expectation value loss. Philosophical Studies, 127, 483–491.
von Kries, J. (1886). Principien der Wahrscheinlichkeitsrechnung, eine logische Untersuchung. Freiburg im Breisgau: Mohr.
von Plato, J. (1982). Probability and determinism. Philosophy of Science, 49(1), 51–66.
von Plato, J. (1983). The method of arbitrary functions. The British Journal for the Philosophy of Science, 34(1), 37–47.
Walley, P. (1991). Statistical reasoning with imprecise probabilities. London: Chapman & Hall.
White, R. (2005). Epistemic permissiveness. Philosophical Perspectives, 19, 445–459.
White, R. (2009). Evidential symmetry and mushy credence. Oxford Studies in Epistemology, 3, 161–186.
Williamson, T. (1994). Vagueness. London: Routledge.
Williamson, T. (2000). Knowledge and its limits. Oxford: Oxford University Press.
Williamson, T. (2011). Improbable knowing. In T. Dougherty (Ed.), Evidentialism and its discontents. Oxford: Oxford University Press.
Williamson, T. (2014). Very improbable knowing. Erkenntnis, 79(5), 971–999.
Woodward, J. (2016). The problem of variable choice. Synthese, 193(4), 1047–1072.
Zabell, S. (2016). Johannes von Kries’s Principien: A brief guide for the perplexed. Journal for General Philosophy of Science, 47, 131–150.
Acknowledgements
Thanks to Michael Caie, Cian Dorr, Daniel Drucker, Jeremy Goodman, Zoë Johnson King, Harvey Lederman, Jonathan Livengood, Japa Pallikkathayil, Bernhard Salow, James Shaw, Erica Shumener, Charles Sebens, Jack Spencer, Michael Strevens, Rohan Sud, Brad Weslake, two anonymous reviewers, and the Logic, Language, Metaphysics, and Mind Reading Group at MIT for helpful conversations about this material.
Cite this article
Gallow, J.D. A subjectivist’s guide to deterministic chance. Synthese 198, 4339–4372 (2021). https://doi.org/10.1007/s11229-019-02346-y