
Fine-tuning and the probability distribution on the space of physical constants

by Jakub Supeł

Introduction


The motivation and the starting point for this post is the argument from fine-tuning to the existence of God, a simple version of which has been popularized by William Lane Craig (https://www.reasonablefaith.org/finetuning/). Fine-tuning can be described as the fact that some physical constants (and initial conditions) lie within a relatively small range that allows for the development of any organized life[1].


Many have argued that such a coincidence is a priori extremely improbable. Some have proposed the multiverse theory as a solution, while others employed the hypothesis of a cosmic designer – but let us put aside this dilemma for now.


The inference from the fact of fine-tuning to its low a priori probability has been challenged on the grounds that there is no obvious probability distribution on the space of physical constants. I think it is a fair objection, at least on its face. In this post I will focus specifically on this objection.


My usual response was to argue that although we cannot justify any concrete probability distribution, we can at least roughly estimate the order of magnitude of the probability of fine-tuning by comparing the life-permitting range with the value of the constant, or with the range of values consistent with the theory. This approach takes for granted that the probability distribution is "smooth enough", which enables us to deduce the order of magnitude of the probability of fine-tuning. Any sharp peak within the life-permitting range would be so unnatural, given the structure of the theory, that it would surely require the kind of explanation that the designer hypothesis offers. I think an honest man will agree that postulating sharp peaks in the probability distribution amounts to sweeping the problem under the rug.
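To make the kind of estimate I have in mind a little more concrete, here is a sketch with invented numbers, assuming only that the prior is smooth enough to be treated as roughly uniform over a range R suggested by the theory (or by the constant's actual value), while the life-permitting window has width Δ:

P(\text{life-permitting}) \;\approx\; \frac{\Delta}{R}, \qquad \text{e.g.}\quad \Delta \sim 10^{-5}\,R \;\Rightarrow\; P \sim 10^{-5}.

Nothing in this estimate depends on the fine details of the distribution, only on its not being sharply peaked inside the life-permitting window.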


But the language of the above response is somewhat imprecise, and I think we can do better than this. First I will sketch my own argument, and then I will try to explain the argument from an article in the European Journal for Philosophy of Science[2], which is somewhat similar.



Probability distribution is necessary for measurements


Whenever a physicist puts forward a theory, it might contain parameters whose values are not fixed by the theory itself but have to be measured experimentally. Consider Newton's theory of gravity. According to Newton, two massive bodies (under certain conditions) will attract each other with a "force" equal to F = G*Mm/r^2. The constant G was not predicted by Newton - it had to be measured experimentally. (There is really no way to guess the value of G without looking at the world.) Suppose the necessary experiment was performed and the experimenters' observations match the predictions of Newton's theory with G = 6.7x10^-11 SI units. Should we then conclude that Newton's theory is true and that G = 6.7x10^-11 SI units?


It might seem obvious to you that the answer is yes. But (leaving aside the question of whether we should accept Newton's theory as true) I claim that we are justified in accepting the measured value of the constant G only if we assume something about its prior probability distribution.


The reason for this interesting conclusion is that scientific experiments are of a probabilistic nature, in the following sense. In the usual cases, we have every reason to suppose that the experimenters did not make any serious mistakes and that the conclusions were drawn correctly. Nevertheless, we do not have absolute certainty: the most that a particular measurement can accomplish is to increase the plausibility of a certain fact. (There are famous cases in which it actually made more sense to suppose that the experimenters had made a mistake.) Although sometimes this increase is so significant that a single experiment can settle the matter, it can never become an absolute proof of the purported fact. Since experiments, at best, merely increase the plausibility of a certain fact, the final plausibility will necessarily depend on the initial (prior) plausibility, according to Bayes' theorem. (Even if two physicists assign different prior probabilities to a certain theory, they often agree on the conclusions – this is because the experimental evidence is sufficiently powerful.)
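In symbols: writing H for a hypothesis (say, a particular value of a constant) and D for the experimental data, Bayes' theorem gives

P(H \mid D) \;=\; \frac{P(D \mid H)\, P(H)}{P(D)},

so the plausibility of H after the experiment always carries the factor P(H), the prior plausibility. A measurement can overwhelm the prior when the likelihood ratio is large, but it can never remove the dependence on it.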


Since experiments have this probabilistic nature, our interpretation of the experiment that measured G will depend on our presuppositions about the probability distribution of G. Suppose, for example, that our prior probability distribution is heavily concentrated around G ~ 10^-5, while values near G ~ 6.7x10^-11 are assumed to be extremely improbable a priori. After the experiment, then, we should conclude that the experimenters probably made a mistake – indeed, for any experiment there exists a prior distribution which makes such an explanation much more plausible than the alternative! But this is clearly absurd. We see, therefore, that in order to perform meaningful measurements of the kind we are discussing, we have to assume a sufficiently slowly-varying probability distribution on the space of physical constants.
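Here is a minimal numerical sketch of this point; all the numbers are invented purely for illustration. Two observers analyse the same report that G ≈ 6.7x10^-11. They agree on how often an experimental blunder could produce such a report, but they start from different priors over the order of magnitude of G:

# A toy Bayesian comparison: does the reported value of G reflect reality,
# or did the experimenters blunder?  All numbers are invented for illustration.

exponents = list(range(-12, -3))        # candidate orders of magnitude: 10^-12 ... 10^-4
reported_exponent = -11                 # the apparatus reported G ~ 6.7e-11
p_correct_report = 0.99                 # chance an honest experiment reports the true order
p_blunder_report = 1e-6                 # chance a blunder happens to produce this report anyway

def likelihood(true_exp):
    """P(report of ~1e-11 | true order of magnitude of G)."""
    return p_correct_report if true_exp == reported_exponent else p_blunder_report

def posterior_true_value(prior):
    """Posterior probability that G really is ~1e-11, given the report."""
    evidence = sum(likelihood(e) * prior[e] for e in exponents)
    return likelihood(reported_exponent) * prior[reported_exponent] / evidence

# Prior A: roughly uniform over the candidate orders of magnitude ("slowly varying").
prior_flat = {e: 1.0 / len(exponents) for e in exponents}

# Prior B: almost all of the probability mass concentrated near 10^-5.
prior_peaked = {e: 1e-12 / (len(exponents) - 1) for e in exponents}
prior_peaked[-5] = 1.0 - 1e-12

print("flat prior  :", posterior_true_value(prior_flat))    # ~0.99999: accept the measurement
print("peaked prior:", posterior_true_value(prior_peaked))  # ~1e-7: "they must have blundered"

The same report, interpreted with the peaked prior, is taken as evidence of experimental error rather than of the value of G, which is exactly the absurdity described above.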


The point of all this is that no-one can escape the necessity of establishing some kind of probability distribution on the space of physical constants. Therefore, those who object to the fine-tuning argument on the grounds that there is no such probability distribution challenge the foundations of science! This is also one of the conclusions of the article I will discuss in the next section, although for slightly different reasons.



Probability distribution is necessary for a theory to be considered scientific


The author of that article goes even further, arguing that a probability distribution over the space of constants is needed in order to assess the validity of the theory itself. Below is a summary of his argument (emphasis mine):


"A physical theory, to be testable, must be sufficiently well-defined as to allow probabilities of data (likelihoods) to be calculated, at least in principle. Otherwise, the theory cannot tell us what data we should expect to observe, and so cannot connect with the physical universe. If the theory contains free parameters, then since the prior probability distribution of the free parameter is a necessary ingredient in calculating the likelihood of the data, the theory must justify a prior. In summary, a theory whose likelihoods are rendered undefined by untamed infinities simply fails to be testable. In essence, it fails to be a physical theory at all."[2]


Let me explain it in my own words. When someone puts forward a physical theory that predicts new data, the predictions are given, by necessity, in a probabilistic manner. (For example, the Higgs boson theory predicted that the LHC would likely observe a resonance near 120 GeV.) The correspondence between experimental values and predicted values then constitutes experimental evidence for the theory (experiment being the epitome of "verification"), where the strength of the evidence depends on how much likelihood the theory gave to this particular data.
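In symbols: comparing the theory T with some rival T', the data D shift the odds between them by the ratio of the likelihoods (the Bayes factor),

\frac{P(T \mid D)}{P(T' \mid D)} \;=\; \frac{P(D \mid T)}{P(D \mid T')} \times \frac{P(T)}{P(T')},

so the more probable the theory made the data that were actually observed, the more strongly those data count in its favour.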


The point is that the theory cannot produce empirically accessible statements – statements about the likelihoods of the data – if it is completely agnostic about the probability distribution of its undetermined constants. Note that these requirements concern what it takes to be a viable, testable physical theory at all. We haven't even raised the matter of fine-tuning yet!
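Written out (this is my paraphrase of Barnes's point, not a quotation): if a theory T contains a free parameter θ, the probability it assigns to data D is obtained by averaging the likelihood over the prior for θ,

P(D \mid T) \;=\; \int P(D \mid \theta, T)\, p(\theta \mid T)\, d\theta.

If p(θ | T) is left unspecified, or is "flat" over an infinite range and therefore cannot be normalized, this integral is undefined and the theory assigns no definite probability to any data, which is precisely the sense in which it "fails to be testable".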


In conclusion, the article shows that, technically, any fundamental theory needs to specify a distribution on the space of its undetermined constants. This is usually not required in practice, because the details of this distribution do not affect the final assessment of experimental data as long as we are talking about an "honest", reasonably slowly-varying distribution (see the numerical sketch after the inference below). Therefore, if we want the science of physics to stand firm, we have to admit that it is justified to assume the existence of a particular ("sufficiently smooth") probability distribution, and in consequence, it is justified to make the inference from:


The values of fundamental constants lie within a small range that allows for the development of any organized life.


to


Given pure chance, it is extremely improbable that the values of fundamental constants would lie within a small range that allows for the development of any organized life.
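As a side note, the claim made earlier, that the details of a slowly-varying distribution do not affect the assessment of experimental data, can be checked numerically. Here is a small sketch with invented numbers: the same toy measurement of G, analysed under two different broad priors, yields practically the same answer.

import numpy as np

# Toy check that broad, slowly-varying priors barely affect the inferred value of G.
# The measurement is modelled as Gaussian around the reported value; numbers are invented.

g_grid = np.linspace(1e-12, 1e-9, 200_000)      # candidate values of G
measured, sigma = 6.7e-11, 1e-12                # reported value and its error bar

likelihood = np.exp(-0.5 * ((g_grid - measured) / sigma) ** 2)

# Two different "honest" priors: flat in G, and flat in log G.
# Both vary slowly on the scale of the error bar.
prior_flat = np.ones_like(g_grid)
prior_log = 1.0 / g_grid

def posterior_mean(prior):
    post = likelihood * prior
    post /= post.sum()
    return (g_grid * post).sum()

print(posterior_mean(prior_flat))   # ~6.7e-11
print(posterior_mean(prior_log))    # ~6.7e-11 as well; the prior shifts only the far decimals

With either prior the result is driven by the likelihood; only a sharply peaked, contrived prior could overturn the measurement, and such a distribution would itself demand an explanation.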


One must therefore wonder why organized life exists at all, despite the sheer improbability of such an outcome given pure chance. The only two explanations I am aware of are the cosmic designer hypothesis and the multiverse theory, according to which our universe is but one of billions upon billions of other worlds, each with slightly different physical constants and initial conditions. Which of these solutions is more viable? This question, I regret, must be left for another occasion.

  1. Above all, the Higgs coupling constant has this property - see the article below.

  2. L. A. Barnes, Fine-tuning in the context of Bayesian theory testing, European Journal for Philosophy of Science, vol. 8 no. 2 (2018).




Jakub Supeł is a PhD student in Cosmology and Field Theory at the University of Cambridge. Outside of his studies, he is interested in philosophy, apologetics, art and sports.


