Bertuzzi

DEBATE TOPIC: Fine-tuning Constitutes Evidence for Theism over Naturalism

This is a debate between LukeB and cnearing. I will be moderating this debate. LukeB will be taking the affirmative, cnearing the negative.

Opening Affirmative Statement: LukeB - 2000 words max
Opening Negative Statement/Response: cnearing - within 7 days - 2000 words max
LukeB's 1st Rebuttal: within 7 days - 1750 words max
cnearing's 1st Rebuttal: within 7 days - 1750 words max
LukeB's 2nd Rebuttal: within 7 days - 1500 words max
cnearing's 2nd Rebuttal: within 7 days - 1500 words max
LukeB's Final Statement: within 7 days - 1250 words max
cnearing's Final Statement: within 7 days - 1250 words max

** Unless you are LukeB or cnearing, do not post in this thread **

Luke recommends listening to this song in the background while reading.
« Last Edit: November 06, 2016, 06:26:35 pm by Bertuzzi »

LukeB
Theism, Naturalism and Fine-Tuning
« Reply #1 on: November 13, 2016, 09:18:20 pm »
Theism, Naturalism and Fine-Tuning

I'm comparing theism and naturalism. I take naturalism to be the claim that the natural world is all that exists. I take theism to be the claim that there exists a perfectly good, maximally powerful being that exists necessarily (the reason for its existence is found in its own nature, not an external source). This is not a debate about the coherence or prior probability of these ideas.

We need to consider the consequences of these ideas. Keep in mind that "what does X imply?" is a different question to "what do advocates of X believe?". Ideas have implications, independently of what their advocates believe.

What is Fine-Tuning?

First things first: ‘Fine-tuning’ is a metaphor, one that brings to mind an old radio set. This metaphor unfortunately involves a guiding hand that sets the dials, giving the impression that 'fine-tuned' means cleverly arranged or made for a purpose by a fine-tuner. Whether such a fine-tuner of our Universe exists or not, this is not the sense in which I use the term.

Fine-tuning is a technical term borrowed from physics. It refers to a suspiciously precise assumption. To illustrate, suppose that a bank vault was robbed. The armoured door was opened without force; the robbers used the access code. The police arrive on the scene.

Drebin: Maybe they guessed the code.
Hocken: No way, Frank. There are a trillion combinations. The system shows that they entered the code correctly on the first attempt. Surely the odds against that are astronomical.
Drebin: But it's still possible, right?

Here is one way to see the problem with Drebin’s theory. To explain the data, we need to fine-tune the theory by assuming that the code that the robbers guessed was the correct code. That is a suspiciously precise but totally unmotivated assumption, which tells against Drebin's theory.

(To forestall a common reply, this is an illustration, not an analogy. I'm not saying that the universe is analogous to a crime scene. I'm illustrating a principle.)

The Set-Up

How do we test theism and naturalism? I'll treat them as "theories" in the context of Bayesian theory testing. We want to know their probability, given everything we know. We use Bayes theorem: break what we know into two pieces, A and B. For each theory T, calculate the probability of A, given T and B (the likelihood). Also, calculate the probability of T on B alone (the prior). Bayes theorem often comes attached to a narrative about "evidence" and "background information" and "updating", but none of this is essential. It's a theorem. It works for any propositions. We divide into A and B for convenience.
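The machinery just described can be sketched in a few lines (all numbers here are hypothetical, purely to show the mechanics): for each theory we multiply prior by likelihood and renormalise over the rivals.

```python
# A minimal sketch of Bayesian theory comparison. For each theory T we need
# the likelihood p(A | T, B) and the prior p(T | B); the posterior p(T | A, B)
# is their product, renormalised over the competing theories.

def posteriors(priors, likelihoods):
    """Normalise prior * likelihood over the competing theories."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

# Two rival theories with equal priors but different likelihoods for data A:
print(posteriors([0.5, 0.5], [0.9, 0.1]))  # approximately [0.9, 0.1]
```

Note that, as the post says, the split into A and B is a matter of convenience; the function above only cares about the numbers fed into it.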

Naturalism and theism are ultimate or fundamental statements about the universe - if they are true, then there are no "deeper" statements, so to speak. We can and should, then, ask what kind of universe we would expect. That is, what is the likelihood of this universe on naturalism and theism?

Now, that's a rather large question. The first thing to consider is: what universes other than this one are possible? When we use a physical theory to predict, say, a reading on a meter attached to our experimental apparatus, the likelihood is defined against the backdrop of all the possible readings, even though only one may actually be observed. This is necessary: the likelihood is normalised over this set of possibilities.

On naturalism, the existence and fundamental properties of the universe are brute facts. There is no reason why this particular universe exists, or indeed why any universe exists. If you think that this renders our search for probabilities impossible, then naturalism is inscrutable. There is literally nothing at all that could possibly be said in support of it. The success of science would not lend it one iota of support.

But this problem is familiar to the physicist. When faced with a seemingly infinite set of possible theories or observations or whatever, we get modelling! Focus on a subset of the problem that a) we can handle, and b) seems to represent the larger problem in an unbiased way. We consider a finite number of theories, or a suitably broad model of our instrument and what it may observe.

So, here is a proposal. Rather than consider every possible way that a physical universe could be, we will restrict our attention to a well-characterised, as-best-we-know unbiased set of possibilities, generated in two ways.

1. We vary the values of the constants in the equations of the laws of nature. This has several advantages. Since the equation is familiar, we are more likely to be able to predict what would happen in a universe with the different value of the constant. Further, when testing physical theories, we need to posit a prior probability distribution of the free parameters of the theory. The constants are treated as "nuisance parameters" for theory testing.

2. Similarly, we can vary the initial conditions of the same equations. Roughly, physical theories tell us how the world changes, but not how it is. For example, Newtonian gravity tells us how masses pull on each other; to describe the Solar System, we need to know where the planets are and how they're moving. We specify this with initial conditions (or, more generally, boundary conditions). The set of possible initial conditions maps precisely onto the set of physical scenarios that the theory dictates is possible.

This subset of possibilities has much to recommend itself to us. It stays very close to our best scientific theories of the universe. It allows us to use familiar results and methods from our physics. Furthermore, looking ahead, we have not biased our search against finding life. In fact, if anything, we have biased our search in favour of finding life-permitting universes. The reason is that we have started our search at a known example (our universe) of what we are looking for (life-permitting universes).

Additionally, this set of possibilities comes with probability measures derived from theory testing in physics. These probabilities are a measure of our state of knowledge, not a statement about any supposed chancy-ness in reality.

What about the possibilities on theism? God can create anything. Perhaps surprisingly, this is similar to the situation given naturalism, where because the universe exists for no reason at all, anything could exist. I propose, then, that we consider the same range of possible universes. The difference will be the probabilities placed on the set of possibilities. Here, we ignore the question of why a physical universe exists at all on naturalism and theism. (That's the contingency argument.)

Testing Naturalism and Theism

In all its detail, the Bayesian approach says we need to calculate the probability of everything we know about this universe (the likelihood). The key, in practice, is to focus on decisive pieces of data. When we compare Newtonian gravity with Einstein's general relativity, we don't bother comparing the data about the orbit of Neptune, where the theories make extremely similar predictions. We certainly don't calculate the probability of every crater on the moon. Feel free to try, but the likelihood is going to be the same, so there is no net effect on the probability of the theory. Instead, we consider the orbit of Mercury, where there is a measurable difference between the predictions.

The fine-tuning argument invites us to consider this particular fact about our universe: it supports the complexity required by life forms. Over the last 40 years, physicists have found that the necessary conditions for life are extremely rare among the set of possible universes we are considering.

Here's just one example. Einstein's cosmological constant causes the expansion of the universe to accelerate. Other forms of energy can have the same effect, so we speak of their combined effect as an effective cosmological constant (ECC). Within our equations, ECC can range between -mPl and +mPl, where mPl is the Planck mass. (In this case, it is not just a possible value: there are reasons to expect the Planck scale to be the ECC's natural scale. But put that aside for now.) Within this range, we can quite safely identify a subset outside of which life will not form. If ECC < -10^-90 mPl, then the universe recollapses into a big crunch in a matter of seconds. If ECC > 10^-90 mPl, then absolutely no structure forms in the universe at all. The expansion is too rapid, and soon every proton in the universe is isolated in empty space.

So, in Bayesian fashion, we represent our state of knowledge with a probability. If all we knew was that a) naturalism is true and b) that the equations of physics describe the universe, we would be ignorant of the value of the ECC. In probability jargon, naturalism is a non-informative theory. This isn't pejorative: some theories don't make precise predictions. If all we knew was that robbers guessed the code, we'd have no reason to expect any particular code to be entered rather than another. Similarly, naturalism gives us no reason to expect any possible physical universe rather than another. So, with respect to the ECC, we model our ignorance with a uniform probability distribution. Then, the probability of a life-permitting universe is no greater than one in 10^90. Other examples of fine-tuning reduce this number even further.
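The arithmetic behind that bound can be done exactly (working in units of mPl, and taking the stated window as given):

```python
# Sketch of the likelihood figure above: under a uniform distribution over
# [-m_Pl, +m_Pl], the life-permitting window (-10^-90, +10^-90) m_Pl gets a
# probability equal to its width divided by the total width.
from fractions import Fraction

total_width = Fraction(2)          # from -1 to +1, in units of m_Pl
life_width = Fraction(2, 10**90)   # from -10^-90 to +10^-90

print(life_width / total_width == Fraction(1, 10**90))  # True
```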

What about on theism? Here, we must ask how likely it is that God would want to create a universe that supports embodied moral agents. Here, God's essential goodness is relevant. A good God might want to create a good state of affairs, and the existence of beings who can live, learn, labour and love is a good state of affairs.

A cynic might complain that this "divine mind-reading" is entirely speculative. But observe two things. Firstly, such "speculation" pre-dates the discovery of the fine-tuning of our universe by a couple of millennia, so one can hardly accuse the theist of ad-hocery.

Secondly, consider the crime scene again. There are a seeming infinity of means, motives and methods that come under the umbrella of the "inside job" hypothesis. Must we read the minds of the robbers? Actually, given the extremely small probability of the data on the "guessed" hypothesis, we need only argue as follows. Only if the probability of an inside job is comparable to one in a trillion (the likelihood on the "guessed" hypothesis) will such considerations make any difference to the investigation. Similarly, the force of the fine-tuning argument is only turned back by a "divine mind-reading" skepticism if the probability of God creating a life-permitting universe is comparable to one in 10^90. That, I contend, is not much of a burden on the theist.

Thus, even if the probability of a life-permitting universe given theism is one in a trillion, the fine-tuning of the universe for life still confirms theism over naturalism by some enormous number like 10^78. That's how we test theories, and naturalism takes a rather large hit.
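The headline ratio can be checked exactly with the two likelihoods just stated (both figures are the post's own, not derived here):

```python
# The Bayes-factor arithmetic above: even granting theism only a
# one-in-a-trillion likelihood for a life-permitting universe, dividing by
# the one-in-10^90 likelihood on naturalism still gives 10^78.
from fractions import Fraction

p_L_given_theism = Fraction(1, 10**12)
p_L_given_naturalism = Fraction(1, 10**90)

print(p_L_given_theism / p_L_given_naturalism == 10**78)  # True
```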

Conclusion

The intuition behind this argument is that the naturalist, faced with all the ways the physical universe could have been, can only shrug their shoulders. For naturalism, the question "why this universe?" is in principle unanswerable. The theist can see a rationale for the way the universe is, one that is neither ad hoc nor jerry-rigged. Note that these are not really rival explanations. Theism offers an explanation for the ultimate facts of physics where naturalism offers none.

There are a large number of possible moves for the naturalist to make at this point, so I'll discuss those as they arise. If the details of fine-tuning are in question, I've got a whole book of more examples: www.cambridge.org/fortunate. If you think that fine-tuning is a theist invention, then you should know that my co-author is an atheist.


cnearing
Re: DEBATE: Fine-tuning Constitutes Evidence for Theism over Naturalism
« Reply #2 on: November 18, 2016, 09:53:41 pm »
Well, first I would like to thank Dr. Barnes for agreeing to have this discussion with me, and for offering such a well-thought-out overview of what fine tuning is.  In a subject dominated by misinformation, that certainly was a breath of fresh air, and I am glad that I finally took the time to seek him out as an interlocutor. 

Fine Tuning and Bayesian Inference

I approve of Barnes’s move to apply the logic of Bayes’s theorem to the question at hand, and his overview of how Bayes functions is essentially correct.  However, there is one place where his illustration needs to be filled in a little more clearly.  Barnes uses the example of two detectives hypothesizing about how a robber might have bypassed a security system.  One detective suggests that perhaps the robber simply guessed the code.  In response, Barnes writes:

Quote
“Here is one way to see the problem with Drebin’s theory. To explain the data, we need to fine-tune the theory by assuming that the code that the robbers guessed was the correct code.”

While this is not inaccurate, it does elide an important detail: specifically, that there are two different hypotheses in question, here.  The first hypothesis is that the robber simply took one guess at the code.  The second hypothesis is that the robber took one guess at the code and guessed the correct one.

Spot the difference?  The first hypothesis specifies only a guess.  The second hypothesis specifies the guess and that the guess was correct.  The first hypothesis is not finely tuned: rather, the problem with the first hypothesis is that the likelihood of the outcome (getting the right code) is very low, given this hypothesis.  The second hypothesis is finely tuned, but note that the likelihood of the outcome, given this second hypothesis, is actually one, since the outcome is actually specified in the hypothesis itself. 

The problem with the first hypothesis is that it offers a low likelihood for the observation in question.  The second hypothesis does not suffer from this problem.  Indeed, it offers a very high likelihood for the observation in question.  The problem with the second hypothesis is that, due to its very high degree of specification, we are inclined (for good reason) to assign it a very low prior.
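The distinction can be made concrete with a toy simulation (a 4-digit code is assumed here purely for tractability): H1, a single random guess, gives the observed success a likelihood of 1/10^4, while H2 builds the success into the hypothesis and so gives it likelihood 1.

```python
# Toy check of the H1 likelihood: a single uniform guess at a 4-digit code
# succeeds at a rate close to 1/10^4. (H2 needs no simulation: conditioning
# on "the guess was correct" makes the observation certain.)
import random

random.seed(0)
CODE_SPACE = 10**4
TRIALS = 200_000
hits = sum(random.randrange(CODE_SPACE) == 1234 for _ in range(TRIALS))

print(abs(hits / TRIALS - 1 / CODE_SPACE) < 1e-3)  # True: rate near 1/10^4
```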

Setting up the Test

Though the approach Barnes takes in his set-up is pretty reasonable, he makes two errors in this section which need to be addressed. First, Barnes writes:

Quote
“On naturalism, the existence and fundamental properties of the universe are brute facts.”

This is not actually true.  Though naturalism is consistent with the proposition that the existence and fundamental properties of the universe are brute facts, it neither stipulates nor entails this as a fact.  Indeed, naturalism is consistent with a wide range of theories on which the universe is the product of some other system: a previous universe, a causal multiverse, or a universe “generator,” just to toss off a few.  This entire range of possibilities needs to be accounted for if one is to evaluate, as Barnes attempts to do, the likelihood of fine tuning (or anything else) on naturalism.  This misrepresentation alone constitutes a fatal flaw in Barnes’s argument.

Somewhat more concerning though is this: if we look specifically at the hypothesis that the existence and fundamental properties of the universe are brute facts (call this hypothesis B) the likelihood that the universe will exist and have those properties, given B, is actually 1.  This hypothesis is actually very similar to the hypothesis in the illustration above, wherein we specify that the robber has guessed the correct code.  It is a very finely tuned hypothesis, and so we may reasonably consider its prior to be very low, but if this were in fact what naturalism states, then the argument could be ended right here:

P(fine-tuning|B) = 1

P(fine-tuning|theism) < 1

Ergo, fine tuning is actually evidence for B, the naturalistic “brute fact” hypothesis, over theism.
It is only because naturalism actually includes all of these other hypotheses that we have a discussion on our hands at all.

Testing the hypotheses

The first thing to note, here, is that Barnes makes a sudden and important jump when he writes:

Quote
”The fine-tuning argument invites us to consider this particular fact about our universe: it supports the complexity required by life forms.”

This is critical, because as his actual argument unfolds, it becomes clear that the observation that Barnes is offering in support of theism is not actually “fine tuning” at all.  Instead, it is the observation above: the fact that the universe supports life.  The role fine tuning plays in his argument is actually found where he attempts to evaluate the likelihood of a life-supporting-universe (LSU) given naturalism.  Set aside for the moment that he has already represented naturalism as a hypothesis on which this likelihood must be 1, and look at what he goes on to write, here:

Quote
“Over the last 40 years, physicists have found that the necessary conditions for life are extremely rare among the set of possible universes we are considering.”

Only this is actually a dramatic understatement.  Recall the space of universes he wants us to consider, established by:


Quote
“1. We vary the values of the constants in the equations of the laws of nature.“
and
Quote
“2. Similarly, we can vary the initial conditions of the same equations.”

For the universe to be life permitting, each of those terms he suggests we vary must fall within a particular finite window, but the range of values each of those terms can take on is not finite.  Each of those ranges is, in fact, infinite.  The conditions we need for life are not just extremely rare, but, in fact, infinitely rare, against the space he has picked out, here.  When Barnes goes on to suggest a uniform distribution over this space, writing:

 
Quote
“So, with respect to the ECC, we model our ignorance with a uniform probability distribution.”

he commits yet another fatal misstep.  We can’t model our ignorance in this case with a uniform probability distribution.  Uniform distributions can only cover finite spaces (they are literally defined by their boundaries) and the space of universes he wants us to consider is infinite.  His suggestion here is mathematically incoherent.  We could correct this problem in theory by simply putting boundaries on the range of values that each of those terms could take on, but these boundaries would be inherently arbitrary, and the likelihood of interest, the probability of picking a life-supporting universe out of this uniformly distributed space of possible universes, would vary proportionally with those arbitrary choices.  We would, in effect, be picking that critical likelihood arbitrarily.
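The normalisation point can be seen with one line of arithmetic (the numbers below are illustrative only): a constant density c over [0, L] has total mass c*L, so no fixed c can integrate to 1 as L grows without bound.

```python
# Why a uniform density needs a finite support: the total mass of a constant
# density c over [0, L] is c * L, which diverges as L does; only a finite L
# lets us pick c = 1/L and normalise.
def total_mass(c, L):
    return c * L  # integral of the constant density c over [0, L]

c = 1e-6
for L in (1e6, 1e9, 1e12):
    print(total_mass(c, L))  # grows without bound as L grows
```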

If Barnes had produced a coherent hypothesis to evaluate by specifying those boundaries, what he would have been doing, in effect, is hypothesizing a universe-generating process whose universe-generating behavior is described by the resulting distribution.  But, of course, it would only be one of infinitely many similar hypotheses which we could generate simply by varying those arbitrary boundaries.  Not only would Barnes have to specify boundaries for his space, he would have to develop a method for evaluating the aggregate likelihood across an infinite set of hypotheses differentiated only by different arbitrary boundary choices.

And that’s just for the uniform distributions.  There are any number of other distributions we could use to describe possible naturalistic universe-generating processes which would have to be accounted for as well.

Barnes’s attempt to “model our ignorance with a uniform probability distribution,” on top of relying on an incoherent model, is insufficient for the task at hand, and thus we can dismiss his argument without even really looking at his (very brief) discussion of the likelihood of LSU on Theism.

A better approach

If Barnes’s attempt to evaluate the relevant likelihoods is insufficient, what would be a better approach?  Consider this, instead:

Let us agree with Barnes that it is meaningful to talk about the probability of LSU given Theism: P(LSU|T), even though we (like Barnes) are not going to try to actually pin a numerical value to this term.

There is a vast—infinite, really—range of naturalistic hypotheses, and the likelihoods they offer for LSU vary between 0 and 1 (recall B, the naturalistic “brute fact” hypothesis: P(LSU|B) = 1).  This means that we can pick out a subset of naturalistic hypotheses, N’, defined as the set of all naturalistic hypothesis, n, such that the probability of LSU given n is greater than or equal to the probability of LSU given Theism.

N’ := {n | P(LSU|n) >= P(LSU|T)}

In addition, let’s call the set of all remaining naturalistic hypotheses N’’.

N’’ := {n | P(LSU|n) < P(LSU|T)}

Now, trivially, LSU is evidence for N’ over T.  Just by the way these sets are defined, P(LSU|N’) > P(LSU|T).  We even know that this relationship is indeed a “strictly greater than” relationship rather than a “greater than or equal to” relationship, because we know that N’ contains at least one hypothesis which offers a likelihood for LSU which is greater than P(LSU|T).  Specifically, B offers a likelihood of 1, while P(LSU|T) must be less than one, since it is possible for God to create a universe which does not sustain life.

The question of how to aggregate P(LSU|N’) and P(LSU|N’’) into a single P(LSU|N) remains open, and this is essentially what Barnes has to address if he wants to salvage his argument, but we can see quite clearly that we can select a space of naturalistic hypotheses which actually is supported over theism by the evidence in question.
 
This is important because the comparison Barnes makes in his argument is actually a deeply problematic one, from a theoretical perspective, and an unfair one, from the perspective of the debate.  Though we had initially agreed to discuss whether fine tuning constitutes evidence for theism over naturalism, what Barnes actually attempts to argue is that a life supporting universe is evidence for theism over naturalism.  Though he correctly points out that,

Quote
“…such "speculation" pre-dates the discovery of the fine-tuning of our universe by a couple of millennia, so one can hardly accuse the theist of ad-hocery,”

Recall that fine tuning is not actually the evidence he calls upon in his argument.

When it comes to fine tuning, theism is precisely as uninformative as naturalism.  This is why Barnes instead uses a different observation altogether as the evidence in his argument: LSU.  Theism is informative relative to the fact that the universe permits life, but, of course, Theism certainly does not predate this “discovery” at all.  What Barnes has done is compare a perfectly post-hoc hypothesis to a vast space of hypotheses, and this is simply not an appropriate comparison.  If we allow the naturalistic space a similar post-hoc winnowing, then LSU actually becomes evidence for the refined naturalistic space over theism. 

Conclusion

The argument presented by Barnes suffers from several fatal flaws: the misrepresentation of the space of naturalistic hypotheses, the incorrect evaluation of the likelihood of a life-supporting universe on the naturalistic “brute fact” hypothesis, and the use of an incoherent “distribution” which, even were it coherent, would still be a woefully insufficient substitute for the entire space of naturalistic hypotheses.  Most importantly, though, he has replaced “fine tuning” with the observation that the universe supports life as the evidence on which he rests his argument, and in doing so he has left his favored hypothesis, theism, suffering quite badly from the past evidence problem: if we consider a similarly post-hoc space of naturalistic hypotheses, we see that the fact that our universe supports life completely fails to constitute evidence for theism over naturalism.
« Last Edit: November 20, 2016, 08:46:56 pm by cnearing »

LukeB
Re: DEBATE: Fine-tuning Constitutes Evidence for Theism over Naturalism
« Reply #3 on: November 23, 2016, 07:31:09 pm »
Inevitably, we'll need to sort through our misunderstandings of each other. That's why debates are iterative. Stick with us.

The Universe

When I say that “On naturalism, the existence and fundamental properties of the universe are brute facts", by "universe" I mean the entirety of physical reality. It would include any physical multiverse or universe generator. We'll see whether this is a minor confusion of nomenclature or a "fatal" "misrepresentation".

Watch the Posterior, Matron

Consider Drebin's theory: the robber guessed the code. We are told that there are two different hypotheses:
  • H1) that the robber took one guess at the code
  • H2) that the robber took one guess at the code, and entered the correct code.

As far as Bayes theorem is concerned, these are the same hypothesis. Let X be a hypothesis and A and B be two things that we know. Now, consider the composite hypothesis Y = XA ("X and A are both true"). Then,

p(Y|AB) = p(XA|AB) = p(A|XAB) p(X|AB) = p(X|AB) (since p(A|XAB) = 1).

Their posterior probabilities are identical. Always. Even though the hypothesis Y has a perfect likelihood: p(A|YB) = 1.

So there aren't really two hypotheses. Look at the addition to the second hypothesis: "they entered the correct code." That's data (D). We know that that's true. Since H2 = D H1, p(H2|DB) = p(H1|DB). The fact that H2 "offers a very high likelihood for the observation in question" is irrelevant. The real likelihood p(D | H1 B) remains as much a part of the problem as before.
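The identity can be checked numerically on any toy joint distribution (the probabilities below are made up, and B is left implicit):

```python
# p(X and A | A) equals p(X | A) for any joint distribution, mirroring the
# derivation above: conditioning on A makes the extra conjunct redundant.
joint = {  # made-up joint probabilities over two binary propositions (X, A)
    (True, True): 0.2, (True, False): 0.3,
    (False, True): 0.1, (False, False): 0.4,
}

p_A = joint[(True, True)] + joint[(False, True)]
p_X_given_A = joint[(True, True)] / p_A   # p(X | A)
p_XA_given_A = joint[(True, True)] / p_A  # p(X and A | A): same numerator

print(p_X_given_A == p_XA_given_A)  # True
```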

This is not merely being inclined to give H2 a low prior. This is not just nit-picking. If two theories X and Y = XA are treated differently, watch out!

Brutishness

Considering my definition of naturalism, we are told to consider "the hypothesis that the existence and fundamental properties of the universe are brute facts (B) ... This hypothesis is actually very similar to the hypothesis in the illustration above", which I have called H2.

Watch out! As shown above, H1 and H2 are probabilistically identical, so any attempt to treat them differently must be in error.

We are told that, on this hypothesis, "the likelihood that the universe will exist and have those properties, given B, is actually 1." If it is supposed that B includes all known properties of the universe, then this is correct. But this is exactly the pointless move noted above: just superglue the data into your theory and rejoice at the lovely likelihood. Nothing important follows from this triviality. The probability of naturalism simpliciter, stripped of any smuggled data about which brute fact universe exists, is untouched.

The Evidence of Fine-Tuning

Am I being "problematic" and "unfair" because "Though we had initially agreed to discuss whether fine tuning constitutes evidence for theism over naturalism, what Barnes actually attempts to argue is that a life supporting universe is evidence for theism over naturalism"?

On the contrary, we don't always calculate the likelihood of the evidence we discover. The important thing is that it goes into the posterior as given, not where it appears in Bayes theorem.

Suppose our detectives initially see footage of the robber entering the bank vault after using the keypad once. They don't know if the vault was locked, or if the keypad worked, or how many digits were required. What is the likelihood of the robber entering the vault, given that the code was guessed? Perhaps not too small: maybe only 3 digits are needed, maybe the keypad was broken, maybe the vault wasn't locked.

But later, more evidence: the lock was operational and requires a 12 digit code. Obviously, this is evidence against the "guessed" theory. But we don't "update" the posterior by calculating the likelihood of the new evidence. That's a dead end. Instead, we recalculate the posterior probability, using this new information as background information. Given 12 required digits, the likelihood of a correct guess is 1 in a trillion. This is perfectly legitimate: there is nothing in Bayes theorem that requires us to apply it chronologically. It's an identity.

Similarly, fine-tuning is a discovery in theoretical physics. It goes into the posterior, but we don't calculate the likelihood of fine-tuning; we calculate the likelihood of data. Fine-tuning is evidence for theism because it shows us that something we already knew - this universe permits life - is much more unlikely on naturalism than we previously might have thought.

Infinity

We are told that the range of the constants is infinite, and that any boundary would be "inherently arbitrary". Both claims are false.

Firstly, the objection proves too much. Consider the likelihood of some data D given some theory T with free parameter x (with background information B). We must integrate over the free parameter using a continuous version of the law of total probability. This requires a prior probability distribution of the parameter x, p(x|TB),

p(D|TB) = integral p(D|xTB) p(x|TB) dx

We can apply this equation to D = L, the data that our universe permits life. Now, the objection is trying to argue that an infinite range for x makes p(x|TB) undefined, which makes p(L|TB) undefined. But, for the same reason, p(D|TB) is undefined for any D. We can't calculate the probability of a life-permitting universe because we can't calculate the likelihood of anything. The objection would sink fine-tuning by sinking all the associated theories of physics.

Secondly, to measure the value of the constant, we calculate its posterior probability distribution using a continuous version of Bayes theorem:

p(x|DTB) = p(D|xTB) p(x|TB)  /  integral p(D|xTB) p(x|TB) dx

There's p(x|TB) again. If infinities scupper that prior, then we cannot infer the values of these constants from experiments. So, both theoretical and experimental physics are screwed over by this objection.
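Both formulas can be sketched with a grid approximation (the model and all numbers are illustrative: a flat prior over a finite range and a Gaussian likelihood peaked at an assumed measured value of 0.3):

```python
# Grid approximation of the two formulas above: the evidence p(D|TB) is the
# sum (integral) of likelihood * prior, and the posterior p(x|DTB) is that
# same product renormalised by the evidence.
import math

xs = [i / 1000 for i in range(-1000, 1001)]   # grid over x in [-1, 1]
prior = [1 / len(xs)] * len(xs)               # flat prior, finite support
like = [math.exp(-0.5 * ((x - 0.3) / 0.1) ** 2) for x in xs]  # toy data term

unnorm = [l * p for l, p in zip(like, prior)]
evidence = sum(unnorm)                        # approximates p(D|TB)
post = [u / evidence for u in unnorm]         # approximates p(x|DTB)

peak = xs[max(range(len(xs)), key=post.__getitem__)]
print(abs(peak - 0.3) < 1e-9)  # posterior peaks at the toy measured value
```

The finite grid plays the role of the theory-supplied bounds discussed below: without them, neither the prior nor the normalising integral is defined.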

Thirdly, it undermines many arguments for naturalism. For example, suppose someone argues that the non-answering of prayer is evidence of God's absence. How likely are these observations, given naturalism? There are naturalistic universes in which it seems like a divine being is answering prayer - by coincidence, everyone who prays for healing recovers. The naturalist needs to argue that such universes, while possible, are unlikely on naturalism. That is, if naturalism were true, we'd expect prayers to seem to go unanswered.

But if an infinite range of possibilities scuppers all attempts to argue what is likely or not on some theory, then naturalism predicts nothing of the sort. As I said in my opening, without some way of determining what facts are likely to be true on naturalism, nothing at all could possibly be said in support of it.

Finally, despite all this, physics goes on. This is not a unique problem for fine-tuning but a familiar exercise in probability. Remember my opening: faced with a problem we can't handle, focus on a subset of the problem that a) we can handle, and b) seems to represent the larger problem in an unbiased way.

I provided a finite range: "ECC can range between -mPl and +mPl, where mPl is the Planck mass." Look closely at the prior we need: p(x|TB). B, in my formulation, does not tell us about the actual universe, and so is no help. T is the theory in which x lives. And so, to test any theory with free parameters by calculating the likelihood of any data, the theory needs to justify a prior over its free parameters.

For the cosmological constant, our theories (QFT + GR) dictate the Planckian upper and lower bounds. The Planck scale is where quantum gravity effects cannot be ignored. Thus, not having a theory of quantum gravity, we cannot extend the parameter outside this range in our theory. The theory limits the range and so justifies the prior. This is not "inherently arbitrary" but bog standard.
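Once the Planckian bounds are in place, the calculation is straightforward arithmetic. In the sketch below, the prior range [-1, 1] is the Planckian range in units of the Planck mass, and the life-permitting window |ECC| < 10^-90 is taken from the one-in-10^90 odds quoted in this debate; both are used purely for illustration.

```python
# ECC estimate reduced to arithmetic: a uniform prior over [-m_Pl, +m_Pl]
# (i.e. [-1, 1] in Planck units) and a life-permitting window |ECC| < 1e-90,
# matching the order of magnitude quoted in the debate.
prior_width = 1.0 - (-1.0)    # width of the theory-justified prior range
life_width = 2 * 1e-90        # width of the interval (-1e-90, +1e-90)

p_life = life_width / prior_width
print(p_life)                 # 1e-90
```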

A Hypothetical Generator

We are told that arbitrarily providing boundaries would be tantamount to "hypothesizing a universe-generating process". This confuses credences and chances. If I say "the credence of heads is 1/2", I mean that my degree of belief that the outcome will be heads is 1/2, on a scale from 0 to 1. If I say "the chance of heads is 1/2" I mean that the coin flipping apparatus in question has the property of tending to produce heads on 1/2 of all flips.

I'm dealing in credences. I am not postulating chances.

Again, if this argument were successful, no physical theory with free parameters could ever be tested (Bayesian-wise). To do that, we need a prior over the free parameter. But supposedly, that requires a hypothesised universe generator. But there are an infinite number of possible generators, so we'll need priors for those as well. Which requires yet more generators. The case is hopeless, so we can't test anything.

You may have noticed an interesting trend: to avoid fine-tuning, the naturalist is quite happy to throw physics under the bus.

Naturalistic Ranges

Supposedly, because naturalism includes a range of hypotheses, they must all be catalogued and aggregated individually: "The question of how to aggregate P(LSU|N’) and P(LSU|N’’) into a single P(LSU|N) remains open".

Not at all. I am under no obligation to partition any hypothesis. If I don't need to use the law of total probability to calculate a particular probability, then I won't.

Yet again, this strategy cannot be consistently followed in any realistic situation. There are an infinite number of "guessed the code" hypotheses, whose likelihoods vary between zero and one. Must I subdivide? Do I consider every single possible detail - hair, clothing, temperature, wind speed, ...? Or should I do what any Bayesian does in this situation, and infer the small likelihood from the proposition "they guessed the code" alone?

I've given an argument for p(L|NB). I don't have to aggregate anything. Inventing a partition and then claiming that I am really comparing "a ... hypothesis to a vast space of hypotheses, and this is simply not an appropriate comparison" is an all-purpose probability avoider. I could pull the same trick for any hypothesis to save it from closer examination: just keep demanding more details, dividing the hypothesis space, and insisting on a justification of the infinite aggregation. Once again, under the bus.

4

cnearing

  • ***
  • 2677 Posts
    • View Profile
Re: DEBATE: Fine-tuning Constitutes Evidence for Theism over Naturalism
« Reply #4 on: December 02, 2016, 10:25:39 pm »
In the interest of avoiding misunderstandings, I want to take a moment and try to extract and carefully lay out the thing that Barnes has been calling naturalism throughout his argument.  I'm going to call it:

Luke Barnes's "Naturalism" - (LBN)

In his most recent post, Barnes wrote:

Quote
When I say that “On naturalism, the existence and fundamental properties of the universe are brute facts", by "universe" I mean the entirety of physical reality. It would include any physical multiverse or universe generator.

This is good to know.  It's not the definition I typically use, but Barnes is welcome to choose the terms for his own argument.  Recall, however, that Barnes also wrote, in his opening,

Quote
On naturalism, the existence and fundamental properties of the universe are brute facts.

Taking these statements together, then, we can see that LBN is the hypothesis that the existence and fundamental properties of all of physical reality are brute facts. 

Recall, too, that Barnes lays out for us what he considers to be the space of possible "universes" on LBN: those described by varying "the values of the constants in the equations of the laws of nature" and "the initial conditions of the same equations."

In total, then, LBN is the hypothesis that all of physical reality is described, at a fundamental level, by the known laws of nature, the values of the constants in the equation forms of those laws, and that all of these facts are brute facts.  Further, Barnes does not want to "smuggle in" any data that we have about these values, so he is going to "represent his ignorance" by further describing the range of possible values for each of these brute facts with a uniform distribution.

Barnes then goes on to point to the observation that the universe permits life, and notes that, on LBN, it is extremely unlikely that the universe would support life (forget for a moment that this calculation is actually impossible given his choice of distribution; we'll come back to that later) and that, therefore, LSU is evidence against LBN.  In fact, he says, it is such strong evidence against LBN that it is evidence for theism over LBN.

I completely agree.

LBN is a rubbish hypothesis.  I know of no-one who takes it seriously.  The problem, of course, is that, despite Barnes's unfortunate choice of names, it is not naturalism.

We can prove this fairly easily: recall that Barnes suggests that, on LBN, ECC is a fundamental descriptor of all physical reality.  If we were to find, then, that ECC is derivable from something more fundamental, we would have to reject LBN.  Take, for instance, this paper, https://cds.cern.ch/record/485959/files/0102033.pdf, in which Moshe Carmeli and Tanya Kuzmenko derive the value of ECC from what they propose is a more fundamental (much less finely tuned) model.  If we accepted this conclusion, we would have to reject LBN, but naturalism would remain untouched.  In fact, this proposal by Carmeli and Kuzmenko is perfectly consistent with naturalism, despite being contradictory to LBN. 

Naturalism is, as I pointed out in my opening, a vast space of hypotheses, including many in which the observable universe (which is what these equations and their constants and their boundary conditions actually describe) is not the entirety of physical reality, or in which our current physical theories and cosmological models are not fundamental (as Carmeli and Kuzmenko and too many other physicists to count suggest).  Barnes hasn't argued against naturalism at all.  He hasn't even attempted to actually evaluate the likelihood of a life-supporting universe on naturalism.  He hasn't attempted to evaluate the likelihood of a life-supporting universe on theism.  Even if we overlook his math error, all he has actually done is evaluate the likelihood of a life-supporting universe on one particular naturalistic hypothesis that no-one I know, in the physical sciences or outside of it, takes seriously anyway.

So, yes.  Let's reject LBN.  One naturalistic hypothesis down, only infinitely many more to go, and no actual argument for the claim that fine tuning is evidence for theism over naturalism in sight.

Since that didn't actually require very many words, I'm going to take a moment and cover a couple of the tangential points that have come up during the debate so far.

On posterior probabilities and the identity of hypotheses

In his response, Barnes wrote:

Quote
Consider Drebin's theory: the robber guessed the code. We are told that there are two different hypotheses:
H1) that the robber took one guess at the code
H2) that the robber took one guess at the code, and entered the correct code.

As far as Bayes theorem is concerned, these are the same hypothesis.

This is false.  His proof is correct: the posteriors (the probabilities of these hypotheses once conditioned on the observation that the robber entered the right code) are equal, but that doesn't entail that these hypotheses are the same.  They have different priors and offer different likelihoods for many different potential observations.  This is more than enough to distinguish between them and note, correctly, that they are indeed distinct hypotheses. 
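This distinction can be made concrete with a toy calculation. The numbers are illustrative assumptions, not anything from the debate: a 4-digit code, so one guess succeeds with probability 1/10,000, and an assumed prior of 0.01 that the robber took exactly one guess.

```python
# H1: the robber took one guess at the code.
# H2: the robber took one guess at the code AND entered the correct code.
# Toy numbers: 4-digit code (success chance 1/10_000 per guess) and an
# assumed prior of 0.01 that the robber took exactly one guess.
p_guess = 1 / 10_000

prior_H1 = 0.01                 # P(H1): assumed for illustration
prior_H2 = prior_H1 * p_guess   # P(H2): H2 additionally requires success

lik_H1 = p_guess                # P(correct code entered | H1)
lik_H2 = 1.0                    # P(correct code entered | H2): entailed by H2

# The joint probabilities (and hence the normalised posteriors) agree...
print(prior_H1 * lik_H1 == prior_H2 * lik_H2)   # True
# ...but prior and likelihood each differ, so the hypotheses are distinct.
print(prior_H1 == prior_H2, lik_H1 == lik_H2)   # False False
```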

On evidence and fine-tuning

Barnes took a lot of time here to write pretty much exactly what I wrote in my own opening post.  He has chosen to call fine tuning "evidence" even though we are not actually updating anything on it, and I would disagree with that choice, but it is a trivial semantic distinction.  The point I actually made, that Theism is entirely ad-hoc relative to the observation on which he has chosen to update in his argument, remains untouched and true.

On infinitely, likelihood, and science

In response to my charge that he had picked an incoherent reference distribution for his hypothesis, Barnes offered three responses. 

Quote
Firstly, the objection proves too much....Now, the objection is trying to argue that an infinite range for a makes p(x|TB) undefined, which makes p(L|TB) undefined. But, for the same reason, p(D|TB) is undefined for any D. We can't calculate the probability of a life-permitting universe because we can't calculate the likelihood of anything. The objection would sink fine-tuning by sinking all the associated theories of physics.

This is false.  I did not try to argue that an infinite range for "a" makes p(x|TB) undefined. 

What I argued is that the uniform distribution Barnes has picked as reference cannot be placed over an unbounded space.  I never suggested that infinities "scupper priors," as he later writes.  To understand what this means, let's explore Barnes's formula in a little more detail.

p(D|xTB) represents the likelihood of the data given our background information and the theory--described by some reference distribution--with a particular value "slotted in" for the parameter (x) that reference distribution requires. 

In the case of Barnes's uniform distribution, we actually need two parameters: "a," a lower bound on the range of possible values that our outcome (the value of whatever constant we're looking at, for instance) can take, and "b," an upper bound on the same range.  For Barnes's argument, this term would look like:

P(D|abTB)

p(x|TB) in this equation represents a prior distribution over possible values of the unknown parameter, x.  This sort of prior distribution is used often in the process of parameter estimation.  In the case of Barnes's argument we would need two of these: one for "a" and one for "b," and they'd look like this:

p(a|T'B) p(b|T"B)

I've marked the two "T"s as T' and T" because these Ts do not necessarily represent the same hypothesis as T in the first term, and they don't necessarily represent the same hypothesis as each other, either.

However, note that both of these terms are also likelihoods.  In both cases, T refers to another distribution, and just like our first uniform distribution, this distribution is going to need its own parameters.  They're called hyperparameters, and odds are we're going to need two hyperparameters for T' and two more for T".  This means that these two terms would actually look like:

p(a|cdT'B) p(b|efT"B)

where c, d, e, and f are four arbitrarily selected hyperparameters.  All Barnes has done, here, is propose that we might replace the two parameters his uniform distribution requires with probably about four arbitrary hyperparameters.  It should be obvious why this doesn't actually help. 

My objection was merely that if he wants to use a uniform distribution for his calculation, it needs to have upper and lower bounds.  That's just how uniform distributions work.  That's something that everyone learns in introductory statistics.  You literally cannot use them to calculate anything without having those two parameters (or at least two hyperparameters defining distributions over a space of possible values of those parameters, which is no better).  Barnes can't leave these parameters out and have a coherent argument. Barnes can't pick these parameters arbitrarily, or his argument becomes arbitrary.  His first response doesn't actually address this objection at all.
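The point that a uniform density is only defined between finite bounds can be seen in a few lines. The interval [0, 1] and the widening bounds below are arbitrary illustrations:

```python
# A uniform distribution on [a, b] has density 1/(b - a).  As the bounds
# are pushed outward, the probability assigned to any fixed interval
# shrinks toward zero, so there is no proper (normalised) uniform
# distribution over an unbounded range.
def p_interval(lo, hi, a, b):
    """P(lo <= X <= hi) for X uniform on [a, b] (assumes a <= lo <= hi <= b)."""
    return (hi - lo) / (b - a)

for half_width in (10.0, 1e6, 1e12):
    print(p_interval(0.0, 1.0, -half_width, half_width))
# Prints 0.05, then 5e-07, then 5e-13: vanishing as the range grows.
```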

Perhaps more concerningly, Barnes seems to think that my objection would "sink fine-tuning by sinking all the associated theories of physics."  This is not just false, but absurd.  Even the objection he mistook me for offering wouldn't accomplish that.  There are plenty of distributions that can be integrated over an infinite range--the uniform distribution just isn't one of them.  Most physicists seem to know enough to pick distributions such that they can actually perform the integrations they need to perform for their likelihood (or, far more frequently, p-value) calculations.  Barnes just didn't.  The problem is not with the methodology of science (that's another debate entirely) but with Barnes's poor choices, and he has no one to blame for those but himself.  Physics didn't force him to suggest we try to integrate a uniform density function with no upper or lower bounds. 

Finally, Barnes points out that he actually can choose non-arbitrary upper and lower bounds for ECC.  That's fine, but that's just one of the values on which his argument rests--and, as we've seen above, it's a value that we have good reason to believe is actually fixed by a more fundamental theory.  Most if not all of the other values in question do not have possible ranges which are bounded by theory. 

Oh, and, as a note, he's probably right that many arguments for naturalism suffer from a similar problem.  That’s fine.  We should all be willing to call out bad arguments wherever we find them, even if we happen to disagree with their conclusions.  I’m happy to abandon any arguments for naturalism that rely on incoherent reference distributions, but it’s high time that Barnes abandoned this argument for theism. 





P((A => B), A) = P(A => B) + P(A) - 1

5

LukeB

  • **
  • 6 Posts
    • View Profile
Re: DEBATE: Fine-tuning Constitutes Evidence for Theism over Naturalism
« Reply #5 on: December 06, 2016, 05:43:13 am »
Here is a fuller definition of naturalism, taken from Sean Carroll's "The Big Picture":
Quote
1.   There is only one world, the natural world.
2.   The world evolves according to unbroken patterns, the laws of nature.
3.   The only reliable way of learning about the world is by observing it.

It follows that there are no deeper principles from which the fundamental properties of the natural world are derived. Since they are also contingent, they are brute facts.

(One could propose that these principles are necessary. I'm not aware of any naturalist who does this - feel free to enlighten me.)

Note well the difference between naturalism, given by a definition like the one above, and a naturalistic hypothesis, which is (presumably) a complete description of a world in which naturalism is true. There are many naturalistic hypotheses, but I don't have to count each and every one of them in order to think about naturalism.

A Naturalism of my own

We are told that my version of naturalism, "LBN", is a "rubbish hypothesis":
Quote
... recall that Barnes suggests that, on LBN, ECC is a fundamental descriptor of all physical reality ... LBN is the hypothesis that all of physical reality is described, at a fundamental level, by the known laws of nature.

If, dear reader, you think that is what I have suggested, then I sincerely suggest you go back to the start and read my case again. That is exactly the opposite of what I said:
Quote
Rather than consider every possible way that a physical universe could be, we will restrict our attention to a well-characterised, as-best-we-know unbiased set of possibilities.

We don't have the fundamental laws of nature. If this lack of knowledge prevents us from knowing which kinds of universes are likely or unlikely on naturalism, then naturalism is completely untestable. No likelihoods, no posteriors.

There are philosophers who have criticised naturalism on similar grounds. Considering naturalism's cousin, materialism, Keith Ward says: "What is the point of being a materialist when we are not sure exactly what matter is?". Suppose the materialist says "only matter exists", we ask "what is matter?", and the materialist says "I don't know. We don't have a fundamental theory of the physical world". Then materialism looks like a non-starter. We can't think about it, let alone believe it, because we don't know what it means. Similarly, if our lack of knowledge about the fundamental laws of nature prevents us from knowing what physical states of affairs are possible, then - as I said in my opening - "naturalism is inscrutable".

I could just leave it at that. Theism wins because naturalism is incoherent.

Instead, I proposed a way forward, a smaller but tractable piece of the puzzle, a "subset of the problem", a model. I emphatically was not suggesting that "all of physical reality is described, at a fundamental level, by the known laws of nature". No physicist believes that. Go back and read my careful justification of a particular subset and why it "has much to recommend itself to us".

In science, a good way of showing that someone's calculation is flawed is to do a better one. If a better proposal exists for "apply[ing] the logic of Bayes’s theorem" to naturalism, then let's see it. Criticising my approach by appealing to the infinite set of naturalistic hypotheses does not lead us to conclude that naturalism is plausibly true, but that it is untestable, ill-defined and unbelievable.

So, have at it.

Literature Roulette

And now comes one of my favourite moments in debates like these: the appeal to a random technical paper that no one's heard of and that my interlocutor has not understood. I call it literature roulette.

Our current contender comes from 2001. It has about 20 citations, but curiously is not mentioned by any major review of the cosmological constant problem: Carroll (2001), Dyson, Kleban & Susskind (2002), Peebles &  Ratra (2003), Vilenkin (2003), Polchinski (2006), Albrecht et al. (2006), Copeland, Sami & Tsujikawa (2006), Durrer & Maartens (2007), Linde (2007), Padmanabhan (2007), Bousso (2008), Frieman, Turner & Huterer (2008), Martin (2012), Schellekens (2013), and more (references on request). Somehow, all these cosmologists and physicists missed the simple derivation of the ECC from Carmeli and Kuzmenko.

Why the oversight? Because Carmeli's "Cosmological Relativity" (http://adsabs.harvard.edu/abs/2000astro.ph..8352B) is junk. Calling Hubble's law a "cosmological equation of state" and then trying to derive a spacetime metric from it with velocity as a "coordinate", as if this had anything to do with General Relativity, is just nonsense. Scientists don't ignore ideas for no reason.

But it gets worse. We are told that "Carmeli and Kuzmenko derive the value of ECC from what they propose is a more fundamental (much less finely tuned) model". This would be news to Carmeli and Kuzmenko, because what they actually do is infer the value of the "cosmological constant" in their model from the observed Hubble constant. (In their model, Lambda is inferred from tau, tau from h, h from H0 and H0 from "the latest results from HST", the Hubble Space Telescope.) It's not a derivation. It's a measurement.

The point of citing Carmeli and Kuzmenko was that some have proposed that the ECC is not fundamental, but derivable from more fundamental parameters. We needn't have played literature roulette - no physicist would deny it. But I repeat my point above: nothing in my case relied on ECC being a fundamental constant of the ultimate laws of nature.

Theism and ad-hocery

Is theism an ad hoc explanation of the fact that this universe supports life? Theism does not predate our discovery that the universe supports life, but this is not sufficient. "Ad hoc" is not the same as "post hoc". We observed the perihelion shift of Mercury before Einstein proposed General Relativity, but that does not make GR an ad hoc explanation. GR naturally predicts the shift. Similarly, I have argued, theism naturally (though not inevitably) predicts a life permitting universe. Theologians were expounding this idea centuries before anyone suspected that life might be decisive against naturalism. So the charge of ad hocery fails.

The Dilemma

Quote
... he is going to "represent his ignorance" by further describing the range of possible values for each of these brute facts with a uniform distribution.

Once again: if you think that's what I did, then reread from the start. This comment comes before an extended but pointless section about the need for limits on a uniform distribution. I know that, and it shows in my opening. I first argue that,
Quote
Within our equations, ECC can range between -mPl and +mPl, where mPl is the Planck mass.

I then state that,
Quote
with respect to the ECC, we model our ignorance with a uniform probability distribution.

I did not claim that every brute fact of naturalism warrants a uniform distribution. I made that claim specifically about the ECC, and after justifying the appropriate limits.

Are we left to rue "Barnes's poor choices, and he has no one to blame for those but himself"? About my actual argument, we are told:
Quote
Barnes points out that he actually can choose non-arbitrary upper and lower bounds for ECC. That's fine, but that's just one of the values on which his argument rests. ... Most if not all of the other values in question do not have possible ranges which are bounded by theory.

So the argument that I actually made is fine. Let's all pause on that point for a moment.

Now comes the dilemma. There are other constants. Either we can justify the prior distribution for these parameters p(x|TB), or we can't.

A) If we can, then we can extend our tractable subset to include them, and calculate the degree of fine-tuning. As this places additional requirements on life-permitting universes, it can only make them less probable. The set of universes with life-permitting ECC and life-permitting electron mass is not larger than the set of universes with life-permitting ECC. (My life-permitting limits on ECC hold regardless of the value of the electron mass.) Thus, my calculation for ECC is an upper limit on the likelihood. The probability of a life-permitting universe on naturalism is *smaller* than one in 10^90. So by all means, consider more constants. They cannot help.
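The monotonicity claim in (A) is just the fact that an intersection is never larger than either set. Here it is as a toy calculation, using the debate's 10^-90 for the ECC and a made-up fraction for the electron mass:

```python
# Each additional life-permitting requirement intersects the allowed region,
# so the joint probability can only be <= the ECC-only probability.
p_ecc_ok = 1e-90      # fraction of the ECC prior range that permits life
p_mass_ok = 1e-3      # hypothetical fraction for the electron mass (made up)

# Independence is assumed purely for the sketch; no dependence structure
# can push the intersection above either marginal.
p_both = p_ecc_ok * p_mass_ok
print(p_both <= p_ecc_ok)   # True: more constants cannot raise the likelihood
```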

B) If we can't, then these constants do not form part of a tractable subset. We now have two options.
i) We focus on the tractable problem, ignoring these parameters. As explained in my opening, we focus on a subset that we can handle. So we are forced to simply ignore these parameters. The calculation based on the ECC remains our best guide.

ii) We do not confine our attention to a tractable problem, and admit that naturalism cannot tell us what physical universe we would expect to exist. No likelihood, no posterior. Naturalism avoids the challenge of fine-tuning by admitting that it is incoherent, and thus cannot be rationally believed.

So, either naturalism is extraordinarily improbable, or it is incoherent.

6

cnearing

  • ***
  • 2677 Posts
    • View Profile
Re: DEBATE: Fine-tuning Constitutes Evidence for Theism over Naturalism
« Reply #6 on: December 07, 2016, 01:32:13 am »
Well, it is a relief to know that Barnes does understand that we lack a fundamental theory of physics and does understand how uniform distributions work.  I could pick at why his earlier posts clearly indicate the opposite, but it would serve little purpose.  The crux of the debate remains clear.

Barnes insists that we must either conclude that "naturalism is incoherent" (and that, therefore, "theism wins") or we must accept his "tractable" straw man of naturalism as a general representative of naturalism.  Neither is an acceptable option, and his fork is false.

First, let's be perfectly clear: Barnes's straw man hypothesis is rubbish.  It is not equivalent to naturalism and it is not representative of naturalism. 
P(life-supporting-universe | Luke Barnes's Hypothesis) =/= P(life-supporting-universe | naturalism). 
This approach is a non-starter, and a well-known, rudimentary error in probabilistic inference.  While it is true that scientists do usually choose to restrict themselves to hypotheses for which the likelihoods are explicitly calculable (for obvious reasons) they also (if they are conscientious) restrict their conclusions commensurately. 

Barnes has not been conscientious in his argument.  Barnes has chosen to continue to argue for a conclusion that his argument is fundamentally incapable of supporting.  Again, this is not a flaw in the method of science.  This is a flaw in Barnes's choices of claim and hypothesis. 

Barnes's attempt to defend this poor decision amounts to the suggestion that, if we do not allow him to replace naturalism with his straw man, then naturalism is incoherent.  This is false. To see why, let's revisit his justification for this claim:

Quote
We do not confine our attention to a tractable problem, and admit that naturalism cannot tell us what physical universe we would expect to exist. No likelihood, no posterior. Naturalism avoids the challenge of fine-tuning by admitting that it is incoherent, and thus cannot be rationally believed.

First, we need to note that Barnes has once again erred on his theory, here.  A hypothesis does not need to lend itself to explicitly calculable likelihoods in order to "tell us what physical universe we would expect to exist," nor are such explicitly calculable likelihoods necessary in order for a hypothesis to be coherent, or for a hypothesis to be "rationally believed."  Nowhere is this more apparent than in Theism itself.  Theism does not lend itself to explicitly calculable likelihoods.  Theism does not tell us anything about what sort of universe to expect beyond what our background information tells us already.  Does this mean that Theism is incoherent, and that theism cannot be rationally believed?  If we accept Barnes's argument here, yes.  That's exactly what it tells us. 

That's fine with me, frankly.  Our debate is over whether "fine tuning" constitutes evidence for theism over naturalism.  If both are incoherent, unable to be rationally believed, then Barnes's claim is false and, as an added bonus, he has placed the decisive nail in Theism's coffin.  I have no particular attachment to naturalism.  This outcome would be perfectly acceptable.

But, of course, we can all see that it is wrong. 

The fact that theism does not offer us any sort of tractable likelihood calculation (as Barnes admitted in his first post) does not render theism incoherent.  That it offers us no expectations about the universe beyond what we get from our background information anyway does not render us unable to rationally believe it.  And, of course, the same is true for naturalism, whether Barnes likes it or not.

It remains the case that Barnes has no argument on the table at all. 

He has not offered any coherent evaluation of the likelihood of a life-supporting universe on naturalism,
P(LSU|N)

He has not offered any coherent evaluation of the likelihood of a life-supporting universe on theism,
P(LSU|T)

His present argument against the coherency of naturalism is methodologically flawed, and even if it were not, it would be precisely as damning when aimed at theism.


P.S.  I note that Barnes has challenged me to offer a better approach: one that avoids the problems I have pointed out in his.  Unfortunately, he seems to have forgotten that I have already done so, in my very first post.  I encourage him to go back and look at it.  It serves as an example of how we must moderate our claims when dealing with hypotheses (spaces of hypotheses, really) which are not amenable to explicit likelihood calculations.  If he insists on attempting to construct arguments against this sort of space, this is the sort of thing he ought to spend some time looking into. 

P.P.S. Whether Barnes prefers the term ad hoc or post hoc, the consequence of Theism's relationship to Humanity's knowledge that the universe supports life remains the same: Theism tells us nothing beyond what human background knowledge, extending far, far back beyond the inception of Theism itself, has to tell us about what to expect from the universe in terms of its ability to support life.  In short, P(LSU|T, B) = P(LSU | B).  LSU is, fundamentally, not evidence for theism, and this is actually true for all hypotheses which are constructed to explain well-known observations.  It's called, somewhat informally, the "problem of past evidence."  While the capacity of a hypothesis to "explain" some well-known fact might recommend it to us, this sort of post-hoc prediction is, almost without exception, not evidence for the hypothesis.  This is certainly true in the case of LSU and Theism.  In fact, here is another better approach: we can simply note that P(LSU | T, B) = P(LSU | B) = P(LSU | N, B) and terminate Barnes's argument at the most general level. 

« Last Edit: December 07, 2016, 01:35:22 am by cnearing »

7

LukeB

  • **
  • 6 Posts
    • View Profile
Re: DEBATE: Fine-tuning Constitutes Evidence for Theism over Naturalism
« Reply #7 on: December 07, 2016, 07:55:42 pm »
The "problem of past evidence"

The problem has been completely misunderstood. Take the classic paper by Glymour (http://fitelson.org/probability/glymour.pdf):
Quote
"Scientists commonly argue for their theories from evidence known long before the theories were introduced. ... Old evidence can in fact confirm new theory, but according to Bayesian kinematics it cannot."

Now, Glymour et al. are wrong, I contend, in saying that the Bayesian can't learn from old evidence. The proof (page 86 of Glymour) supposes that the Bayesian must condition every probability on everything we know. Then, the likelihood of old evidence is always 1. While this might be true of subjective Bayesians, it is not true of the approach of Cox, Jaynes et al. E is taken as given in the posterior p(T | EB), but not in the likelihood p(E | TB). The whole point of Bayes theorem is to move E "out the front". This is why one only sees the "problem of old evidence" in the philosophy literature - any competent statistician sees straight through it. Arguing that "P(LSU | T, B) = P(LSU | B) = P(LSU | N, B)" because B -> LSU is just incompetent. That's not how to use Bayes theorem.

But more importantly, no one - neither Bayes's philosophical critics (Glymour, Earman, van Fraassen, Ellis ...) nor defenders (Garber, Howson, Jeffrey ...) - argues that theories cannot be confirmed by old evidence. They are arguing the exact opposite - we can learn from old evidence, so any model of inference that can't is deficient. It's a criticism of Bayes, not post-hocness. Einstein's GR explains the perihelion shift of Mercury. Try telling a physicist that "GR does not tell us anything about the solar system beyond what our background information tells us already."

Seeing this, the argument that theism is ad hoc with respect to life evaporates.

Of Likelihoods

Did I demand "explicitly calculable likelihoods"? I addressed this in my opening: "the force of the fine-tuning argument is only turned back ... if the probability of God creating a life-permitting universe is comparable to one in 10^90. That, I contend, is not much of a burden on the theist." I never demanded an exact number. Any honest approximation will do as a starting point.

The question is whether the objections to my approach are in principle or in practice. If they are in principle, then naturalism is not sufficiently well defined to generate even approximate likelihoods. If we can't get any handle on them at all, this strongly suggests that naturalism is too ill-defined to think about.

If it's an in practice problem, then critique my calculation. But these were aimed at straw men: I never claimed that "ECC is a fundamental descriptor of all physical reality", nor did I propose a uniform distribution over every fundamental constant. Having been told "We can’t model our ignorance in this case [the ECC] with a uniform probability distribution.  Uniform distributions can only cover finite spaces", we are later told that "he actually can choose non-arbitrary upper and lower bounds for ECC. That's fine."

Ta da!

A Better Approach?

No. Insisting I use the law of total probability to calculate a number I already know is an unnecessary complication, not a better method. It's like choosing the order in which to calculate 1 + 2 + 3. It doesn't matter.

Here's proof. Divide the range of naturalistic hypotheses (N) into those that can be described by quantum field theory (QFT) + GR (N1) and those that can't (N2).

        p(LSU|N) = p(LSU|N1) p(N1|N) + p(LSU|N2) p(N2|N)

Create a partition of N1 into members n according to the value of ECC (in Planck units) in that universe. Now, p(LSU|n) = 0 for all n such that abs(ECC) > 10^-90. Further suppose (implausibly but conservatively) that every naturalistic universe with abs(ECC) < 10^-90 supports life. Then,

  • N’  := {n | P(LSU|n) >= P(LSU|T)} is the set of n with abs(ECC) < 10^-90.
  • N’’ := {n | P(LSU|n) < P(LSU|T)}  is the set of n with abs(ECC) > 10^-90.

Now, aggregate. On N1 (given QFT+GR), the natural scale for the ECC is the Planck scale. So, conservatively, we model our ignorance using a uniform distribution between finite limits:

p(LSU|N1) = p(LSU|N')p(N'|N1) + p(LSU|N'')p(N''|N1)
        = 1 x 10^-90  +  0 x (1 - 10^-90)
        = 10^-90

Unsurprisingly, we get the same result. It's not a better approach. It's the same approach, hidden behind a smoke-screen of unnecessary complication.
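The aggregation above can be checked mechanically. A minimal sketch, assuming (as in the text) a uniform distribution for the ECC on [-1, 1] in Planck units and the deliberately conservative rule that every universe inside the window supports life:

```python
# Mechanical check of the aggregation above, assuming (as in the text) a
# uniform distribution for the ECC on [-1, 1] in Planck units, and the
# conservative rule: p(LSU|n) = 1 inside the window, 0 outside.

window = 1e-90  # life permitted only when abs(ECC) < 10^-90

# p(N'|N1): mass of {n : abs(ECC) < 10^-90} under Uniform(-1, 1)
p_n_prime = (2 * window) / 2
p_n_double_prime = 1 - p_n_prime  # p(N''|N1)

# Law of total probability over the partition {N', N''}
p_lsu_N1 = 1.0 * p_n_prime + 0.0 * p_n_double_prime
print(p_lsu_N1)  # 1e-90
```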

Now, there are other terms in the expression for p(LSU|N). These are currently unknown. If they are in principle unknowable, then p(LSU|N) is in principle unknowable, and we'll never know whether naturalism is plausible in light of the evidence. On the other hand, if they are knowable but as yet unknown, then all we can say is that the best handle we have on the problem tells us that naturalism is extraordinarily improbable. Which is exactly my point.

The Crunch

Does my calculation represent the likelihood of a life-permitting universe on naturalism in an as-best-we-know unbiased way? I gave my reasons. What are the reasons against?

We are told:
  • "Barnes's straw man hypothesis is rubbish."
  • "It is not equivalent to naturalism and it is not representative of naturalism."
  • "P(life-supporting-universe | Luke Barnes's Hypothesis) =/= P(life-supporting-universe | naturalism). "
  • "This approach is a non-starter, and a well-known, rudimentary error in probabilistic inference."

Still no reasons, only repetition. Finally,
  • "While it is true that scientists do usually choose to restrict themselves to hypotheses for which the likelihoods are explicitly calculable (for obvious reasons) they also (if they are conscientious) restrict their conclusions commensurately."

Having very carefully and deliberately explained from the beginning that I was restricting myself to a subset of the problem, this objection has no force. Again: the best handle we have on the problem tells us that naturalism is extraordinarily improbable. And because it is not biased strongly against naturalism (if anything, it's biased in favour), we have reason to believe that this is not just the best handle but a good handle.

So no reason has been given that my calculation is "not representative of naturalism".

Whose burden?

One final curiosity. At best, the likelihood of a life-permitting universe on naturalism is, at present, completely unknown. Note, then, the following comments about the supposed "better approach":
  • "this is essentially what Barnes has to address if he wants to salvage his argument"
  • "If he insists on attempting to construct arguments against this sort of space, this is the sort of thing he ought to spend some time looking into."

But wait a moment: why is it up to me to calculate the likelihood of the data on naturalism? Shouldn't the naturalist be champing at the bit to test their ideas against evidence? For all the talk amongst "the brights" of the importance of evidence and being open to changing one's mind, and with physics handing us almost everything we need on a platter, there is a curious lack of enthusiasm. I'm the only one who calculated anything.

At the very least, we can say that the naturalist has no idea whether naturalism is plausible or not in light of the life-permitting nature of our universe. The theist who sees a natural connection between a good God and the creation of moral agents is in the rationally superior position of knowing that their hypothesis can explain the facts. The theist is unsurprised; the naturalist has no idea what to think.

8

cnearing

Re: DEBATE: Fine-tuning Constitutes Evidence for Theism over Naturalism
« Reply #8 on: December 08, 2016, 01:40:40 am »
The problem of past evidence

I agree with Barnes that the Bayesian approach does not leave us fundamentally incapable of updating on past evidence, as Glymour et al. suggest, but this is not at all the whole of the story. As Barnes points out, the Bayesian approach requires us to separate out some observation of interest from our background data.

The problem is that this is not always doable.  In particular, there are two things we need to watch out for:

First, cases where the observation we pick out from our background information is not one for which our hypotheses offer explicitly calculable likelihoods.  The problem here is that, without explicitly calculable likelihoods, we rely on intuition or other subconscious heuristics, and we can reliably expect these to be informed by information we have internalized.  Even if we say that we are pulling the observation out of background data, we literally lack the mental machinery required to actually do that.

Second, cases where the observation can't be removed from our background information, because it is foundational to the process of inference itself.

The observation that the universe supports life, in the context of Barnes's argument, falls into both categories. First, Barnes is asking us to estimate likelihoods for a fact that is at the root of our background knowledge. It is completely unreasonable to expect a human to do this, and no competent Bayesian would do so.

In addition, the observation that the universe supports life is logically inextricable from our background information.  The act of inference requires that we are indeed alive. 

One can only take Barnes's approach if one can reasonably separate the observation of interest from one's background information.  This is impossible in the case of LSU. 

Barnes has not escaped the problem of past evidence. 

Of likelihoods and a better approach

The objection to Barnes's approach is an in principle objection, but, amazingly, he still seems not to have understood it.  My in-principle objection does not state that naturalism is not sufficiently well defined to get a handle on, but that Barnes's representative hypothesis is not, in fact, representative at all.

Barnes goes on to write, in response to my suggested approach, that

Quote
Insisting I use the law of total probability to calculate a number I already know is an unnecessary complication, not a better method.

But, of course, he certainly doesn't know the numbers that his argument requires: p(LSU|N) and p(LSU|T). The only number he has calculated is the probability of selecting a value for ECC that lies within a certain range under a uniform distribution. It is very rare that one can replace a difficult problem with a more tractable one and still get an answer to the difficult problem. This is what Barnes has tried to do, and this is not a case where that approach can work.


Is this P(LSU|N)?  Well, as the law of total probability tells us, no.  It isn't. 

Is it an upper bound on P(LSU|N)?  Again, as the law of total probability shows, no.  It isn't.

It isn't actually a relevant number at all.  This is the central problem in Barnes's argument.  It rests on the ratio between two likelihoods, and Barnes has calculated neither.  Barnes knows neither.  Barnes has no meaningful estimate of either.  Barnes *can't* have a meaningful estimate of either, because the likelihood is not explicitly calculable for either hypothesis, and the observation cannot be reasonably separated from our background information.

Barnes would like to paint my objections to his argument as objections to the methodology he has attempted to employ, but that is simply not correct. The methodology is generally fine; it just doesn't do what Barnes is trying to use it to do. Bayesianism works, as long as you make careful choices.

Barnes just hasn't. 

Instead he has chosen an observation that cannot be reasonably separated from our background information and two hypotheses which fail to allow for explicit calculations for the likelihood of that observation.  This says nothing about the viability of naturalism or theism (much less science or Bayesianism).  The fact that Barnes cannot actually use his chosen observation to argue for or against naturalism is not a failure of naturalism as a theory.  We can use probability theory to argue for or against naturalism along other avenues, just as we can with theism.  Again, the problem lies entirely with Barnes's poor choices. 

He writes,

Quote
Now, there are other terms in the expression for p(LSU|N). These are currently unknown. If they are in principle unknowable, then p(LSU|N) is in principle unknowable, and we'll never know whether naturalism is plausible in light of the evidence. On the other hand, if they are knowable but as yet unknown, then all we can say is that the best handle we have on the problem tells us that naturalism is extraordinarily improbable. Which is exactly my point.

Both claims are trivially false. 

Even if p(LSU|N) is unknowable, so what?  All this would mean is that Barnes has chosen a poor piece of evidence for his argument against naturalism.  He says "the evidence," but what he actually means is "the single piece of evidence he has attempted to use."  The problem cannot be generalized to all evidence.

Let's say that p(LSU|N) is unknown, but potentially knowable.  Does this mean that he has offered the best available handle on the problem?  Not at all.  He hasn't offered any handle on the problem.  By ignoring nearly the entire space of naturalistic hypotheses, he has managed to calculate only one number which has almost no bearing on the actual term of interest, p(LSU|N).

It remains that there is no argument, and this (contrary to Barnes's claim) is something I have explained time and time again.

The Burden

In closing, Barnes writes,

Quote
But wait a moment: why is it up to me to calculate the likelihood of the data on naturalism? Shouldn't the naturalist be champing at the bit to test their ideas against evidence?

I am not a naturalist.  I have no interest in defending naturalism.  My interest is in the methodology: I value the method of probabilistic inference, and my goal here is to defend it against Barnes's perversion of it.  I am not arguing for naturalism, or indeed against theism.  That isn't the topic of our debate.

The topic of our debate is whether fine-tuning constitutes evidence for theism over naturalism.  The observations, LSU and fine tuning, were chosen by Barnes.  The hypotheses, theism and naturalism, were chosen by Barnes.  The focal point of our debate is Barnes's argument for the claim that P(LSU|N) < P(LSU|T), and his claim that fine tuning somehow plays a role in establishing this.  His goal is to defend his evaluation of these two likelihoods, and he has categorically failed in that regard.  That was his burden, and he has fumbled it.

My burden is to show that his attempt to evaluate these two likelihoods is flawed.  I have done so.  His evaluations are not flawed because he has chosen to use probability theory or the Bayesian formalism, but because he has chosen hypotheses and observations which are not amenable to the calculations he actually needs to perform. 

Naturalists and theists are equally unsurprised by the observation that the universe supports life.  It simply isn't an observation that has any leverage on either of these beliefs. The theist knows that his hypothesis is one that can "explain" this fact (which is unsurprising, since that is what it was crafted to do).  The naturalist knows that there are any number of naturalistic hypotheses which can explain this fact as well or better than theism.  Neither is perturbed at all, and if they are competent they will recognize that Barnes has butchered probability theory and the Bayesian formalism far beyond the point where his argument can make any meaningful contribution to their debate.
« Last Edit: December 08, 2016, 02:20:51 am by cnearing »
P((A => B), A) = P(A => B) + P(A) - 1