Assumptions for Naturalness

Why did physicists expect to see something new at the LHC, more than just the Higgs boson? Mostly, because of something called naturalness.

Naturalness, broadly speaking, is the idea that there shouldn’t be coincidences in physics. If two numbers that appear in your theory cancel out almost perfectly, there should be a reason that they cancel. Put another way, if your theory has a dimensionless constant in it, that constant should be close to one.

(To see why these two concepts are the same, think about a theory where two large numbers miraculously almost cancel, leaving just a small difference. Take the ratio of one of those large numbers to the difference, and you get a very large dimensionless number.)
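
To put toy numbers on that (made up purely for illustration): if

$$a = 1.0000000001 \times 10^{16}, \qquad b = 1.0000000000 \times 10^{16}, \qquad a - b = 10^{6},$$

then even though $a$ and $b$ look innocuous on their own, the theory secretly contains the dimensionless number

$$\frac{a}{a-b} = 10^{10},$$

which is very far from one, and which naturalness says should have an explanation.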

You might have heard it said that the mass of the Higgs boson is “unnatural”. There are many different physical processes that affect what we measure as the mass of the Higgs. We don’t know exactly how big these effects are, but we do know that they grow with the scale of “new physics” (aka the mass of any new particles we might yet discover), and that they have to cancel to give the Higgs mass we observe. If we don’t see any new particles, the Higgs mass starts looking more and more unnatural, driving some physicists to the idea of a “multiverse”.
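
(For the experts, or the curious: the schematic formula behind this statement is the usual one for a scalar mass, something like

$$m_H^2 \;\sim\; m_{\text{bare}}^2 \;+\; c\,\frac{\Lambda^2}{16\pi^2},$$

where $\Lambda$ is the mass scale of the new physics and $c$ is some combination of couplings whose details depend on what runs in the loops. The larger $\Lambda$ is, the more delicately the two terms have to cancel against each other to leave the observed 125 GeV.)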

If you find parts of this argument hokey, you’re not alone. Critics of naturalness point out that we don’t really have a good reason to favor “numbers close to one”, nor do we have any way to quantify how “bad” a number far from one is (we don’t know the probability distribution, in other words). They critique theories that do preserve naturalness, like supersymmetry, for being increasingly complicated and unwieldy, violating Occam’s razor. And in some cases they act baffled by the assumption that there should be any “new physics” at all.

Some of these criticisms are reasonable, but some are distracting and off the mark. The problem is that the popular argument for naturalness leaves out some important assumptions. These assumptions are usually kept in mind by the people arguing for naturalness (at least the more careful people), but aren’t often made explicit. I’d like to state some of these assumptions. I’ll be framing the naturalness argument in a bit of an unusual (if not unprecedented) way. My goal is to show that some criticisms of naturalness don’t really work, while others still make sense.

I’d like to state the naturalness argument as follows:

  1. The universe should be ultimately described by a theory with no free dimensionless parameters at all. (For the experts: the theory should also be UV-finite.)
  2. We are reasonably familiar with theories of the sort described in 1., we know roughly what they can look like.
  3. If we look at such a theory at low energies, it will appear to have dimensionless parameters again, based on the energy where we “cut off” our description. We understand this process well enough to know what kinds of values these parameters can take, starting from 2.
  4. Point 3. can only be consistent with the observed mass of the Higgs if there is some “new physics” at around the scales the LHC can measure. That is, there is no known way to start with a theory like those of 2. and get the observed Higgs mass without new particles.

Point 1. is often not explicitly stated. It’s an assumption, one that sits in the back of a lot of physicists’ minds and guides their reasoning. I’m really not sure I can fully justify it; it seems like it should be a consequence of what a final theory is.

(For the experts: you’re probably wondering why I’m insisting on a theory with no free parameters, when usually this argument just demands UV-finiteness. I demand this here because I think this is the core reason why we worry about coincidences: free parameters of any intermediate theory must eventually be explained in a theory where those parameters are fixed, and “unnatural” coincidences are those we don’t expect to be able to fix in this way.)

Point 2. may sound like a stretch, but it’s less of one than you might think. We do know of a number of theories that have few or no dimensionless parameters (and that are UV-finite); they just don’t describe the real world. Treating these theories as toy models, we can hopefully get some idea of how theories like this should look. We also have a candidate theory of this kind that could potentially describe the real world, M theory, but it’s not fleshed out enough to answer these kinds of questions definitively at this point. At best it’s another source of toy models.

Point 3. is where most of the technical arguments show up. If someone talking about naturalness starts talking about effective field theory and the renormalization group, they’re probably hashing out the details of point 3. Parts of this point are quite solid, but once again there are some assumptions that go into it, and I don’t think we can say that this point is entirely certain.
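
(To give a flavor of the technical machinery, here is one textbook example, with conventions that vary from reference to reference: an asymptotically free coupling with one-loop running

$$\mu\,\frac{dg}{d\mu} \;=\; -\frac{b_0}{16\pi^2}\,g^3$$

generates a scale

$$\Lambda \;=\; \mu\,\exp\!\left(-\frac{8\pi^2}{b_0\,g^2(\mu)}\right),$$

so an order-one coupling at a high scale $\mu$ can yield an exponentially small dimensionless ratio $\Lambda/\mu$, which is how QCD gets its scale. Arguments about point 3. are largely arguments about which low-energy numbers can, and which can’t, be generated by mechanisms of this sort.)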

Once you’ve accepted the arguments behind points 1.-3., point 4. follows. The Higgs is unnatural, and you end up expecting new physics.

Framed in this way, arguments about the probability distribution of parameters are missing the point, as are arguments from Occam’s razor.

The point is not that the Standard Model has unlikely parameters, or that some in-between theory has unlikely parameters. The point is that there is no known way to start with the kind of theory that could be an ultimate description of the universe and end up with something like the observed Higgs and no detectable new physics. Such a theory isn’t merely unlikely; if you take this argument seriously, it’s impossible. If your theory gets around this argument, it can be as cumbersome and Occam’s-razor-violating as it wants; it’s still a better shot than no possible theory at all.

In general, the smarter critics of naturalness are aware of this kind of argument, and don’t just talk probabilities. Instead, they reject some combination of point 2. and point 3.

This is more reasonable, because point 2. and point 3. are, on some level, arguments from ignorance. We don’t know of a theory with no dimensionless parameters that can give something like the Higgs with no detectable new physics, but maybe we’re just not trying hard enough. Given how murky our understanding of M theory is, maybe we just don’t know enough to make this kind of argument yet, and the whole thing is premature. This is where probability can sneak back in, not as some sort of probability distribution on the parameters of physics but just as an estimate of our own ability to come up with new theories. We have to guess what kinds of theories can make sense, and we may well just not know enough to make that guess.

One thing I’d like to know is how many critics of naturalness reject point 1. Because point 1. isn’t usually stated explicitly, it isn’t often responded to explicitly either. The way some critics of naturalness talk makes me suspect that they reject point 1., that they honestly believe that the final theory might simply have some unexplained dimensionless numbers in it that we can only fix through measurement. I’m curious whether they actually think this, or whether I’m misreading them.

There’s a general point to be made here about framing. Suppose that tomorrow someone figures out a way to start with a theory with no dimensionless parameters and plausibly end up with a theory that describes our world, matching all existing experiments. (People have certainly been trying.) Does this mean naturalness was never a problem after all? Or does that mean that this person solved the naturalness problem?

Those sound like very different statements, but it should be clear at this point that they’re not. In principle, nothing distinguishes them. In practice, people will probably frame the result one way or another based on how interesting the solution is.

If it turns out we were missing something obvious, or if we were extremely premature in our argument, then in some sense naturalness was never a real problem. But if we were missing something subtle, something deep that teaches us something important about the world, then it should be fair to describe it as a real solution to a real problem, to cite “solving naturalness” as one of the advantages of the new theory.

If you ask for my opinion? You probably shouldn’t; I’m quite far from an expert in this corner of physics, not being a phenomenologist. But if you insist on asking anyway, I suspect there probably is something wrong with the naturalness argument. That said, I expect that whatever we’re missing will be something subtle and interesting, and that naturalness is a real problem that really needs to be solved.

29 thoughts on “Assumptions for Naturalness”

  1. nostalgebraist

    Thanks, this is a framing I hadn’t quite seen before. (I had read the post by Matt Strassler and had at least sort of understood it, but I would not have been able to state your 4-point argument after only reading that post.)

    Framed in these terms, I guess my “objection” to the argument is that I don’t understand what point 1 really means. Naively, it seems like you can always take two theories and say that they differ by a “parameter,” where the parameter is some sort of index that tells you which theory to look up from a table of theories. Informally, I can see that these indexing parameters tend to look different from the sorts of parameters we’re worried about here. For instance, when we want to single out a specific theory, we use integers (specifying, say, the dimension of something), or the names of certain groups or differential operators, not real numbers. (Sometimes we can frame a distinction as “is this real number equal to zero?”, but that’s a very different thing from talking about some exact nonzero real value.)

    Mathematically, all of these parameter spaces (integers, groups, operators) have somewhat different properties from the real numbers, so it is possible to imagine a mathematically natural (!) sense in which having to specify elements of these spaces is “okay” but having to specify elements of the real numbers is “not okay.” But I never see this mathematical distinction made precise in these discussions. I’d like to know, first, whether this distinction has been made precise by someone somewhere, and second, why (in terms of experience with other physical theories?) this sort of distinction strikes particle physicists as intuitive (whether or not it has been made precise).

    1. 4gravitonsandagradstudent Post author

      Discrete vs continuous is indeed probably the appropriate distinction here, though I don’t think I’ve ever seen it formally set out anywhere (again, people usually aren’t explicit about point 1. to begin with!)

      I think from a physicist’s perspective, the discrete choice of theory isn’t really a matter of selecting a number from a lookup table, it’s about setting out some physics-motivated conditions and seeing if there is a unique theory that meets these conditions. So M theory is conjectured to be the unique theory that has all five superstring theories as limits, which in turn are the only consistent superstring theories. If you’ve managed to convince yourself that’s the right direction to look in, then there isn’t a choice of theory, there’s just one (admittedly vague and conjectural) theory you can choose.

      More broadly, that’s the sort of thing that a “final theory” would have to be. In order for a theory to really be final, it would have to pin you into a corner in some way, such that it was the only theory that “made sense”. Otherwise, you can find a “more final theory” by figuring out the reason why you chose that theory and not another one. Maybe you can only ever really do that by imposing some conditions and arguing they’re reasonable, but that still seems qualitatively distinct from a continuous tunable parameter.

  2. Jacques Distler

    I’m not quite sure I understand your point 1. (Let me, for the purposes of this comment, ignore gravity and phrase my question in the language of Wilsonian Quantum Field Theory. That will have the advantage of illustrating the source of my confusion, without getting bogged down in the murkiness of M theory.)

    I think that, by the phrase “no free dimensionless parameters,” you mean that the conformal manifold of the UV fixed point should be zero-dimensional. But that’s not all there is: a continuum quantum field theory (à la Wilson) is an RG trajectory emanating from a UV fixed point. Even if the fixed point is isolated, the space of relevant deformations is almost invariably greater than 1-dimensional. Which means that there are (n-1) dimensionless parameters needed to label which RG-trajectory we are on.

    I don’t think you can get around talking about the behaviour of those RG trajectories (and why zero Higgs mass is repulsive). The whole thrust of point 1 is that there is new physics at some scale. What we’re trying to rule out is that there is a very large “desert” between the electroweak scale and that new scale. A very fine account of points 2-4 is here.

    1. 4gravitonsandagradstudent Post author

      You’re probably right, but I’m a bit too rusty with respect to the Wilsonian picture of QFT to see it. At the risk of signaling my ignorance rather loudly, while I get why you can have multiple RG trajectories in a condensed matter system where you have temperature and the like, in a high-energy physics context I’m having trouble thinking about what this means in the usual picture of starting with a Lagrangian and coarse-graining. Can you explain where multiple RG trajectories come from, from that perspective?

      The role of point 1. isn’t merely to establish that there is new physics at some scale, but that the world can’t “just be fine-tuned”. That is, some critics of naturalness seem to be tempted to bite the bullet and say that the world just has fine-tuned parameters in it; the point is that if you believe point 1. you can’t do that, because ultimately there shouldn’t be any free parameters, just unknown high-energy physics, and any way you fix the parameters in some intermediate theory has to be a consequence of the higher-energy theory.

      1. Jacques Distler

        Can you explain where multiple RG trajectories come from, from that perspective?

        Sure.

        At the fixed point theory, we can divide the (infinite-dimensional) space of local operators into “irrelevant,” “marginal” and “relevant.” My interpretation of your point 1 is that there are no marginal perturbations. If we perturb the theory by an irrelevant operator, we just flow back to the original fixed point. What remains are the (finite number of) relevant perturbations. Turning on any combination of relevant perturbations induces an RG flow. To specify which RG trajectory we are on (i.e., which continuum QFT we are describing) requires (n-1) dimensionless parameters.

        That this number is finite is the generalized notion of renormalizability à la Wilson.

        1. 4gravitonsandagradstudent Post author

          More stupid questions, so please bear with me:

          In practice, in specific physical systems, what determines which relevant operators are turned on and which aren’t?

          I’m more used to thinking about this from the perspective of matching parameters in EFT. You’ve got some fixed high-energy theory and a low-energy theory with some higher-dimension operators, and you fix the coefficients of those operators by computing in both theories at the cutoff scale and matching. From that perspective, I don’t see where any ambiguity arises in which operators are turned on and which aren’t, it’s fixed by matching to the high-energy theory. In this framing, I get how you can have a single IR theory that is linked by RG trajectories to multiple UV theories, but not the reverse. (Which yes almost certainly means I’m missing something quite basic, can you enlighten me?)
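
          (Schematically, what I mean by matching, just so we’re talking about the same thing: an effective Lagrangian below the cutoff $\Lambda$ of the rough form

          $$\mathcal{L}_{\text{EFT}} \;=\; \mathcal{L}_{\text{light}} \;+\; \sum_i \frac{c_i}{\Lambda^{d_i-4}}\,\mathcal{O}_i,$$

          with $d_i$ the dimension of the operator $\mathcal{O}_i$, and with the dimensionless coefficients $c_i$ fixed by demanding that the EFT and the full high-energy theory agree on low-energy observables at $\mu \sim \Lambda$.)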

          One question which might help clarify matters: suppose I have some specific physical system, like the electrons in a piece of metal with fixed external conditions. In the Wilsonian point of view, is this described by a single RG trajectory, or by more than one?

          Another thing, which may be the initial point of confusion: the UV theory in point 1. should not be a CFT, because of course we do not live in a CFT. Maybe in Wilsonian language you would describe this as a UV CFT with some specific relevant deformations? (Talking out my ass here, again I’m shamefully quite unfamiliar with the Wilsonian approach.) If so, by point 1. I mean not that the UV theory should have no marginal deformations, but that in the UV all local operators, relevant, irrelevant, and marginal, should have their coefficients fixed by whatever conditions specify the theory.

        2. 4gravitonsandagradstudent Post author

          To avoid putting my foot in my mouth too much with the last point there: the theory in point 1. could be a spontaneously broken CFT, it just can’t be an unbroken CFT. (Additional stupid question: is the choice of RG trajectory related to the choice of vacuum?)

          1. Jacques Distler

            OK, so let’s review Wilson’s approach to QFT.

            In grad school, we learned about renormalizable field theories, which have a finite number of couplings, which run under RG. In order for the RG equations not to break down at short distances, we concentrated on a special class of renormalizable theories (“asymptotically free” theories) where the short distance behaviour is arbitrarily weakly coupled.

            In the 1970s, Wilson pointed out that this is a very special case of a more general story, where the ultraviolet behaviour is governed by a CFT (an RG fixed-point theory). The asymptotically-free case is just the one where the UV fixed point is Gaussian (“free”).

            But, if we’re brave enough, we can contemplate other cases (this goes by the name “asymptotic safety” in some circles) where the UV fixed point is some interacting CFT.

            In the CFT, operators are graded by their (exact!) scaling dimensions, and operators with Δ<d are called relevant. If we perturb the CFT by adding a relevant operator to the “Lagrangian”, we get a theory with a nontrivial RG-running.

            A continuum QFT, according to Wilson, is just one such trajectory. Of course, a CFT typically has more than one (but still a finite number of) relevant operator(s). If there are n such operators, then we have to specify n numbers, in order to say exactly which relevant perturbation we turn on. One combination of these n numbers gets traded for a dimensionful scale, leaving (n-1) dimensionless parameters to label the distinct continuum QFTs. These are exactly analogous to the renormalizable couplings that we learned about in grad school.
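
            In symbols, as a rough sketch (writing $M$ for an arbitrary reference scale and $c_i$ for dimensionless numbers): perturbing by the relevant operators gives

            $$S \;=\; S_{\text{CFT}} \;+\; \sum_{i=1}^{n} g_i \int d^d x\,\mathcal{O}_i, \qquad [g_i] = d - \Delta_i > 0,$$

            and writing each coupling as $g_i = c_i\,M^{\,d-\Delta_i}$, one combination of the $c_i$ can be absorbed into the choice of $M$, leaving $(n-1)$ dimensionless labels for the trajectory.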

            In special cases (familiar to those of us who spend too much time thinking about supersymmetric theories), there’s another thing you can do to induce a nontrivial RG flow, namely move out along a moduli space of vacua of the original CFT.

            I’m assuming that you expect that any would-be moduli space of vacua is lifted in any putative theory of the real world (there being no exactly massless scalars). So that latter option isn’t available.

            1. 4gravitonsandagradstudent Post author

              Thanks for the review. I’ll have to think more about this, but I think what I mean from this perspective is that in a “final theory” there should be one particular RG trajectory that’s singled out in a way that doesn’t leave any parameters free. But I may still be confusing myself in some fashion. Indeed, if said theory would otherwise have a moduli space it needs to be lifted in some way.

                1. 4gravitonsandagradstudent Post author

                  Ok, I get that you’re trying to be pedagogical here but if you’re trying to make a point please just make it. It sounds like you want to argue that the framing in this post, even if it might make sense for the Higgs naturalness problem, doesn’t make sense for the cosmological constant naturalness problem. I don’t really see any meaningful difference. Point 4. changes, because supersymmetry doesn’t solve the cosmological constant problem, so instead of “thus we should see new particles” the conclusion is “thus we reach a contradiction, and we probably are missing something important”. But that doesn’t seem to have anything to do with what you brought up above, so I’m assuming that’s not the point you’re intending to make here.

                  1. Jacques Distler

                    Zero cosmological constant is RG-unstable in the same way that zero Higgs mass is unstable. Standard naturalness arguments suggest that we ought to see new physics at a scale not much higher than the electroweak scale. When those same arguments are applied to the cosmological constant, they fail miserably. From an effective field theorist’s perspective, the latter is just fine-tuned. Full stop.

                    You are, apparently, claiming something stronger: when embedded in a “final theory,” “ought” is changed to “must.” There must be new physics at some scale not too much higher than the electroweak scale.

                    I don’t quite understand the argument, but I was wondering what the same line of reasoning would say about the cosmological constant.

                    1. 4gravitonsandagradstudent Post author

                      Ah ok, that’s all you were going for?

                      The point of changing “ought” to “must” here is to clarify that it’s not that we care about fine-tuning because it’s unlikely in some known probability distribution, but because it represents something we don’t understand and still need to explain.

                      I don’t think it’s controversial that the CC problem is something we need to explain. I don’t think I’ve ever met anyone who thinks the CC is “just fine-tuned” (though maybe I don’t hang out with enough effective field theorists 😛 ). I’ve met people who think it’s determined anthropically, I’ve met people who think it’s a sign that there’s some deep quantum gravity mechanism we don’t understand, I’ve met people who think it shows that our whole notion of fine-tuning, or naturalness, or the RG, is off the mark. But I have trouble understanding the mindset that it’s “just fine-tuned”, and before now I don’t think I’ve seen it in practice.

                    2. Jacques Distler

                      Absolutely we need to understand the smallness of both the cosmological constant and the Higgs mass. My point was just that there is no new physics at a scale somewhat above the cosmological constant scale* that explains the smallness of the latter. Hence (I would surmise) there can’t be an argument which concludes that there must be new physics there.

                      Mutatis mutandis for the Higgs mass.

                      So, while I still think there ought to be new physics at a scale not too much higher than the electroweak scale, I have no reason to believe that there must be.

                      Now, if you believe Vafa and collaborators, there is no cosmological constant (the accelerated expansion that we see must be the result of some quintessence-like scalar field — which introduces a host of other problems). While that does introduce new physics (fluctuations of the aforementioned scalar field) at a low energy scale, it doesn’t really solve the fine-tuning problem.

                    3. 4gravitonsandagradstudent Post author

                      Sure, I agree. This is merely an argument that if you accept the other assumptions there must be the corresponding new physics. As I mention at the end of the post, my suspicion is that we do indeed need to reject some of the other assumptions!

  3. Riccardo Di Sipio

    Regarding point 1, I have the impression this implies that any self-consistent theory with no free parameters has to be based on natural numbers. I’m not saying I agree with that, and I’m also afraid that such a claim can open the gates of pseudo-science and numerology. Yikes.

    1. 4gravitonsandagradstudent Post author

      Natural numbers, pi, and the like, yeah.

      I agree there’s a risk of numerology there, but I think it’s a milder one than most. When numerology gets used in a pseudoscientific way, it’s generally trying to argue that some number is some specific combination of other numbers, and to draw some conclusion from that without a plausible physics case. Here, the numerology aspect is much more general, it’s just arguing that certain kinds of numbers seem to be reachable in QFT, and others don’t. I agree that this is still risky, it’s an argument from experience in an area where we may simply not have enough experience. I don’t think you can make a naturalness argument without invoking something “numerological” like this.

      1. Riccardo Di Sipio

        I agree with what you say. What I had in mind is something like Riemann’s hypothesis: the distribution of prime numbers is a function basically without any free parameters. Once you have “invented” (or “discovered”?) natural numbers, it’s an embedded feature of that group. Similarly, I think that maybe particles are in fact “something else” if seen from a different, deeper paradigm, but if that “something else” is based on some mathematical group (a claim that does not seem too far-fetched to me), then its elements may have some “built-in” relationships that have no free parameters. In the end, I don’t see any easy way to avoid somehow including real numbers (think of symmetries), and hence integers and natural numbers (I’m thinking here of the Dedekind–Peano axioms).

        1. 4gravitonsandagradstudent Post author

          Sure, I agree that there’s nothing that rules out real-numbered parameters per se (again, powers of pi are certainly not natural numbers). It’s a question of specifically what kinds of numbers you can get playing around with QFT, not what numbers you can get more generally.

          1. Riccardo Di Sipio

            Hi,
            and sorry for the late reply. Btw, I’m really enjoying the conversation! When I wrote about a theory coming out from natural numbers, what I actually had in mind was the late Michael Atiyah’s Fine Structure Constant explanation. I went through the “proof” (I can send you the .pdf of the preprint) and I think it’s either too obscure to me or just complete nonsense, but I also believe he grasped somehow something interesting. If I understood his point correctly, he’s basically saying that there are only four self-consistent number systems: reals, complex, quaternions and octonions. My personal take, setting that aside for now, is that all of them stem in some way from natural numbers. So Atiyah’s conjecture is that given e.g. the reals, there is a structure related to what he called Todd’s functions (with no free parameters) that in a given limit gives a fixed value that can be interpreted as the fine structure constant. Nonsense or enlightenment? Dunno. Then, he claims that:
            * Reals -> EM
            * Complex -> Weak force
            * Quaternions -> Strong force
            * Octonions -> Gravity (“but the proof is more complicated”)

            Since there are no more than four number systems, there can only be four fundamental interactions, i.e. the ones that we have discovered so far. But then, where are fermions coming from?
            Let me ask once again: Nonsense or enlightenment? Whatever you believe of all of this, I think Atiyah’s conjecture goes towards the goal of finding a fundamental theory with no free parameters.

            1. 4gravitonsandagradstudent Post author

              A lot of people have played around with octonions, for some of the reasons you describe (though I think most of those attempts are a bit more rigorous/physically motivated than whatever Atiyah was doing). I don’t think most of those approaches have the identifications you’re talking about (Reals->EM and the like), because it’s pretty easy for something as loose as “four things”->”four things” to end up as a coincidence (and AFAIK you need the octonions to get SU(3)).

              I certainly agree that Atiyah had the sort of goal you describe; in the language of the post, he believed in something like Point 1. He wasn’t really approaching the problem in a quantum-field-theory-ish way, though. He was trying to derive the low-energy behavior of E&M from Todd functions; if that sort of thing made sense, then some element of points 2-3 would be wrong, since one of the most important implications of those points for the naturalness argument is that low-energy physics isn’t “fundamental”. I get the impression that Atiyah’s argument really thoroughly didn’t work (to the point that there were calculations described in the paper that just don’t give the numbers he claims they do), but again if something like that worked it would be a (quite unusual) solution to the naturalness problem.

  4. ohwilleke

    “The way some critics of naturalness talk makes me suspect that they reject point 1., that they honestly believe that the final theory might simply have some unexplained dimensionless numbers in it that we can only fix through measurement.”

    Point 1 seems incredibly presumptuous to me.

    I think that part of the problem is the assumption that a “final theory” from which everything follows from first principles is attainable or a reasonable goal with technology that we may have someday. Certainly, there is probably some way that we can come up with fewer free parameters than we have now, we’ve got more than two dozen of them between the SM, GR and a few strays not always included in the list (e.g. Planck’s constant). Maybe we can go from 30 parameters to half a dozen if we’re lucky before humanity goes extinct. But, we are in the business of going from SM + GR to SM + GR 2.0, which may very well be many iterations away from “the final theory”.

    Indeed, when we went from the proton-neutron-electron-photon-Newtonian gravity model, to the modern quark-charged lepton-neutrino-V boson-photon-gluon-GR model, we actually went from a less complicated model of the universe to a more complicated one, and it could easily get worse before it gets better. We hope it doesn’t, but it could.

    A related argument is the argument that neutrinos must be Majorana, or that Flavor Changing Neutral Currents or Proton Decay or Neutrinoless Double Beta Decay must occur (in the absence of some really strong constraints on the potential magnitude of all of those phenomena), because aggregate baryon number and aggregate lepton number at the time of the Big Bang has to be zero, so there must be some significantly baryon number or lepton number violating process beyond the SM.

    Only, we have absolutely nothing in the way of empirical evidence that compels that conclusion. Indeed, all available experimental evidence points to precisely the opposite conclusion, that B and L were not zero at t=0. We know that aggregate mass-energy in the universe was some finite non-zero number at the time of the Big Bang, which is entirely arbitrary and which we seem to have no problem with, and there is no compelling reason other than that it looks “beautiful” that B or L have to be zero at the time of the Big Bang either. And, even if there is some deep reason in a “final theory” why the initial conditions of the universe have the values of B and L that they had, in all likelihood, humanity has no means to figure out what that reason is because try as we might we aren’t located in a place where we can see what we need to see to find out. We can’t go back to the very instant of the Big Bang (or its context in some multiverse) to study that question any more than a fish is in a position to discover why water has the molecular mass that it does, or why the Earth has the exact finite amount of water that it does.

    Ultimately, assumptions made about what the laws of nature should be or what the physical constants in that theory should be are exceedingly presumptuous. If Nature has decided that the strong force does not have CP violation, then it doesn’t and the fact that it doesn’t isn’t something that needs to be explained. Or, maybe there are different kinds of explanations (e.g. that forces with massless carrier bosons should not have CP violation since the carrier bosons don’t experience the passage of time since they are always traveling at exactly “c”), that explain it without resort to axions or other contortions that we haven’t thought of. All of these arguments presume that we know much more than we do.

    If numbers like the Higgs boson mass seem crazy, maybe we’re just looking at the problem wrong. For example, it turns out that to the full extent possible to confirm this with existing measurements, the sum of the square of the masses of the fundamental particles in the SM is equal to the square of the Higgs vev (put another way, the sum of the Yukawas or Yukawa equivalents in the SM equals exactly a dimensionless 1, the “most natural number”). If we’d guessed that this relationship had existed, knowing what we did in the late 1990s when the top quark’s mass had been determined, we could have estimated the Higgs boson mass to great precision and there is nothing notable or improbable or finely tuned about it. We just end up explaining why the physical constants have their values from a different perspective than the clunkier way of looking at it that physicists who conceptualized the hierarchy problem did. Maybe we find that this sum of Yukawas equals 1 is profound and leads to other consequences, and maybe we don’t and it remains an “accidental symmetry” until we learn something else that we may never learn because we can’t observe what causes it.

    If the world looks “unnatural” it is far more likely that we are looking at the universe from the wrong perspective due to collectively shared habits from a common scientific education, and hence we are collectively all looking at the problem wrong, and not necessarily because there is any “new physics” to be discovered that solves these non-problems. The hierarchy problem, the strong CP problem, the baryogenesis and leptogenesis problems, the coincidence problem and more really don’t deserve to be called “problems”. Nature is what it is and we shouldn’t presume that it will bend to our uninformed preconceptions of what a final theory (and more realistically, the most final theory we can manage to discern) will look like, any more than we should assume that cows are spherical.

    When we talk about Naturalness, which is about what we think that Nature “should look like”, we have crossed over from science to natural philosophy, which is not science and is no more meaningful than theological discussions over how many angels can fit on a pinhead. Assuming that the laws of Nature fit assumption (1) really has nothing to support it other than gut feelings about beauty, and there is no good reason to think that the world is as neat and tidy as a few daydreaming theoretical physicists wish that it were.

  5. Manish Oza

    I’m not a physicist, so this really helped me understand what’s going on in ‘naturalness’ arguments – thanks! I wanted to ask about Point 1. Could you say more about why this follows from the concept of a final theory? I would’ve thought a final theory is (roughly) a theory that covers all physical phenomena and is simpler than any other theory that does the same. But I’m not sure why that precludes dimensionless numbers that we just have to measure.

    Philosophers sometimes talk about different senses of ‘possibility’ – for example, what’s logically or mathematically possible is broader than what’s physically possible. So a world where things could go faster than light would have to have different physical laws than ours, but it wouldn’t be logically impossible. The assumption is that, whatever the final theory is, there are logically possible alternatives to it. But it sort of sounded to me like, in your view, a final theory has to be the only logically possible one. Is that right?

    1. 4gravitonsandagradstudent Post author

      (Disclaimer: I’m mostly trying to capture what seem like prevalent intuitions among physicists. I don’t pretend I have a good argument for them, let alone a philosophically rigorous one.)

      I think logical necessity is too strict a requirement. I think what most physicists expect is something more like logically necessary, given certain (qualitative) assumptions. That is, a final theory doesn’t need to be the only theory that is mathematically and physically consistent, but I should be able to list a set of (sufficiently deep-seeming) principles such that the final theory is the only theory that obeys them. I used M theory as the example in the post: M theory is conjectured to be the unique theory that contains all of the possible superstring theories as limits, and there are people who would further conjecture that it is the only consistent theory of gravity. Either would fulfill this sort of criterion: you’re specifying the theory uniquely by reference to some property it has, and you don’t have any more free numbers to tweak. Provided these “deep-seeming principles” are “good enough”, people might be tempted to call this “physically necessary”: not in the sense of physical necessity you refer to in your comment, but in the sense that any other world wouldn’t “make physical sense”.

      As to why you would ever expect a final theory to be like that…essentially, this comes out of your “simpler than any other theory that does the same” clause. I might amend it to “more explanatory than any other theory that does the same” or “more predictive than any other theory that does the same”. If your theory has some free parameter, then it’s less (simple/explanatory/predictive) than a theory that tells you what that parameter is. So the only way for it to be final is if no such theory exists.

      I personally think such a theory must indeed always exist, but for reasons that are probably pretty loose. I just have a hard time imagining something about the universe that’s truly inexplicable. If it has any regularities at all, then people can find those regularities and characterize them, resulting in a simpler explanation. That may sound like cheating, but I don’t know if there’s actually any better justification for what we do in science. People act surprised that the universe follows mathematical laws, but I don’t think it’s surprising at all: anything with regularities is governed by some laws, that’s what regularities are.

      None of this means we can get to a final theory any time soon, of course. I think on some level I agree with ohwilleke, that we may simply be so embarrassingly far from a final theory that we should accept whatever arbitrary parameters we find in between. But if we could survive and do physics forever, I do think there’s a final theory we would eventually reach, and I don’t think it will have any free parameters.

      1. Manish Oza

        Thanks for this. It’s really interesting to me to hear about these kinds of intuitions among physicists, given how far they are from how philosophers tend to think about these things. To me, it seems like explanation could just run out at some point. Like, the final theory could include some free parameters whose value is just contingent, not explained by anything deeper. But I can see why that would be unsatisfying.

        What you said at the end reminds me of a distinction Kant makes near the end of the Critique of Pure Reason. There are some things we can know to apply to reality (‘constitutive principles’) and other things we can’t know to apply, but which are a useful guide in scientific theorizing (‘regulative principles’). For Kant, that the world is unified under a perfectly simple and complete set of laws (in other words: that a ‘final theory’ in your sense exists) is not something we can ever know. It’s possible that there might be no such thing. But it’s a useful guide, because it means we should always seek simpler and more complete laws than we’ve found so far.

        Of course, it’s not so clear why it would even be a useful guide if we don’t believe (as Kant did) that the laws of nature are in some sense imposed by the mind.

  6. Sabine Hossenfelder

    As several other commenters above, I also can’t quite make sense of your point 1. Can you write down any equation that does not contain a dimensionless parameter? Does a function like cos(x) contain a dimensionless parameter? You could argue no, because you can define the trigonometric functions algebraically. Then again isn’t it actually cos(1*x)? So, it doesn’t seem well-defined to me.

    Also, as Jacques already mentioned, the CC is fine-tuned (in that particular way) and there is de facto nothing problematic with it. You chose it to be some constant, done. What does this mean? It means that we already know that nature is not natural in the technical sense.

    Then again I’m not sure it matters because I don’t see why the existence of a final theory cannot raise an appearance of finetuning in the IR. The point being that without actually having the final theory, you would not know exactly which trajectory you are on.

    You dismiss arguments from probability, but you need those to quantify why you presumably have a problem. You would notice this if you tried to explain what prevents you from assuming that the threshold contributions just happen to cancel. Of course they could, right? Why not? Ah, because that doesn’t seem likely. Now quantify “not likely”.

    Finally let me add that since some people have made a habit of intentionally misunderstanding me, naturalness is a good criterion in some cases. But predicting new physics at the LHC isn’t one of those cases. That doesn’t mean there cannot be new physics. It just means there is no good reason to think there will be. /endDisclaimer

    Really the simplest way to see that this is not a good prediction is to notice that there is no problem with making any predictions for LHC energies with the SM the way it is. I explained some other problems with naturalness arguments here https://arxiv.org/abs/1801.02176. Or read my book.

    (Note: I will not subscribe to this thread and probably not read replies – too much clutter in my inbox already. If you want to continue this exchange, Google will tell you my email address.)

    1. 4gravitonsandagradstudent Post author

      Ok, less for Sabine since she, as stated, won’t read this, and more for other readers:

      I think I’ve mostly clarified, in my responses to other comments, why something like the “1” in “cos(1*x)” isn’t a parameter. The question isn’t about parameters involved in writing down your theory in some formalism, but about whether you can state that your theory is “the unique theory that does X”. (And yes you can smuggle parameters into X instead, but despite the potential subjectivity here I think it’s not so hard to see that a given X has no wiggle room of this type. “The unique consistent theory of gravity”, if that actually was unique, would be a pretty clear example of such an X.)

      As for whether threshold contributions could just happen to cancel, my whole point here is that nothing can “just happen” to do anything. If you accept point 1, then if they cancel it’s because some final theory makes them cancel. Then it’s just a question of whether a final theory that does that can actually exist. People who believe the naturalness argument works generally think that such a final theory can’t exist–not merely that it’s unlikely, but that it actually can’t, based on what they think they know about the forms such final theories can take. The only role probability has, as I mention in my post, is in qualifying whether these people actually know enough about the subject to make that judgement. That’s not a probability distribution on possible worlds, or laws of physics, or anything so exotic, it’s just an estimate of the scope of human knowledge.

      As I mention at the end of my post and in my reply to Jacques, I think they probably are wrong, minimally due to the naturalness problem with the CC. I just think they’re probably wrong in an interesting way.

  7. Massimo Sandal (@massimosandal)

    I am a lowly biologist, not a physicist, but I am utterly confused by point 1. «The universe should be ultimately described by a theory with no free dimensionless parameters at all.» But… but why? It seems a bizarre requirement to me. Also, “no free dimensionless parameters” seems to ask for an arbitrary parameter, that is, that the number of parameters = 0. I see no reason for it to be so. I would maybe expect such parameters are few (not numbering in the thousands, so to say), but I don’t see why they should be zero.

    Also, do you mean also things such as the number of visible spatial/temporal dimensions, for example, should arise from the theory without any parameter constraining that?

    1. 4gravitons Post author

      Regarding the number of dimensions, ideally, yes! String theory, for example, is mathematically forced to have ten dimensions (eleven for M theory), with the number of large dimensions (maybe what you mean by visible ones) determined by physical dynamics, not an adjustable parameter in the theory itself.

      In general, a better way to phrase assumption 1 might be that every fact should have an explanation. In that framing, any feature of your theory should be in some sense demanded by its logical structure, not something you were free to change.

      (A looser version would be that the base facts should be qualitative, not quantitative…”no action at a distance” not “the squeegle parameter is 9.7”. I think some proponents of naturalness believe something like the stricter version, while some believe the looser one.)

      And yeah, this is not self-evident, that’s why it’s an assumption! I do think it’s not a strange thing for a scientist to assume though. Think about it as a biologist: if you discover that every mammal has roughly 1/6 of its body mass in its heart, you want an explanation for that, you don’t just shrug and say that’s what the number is and that’s that. Of course, biology is different from physics in an important way which makes this complicated.
