Did the South Pole Telescope Just Rule Out Neutrino Masses? Not Exactly, Followed by My Speculations

Recently, the South Pole Telescope’s SPT-3G collaboration released new measurements of the cosmic microwave background, the leftover light from the formation of the first atoms. By measuring this light, cosmologists can infer the early universe’s “shape”: how it rippled on different scales as it expanded into the universe we know today. They compare this shape to mathematical models, equations and simulations which tie together everything we know about gravity and matter, and try to see what it implies for those models’ biggest unknowns.

Some of the most interesting such unknowns are neutrino masses. We know that neutrinos have mass because they transform as they move, from one type of neutrino to another. Those transformations let physicists measure the differences between neutrino masses (more precisely, between their squares), but by themselves, they don’t say what the actual masses are. All we know from particle physics, at this point, is a minimum: in order for the neutrinos to differ in mass enough to transform in the way they do, the total mass of the three flavors of neutrino must be at least 0.06 electron-Volts.

(Divided by the speed of light squared to get the right units, if you’re picky about that sort of thing. Physicists aren’t.)
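
If you’re wondering where that 0.06 comes from, here’s a rough back-of-the-envelope check. This is my own sketch, not something from the SPT paper, and it uses approximate mass-squared splittings from oscillation experiments; the exact best-fit values shift a little between global fits.

```python
import math

# Approximate mass-squared splittings from neutrino oscillation fits, in eV^2.
# (Best-fit values vary slightly between global analyses.)
dm21_sq = 7.4e-5   # smaller ("solar") splitting
dm31_sq = 2.5e-3   # larger ("atmospheric") splitting

# One allowed ordering (the "normal" one discussed below), with a massless lightest neutrino:
minimum_normal = 0.0 + math.sqrt(dm21_sq) + math.sqrt(dm31_sq)

# The opposite ordering (the "inverted" one), again with a massless lightest neutrino:
minimum_inverted = 0.0 + math.sqrt(dm31_sq) + math.sqrt(dm31_sq + dm21_sq)

print(f"minimum total mass, normal ordering:   {minimum_normal:.3f} eV")    # ~0.06 eV
print(f"minimum total mass, inverted ordering: {minimum_inverted:.3f} eV")  # ~0.10 eV
```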

Neutrinos also influenced the early universe, shaping it in a noticeably different way than heavier particles that bind together into atoms, like electrons and protons, did. That effect, observed in the cosmic microwave background and in the distribution of galaxies in the universe today, lets cosmologists calculate a maximum: if neutrinos are more massive than a certain threshold, they could not have the effects cosmologists observe.

Over time, as measurements improved, this maximum has decreased. Now, the South Pole Telescope has added more data to the pool, and combining it with prior measurements…well, I’ll quote their paper:

Ok, it’s probably pretty hard to understand what that means if you’re not a physicist. To explain:

  1. There are two different hypotheses for how neutrino masses work, called “hierarchies”. In the “normal” hierarchy, the neutrinos go in the same order as the charged particles they interact with through the weak nuclear force: electron neutrinos are lighter than muon neutrinos, which are lighter than tau neutrinos. In the “inverted” hierarchy, they come in the opposite order, and the electron neutrino is the heaviest. Both of these are consistent with the particle-physics data.
  2. Confidence is a statistics concept, one that could take a lot of unpacking to define correctly. To give a short but likely tortured-sounding explanation: when you rule out a hypothesis at a certain confidence level, you’re saying that, if that hypothesis were true, there would only be a chance of 100% minus that confidence level that you would see something as extreme as what you actually observed.

So, what are the folks at the South Pole Telescope saying? They’re saying that if you put all the evidence together (that’s roughly what that pile of acronyms at the beginning means), then the result would be incredibly uncharacteristic for either hypothesis for neutrino masses. If you had “normal” neutrino masses, you’d only see these cosmological observations 2.1% of the time. And if you had inverted neutrino masses instead, you’d only see these observations 0.01% of the time!
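
To translate those percentages into the “sigma” language physicists usually use, here’s a rough conversion. I’m assuming a two-sided Gaussian convention here, which is a choice; collaborations sometimes quote these numbers differently, so treat the sigmas as approximate.

```python
from scipy.stats import norm

def confidence_to_sigma(confidence):
    """Rough conversion from a confidence level to a Gaussian 'sigma',
    using a two-sided convention (conventions vary)."""
    p = 1.0 - confidence        # chance of data this unusual if the hypothesis were true
    return norm.isf(p / 2.0)    # standard deviations with that two-sided tail probability

print(confidence_to_sigma(0.979))      # "normal" hierarchy disfavored at roughly 2.3 sigma
print(confidence_to_sigma(0.9999))     # "inverted" hierarchy disfavored at roughly 3.9 sigma
print(confidence_to_sigma(0.9999994))  # the usual "discovery" threshold, roughly 5 sigma
```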

That sure makes it sound like neither hypothesis is correct, right? Does it actually mean that?

I mean, it could! But I don’t think so. Here I’ll start speculating on the possibilities, from least likely in my opinion to most likely. This is mostly my bias talking, and shouldn’t be taken too seriously.

5. Neutrinos are actually massless

This one is really unlikely. The evidence from particle physics isn’t just quantitative, but qualitative. I don’t know if it’s even possible to write down a model that reproduces the results of neutrino oscillation experiments without massive neutrinos, and if it is, it would be a very bizarre model that would almost certainly break something else. This is essentially a non-starter.

4. This is a sign of interesting new physics

I mean, it would be nice, right? I’m sure there are many proposals at this point, tweaks that add a few extra fields with some hard-to-notice effects to explain the inconsistency. I can’t rule this out, and unlike the last point there isn’t anything about it that seems impossible. But we’ve had a lot of odd observations, and so far this hasn’t happened.

3. Someone did statistics wrong

This happens more often. Any argument like this is a statistical argument, and while physicists keep getting better at statistics, they’re not professional statisticians. Sometimes there’s a genuine misunderstanding that goes into testing a model, and once it gets resolved the problem goes away.

2. The issue will go away with more data

The problem could also just…go away. 97.9% confidence sounds huge…but in physics, the standards are higher: you need 99.99994% to announce a new discovery. Physicists do a lot of experiments and observations, and sometimes, they see weird things! As the measurements get more precise, we may well see the disagreement melt away, with cosmology and particle physics both pointing to the same range for neutrino masses. It’s happened to many other measurements before.
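
To get a feel for how often “weird things” show up just from doing lots of measurements, here’s a toy calculation. It’s purely illustrative, and has nothing to do with how the collaborations actually combine their data: if each of several independent analyses could throw up a fluke at the 2.1% level, the chance that at least one of them does adds up quickly.

```python
# Toy illustration: chance of at least one fluke at the 2.1% level
# among N independent analyses, assuming the flukes are independent.
p_fluke = 0.021
for n in (1, 10, 50):
    p_at_least_one = 1.0 - (1.0 - p_fluke) ** n
    print(f"{n:3d} independent analyses: {p_at_least_one:.1%} chance of at least one fluke")
```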

1. We’re reaching the limits of our current approach to cosmology

This is probably not actually the most likely possibility, but it’s my list, what are you going to do?

There are basic assumptions behind how most theoretical physicists do cosmology. These assumptions are reasonably plausible, and seem to be needed to do anything at all. But they can be relaxed. Our universe looks like it’s homogeneous on the largest scales: the same density on average, in every direction you look. But the way that gets enforced in the mathematical models is very direct, and it may be that a different, more indirect, approach has more flexibility. I’ll probably be writing about this more in future, hopefully somewhere journalistic. But there are some very cool ideas floating around, gradually getting fleshed out more and more. It may be that the answer to many of the mysteries of cosmology right now is not new physics, but new mathematics: a new approach to modeling the universe.

9 thoughts on “Did the South Pole Telescope Just Rule Out Neutrino Masses? Not Exactly, Followed by My Speculations”

  1. Volodymyr Krasnoholovets

    There are several important points that physicists miss in their analysis of the Universe and neutrino mass.

    1) It is known that the Friedmann-Lemaître-Robertson-Walker metric is a mathematical tool used to describe the geometry of a homogeneous and isotropic universe. It is considered the basic instrument in cosmology for describing the universe’s expansion and evolution. However, nobody wishes to examine Friedmann’s paper of 1922; in that work he introduced a time dependence into the spatial part of the metric by hand. Of course, this resulted in new equations that were interesting from the mathematical point of view, but such a step did not carry any physical meaning. For example, one may put time into the velocity factor in a classical wave equation, v → v(t). Then we obtain a new equation; it still looks like a kind of wave equation, but now v^2 becomes a variable parameter and its solutions must be very different from those of the standard wave equation. Does such an approach to the wave equation have any physical meaning? Obviously there is no physical sense in such a step. Therefore, why should we anticipate any physical sense in the solutions of Friedmann’s equations?
    Regarding the Cosmic Microwave Background, the correct explanation of this phenomenon was given by Dr. Pierre-Marie Robitaille; he says that the CMB is an average temperature of the universe. Probably this is the correct explanation, because the universe has never known such a history as the Big Bang.
    2) Neutrino. First of all, researchers should understand what it is, where the neutrino comes from, and in what way it appears. Nevertheless, these important questions are outside the interest of researchers, which is really strange to me. All this was explained in my work entitled “Direct derivation the neutrino mass.” In that work the neutrino mass was calculated in the linear approximation (some other factors can of course reduce the calculated value a few times, but the correct calculation should be done in tight collaboration with experimentalists; however, they do not show interest in working together).

  2. Andrew Oh-Willeke

    A really important subset of #3 is an underestimation of systematic error, either by underestimating the magnitude of a known source of systematic error or by omitting a source of systematic error entirely. This is probably the single most innocuous way that you can get the statistical significance wrong. A very modest and understandable underestimation of systematic error in absolute terms can translate into a very big difference in statistical significance.

    Systematic uncertainties in astronomy data are in a whole different league than systematic uncertainties in Earth-based physics experiments. The standard measure of uncertainty in astronomy observations is the “dex”, the error expressed as a base-ten logarithm. A slight underestimation of the dex of the uncertainty in an astronomy observation can make a big difference in the statistical significance of a final conclusion.
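
    For scale: an uncertainty of x dex corresponds to a multiplicative factor of 10^x, so even small-looking dex values are sizeable factors. A quick illustration:

    ```python
    # An uncertainty of x dex corresponds to a multiplicative factor of 10**x.
    for dex in (0.05, 0.1, 0.2, 0.3):
        print(f"{dex:.2f} dex  ->  factor of about {10**dex:.2f}")
    ```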

    Another important subset of #3 is that there is ample empirical evidence to indicate that the distribution of errors in almost all disciplines of physics is not a Gaussian normal distribution and is instead a fat-tailed t-distribution (electroweak precision measurements sometimes go the other way, with thinner tails than there should be).

    The kludge that is the accepted way of dealing with this in physics is to insist on 2-sigma thresholds for statistical significance after considering look-elsewhere effects, and 5-sigma thresholds for a “discovery”, even though 2-sigma and 5-sigma events actually happen much more often than Gaussian statistics and probability say they should. This kludge is mostly “good enough for government work”, but in cases where the true probability distribution of the uncertainties is a t-distribution with very fat tails, it is still problematic. (Ignoring look-elsewhere effects is another common source of overstated statistical significance in many cases, although I don’t know that this is a big factor in this particular case.)
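
    To make the fat-tail point concrete, here is a small numerical comparison; the 3 degrees of freedom are just an illustrative choice, not a claim about any real dataset:

    ```python
    from scipy.stats import norm, t

    # Two-sided probability of a fluctuation at least this many error-bar units out,
    # under a Gaussian versus a fat-tailed Student-t with 3 degrees of freedom (illustrative).
    for k in (2, 5):
        gaussian = 2 * norm.sf(k)
        fat_tailed = 2 * t.sf(k, df=3)
        print(f"beyond {k} units: Gaussian {gaussian:.1e}, Student-t(3) {fat_tailed:.1e}")
    ```

    If the tails really were that fat, “5 sigma” fluctuations would be tens of thousands of times more common than the Gaussian arithmetic suggests.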

    A third important subset of #3 is that most of the work uses Bayesian statistics rather than frequentist statistics (which in the abstract is a legitimate and reasonable choice). This means that the end result is contingent upon an expected probability distribution, imposed before doing the statistical analysis, called a “prior” (ideally, a fairly uninformative prior). The whole point of Bayesian statistics, however, is to incorporate what you already know from other unrelated data from prior experiments and/or theory, like the 0.06 eV lower bound on the sum of the three neutrino masses from neutrino oscillation observations and the rule that a neutrino can’t have negative mass. Doing Bayesian statistics with a prior that doesn’t reflect all of the information you already know to a great level of certainty is doing it wrong.

    If you were really following the Bayesian creed to the fullest, you’d also factor in the weak (roughly two sigma) preference for a normal neutrino hierarchy over an inverted neutrino hierarchy from other data sources into your prior as well.
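
    As a toy sketch of how much the prior can matter (made-up numbers, nothing like the real cosmological likelihood): take a Gaussian likelihood for the summed neutrino mass that peaks at zero, and compare a flat prior that only requires the mass to be non-negative with one that also imposes the 0.06 eV oscillation bound.

    ```python
    import numpy as np

    # Toy sketch with made-up numbers (not the real cosmological likelihood):
    # a Gaussian likelihood for the summed neutrino mass, peaking at 0 eV with width 0.04 eV,
    # combined with two different priors on a uniform grid.
    m = np.linspace(0.0, 0.3, 3001)              # summed neutrino mass grid, in eV
    likelihood = np.exp(-0.5 * (m / 0.04) ** 2)  # invented central value and width

    priors = {
        "flat, m >= 0":       np.ones_like(m),
        "flat, m >= 0.06 eV": (m >= 0.06).astype(float),  # also impose the oscillation bound
    }

    for name, prior in priors.items():
        posterior = likelihood * prior
        posterior /= posterior.sum()             # normalize on the uniform grid
        print(f"{name}: posterior mean ~ {np.sum(m * posterior):.3f} eV")
    ```

    With the bound imposed, the posterior piles up just above 0.06 eV instead of hugging zero, which is exactly the kind of prior dependence described above.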

    Fourth, while it is not exactly within the scope of #3, because it is about data quality rather than statistical analysis, it is fairly well known that there is at least one serious outlier in the DESI dataset at one particular, fairly low redshift, and possibly a couple of others, that are probably outright astronomy observation or data coding mistakes, which confound any conclusions that rely upon the data containing the outlier(s). Unless the people doing the analysis remove the outlier(s), the conclusions reached have a pall of doubt over them. See https://arxiv.org/abs/2412.01740

    Finally, #1 is an issue that I think you are unduly downplaying. The LambdaCDM model itself is in deep trouble, with multiple very serious tensions with observations at both the cosmological scale and the galaxy and galaxy-cluster level. The impossible early galaxy problem has recently been confirmed with the JWST. DESI itself strongly disfavors the simple constant cosmological constant hypothesis and contributes to the larger Hubble tension problem. As another example, the dark matter phenomena observed in galaxies and galaxy clusters definitely don’t match the simple, single-type, sterile dark matter hypothesis embedded in LambdaCDM; the dark matter hypothesis may be correct, but if there are dark matter particles, they differ materially from what the basic LambdaCDM model assumes.

    As another example, the assumption that “our universe looks like it’s homogeneous on the largest scales: the same density on average, in every direction you look,” turns out to be pretty definitively not true at the largest possible scales based upon the latest astronomy observations, even though it comes somewhat close, and there are lots of things that could bias those observations (such as the possibility that a local deviation from homogeneity is biasing our observations).

    There are about 30 different independent problems with matching LambdaCDM to astronomy observations and really the only reason that it remains the paradigm is that nobody’s been able to gather a consensus around a single alternative.

    None of this is to throw serious shade at the creators of the LambdaCDM model. It does pretty well at describing the universe at a cosmology scale with six parameters in a way that is fairly consistent with the data at the time it was created and really for quite a while after that (perhaps until ten years ago or so).

    But even its creators knew perfectly well that they were ignoring a lot of potentially relevant, well established physics to come up with this admittedly approximate model. (For example, LambdaCDM basically ignores astrophysical electromagnetic fields, which we are increasingly learning are important in certain processes that influence how matter is distributed in the universe and when stars develop.) There are without a doubt more than six parameters that are necessary to describe the universe at high precision (some of which are considered in extensions of the model, like neutrino mass), and since they didn’t have high precision observations (especially at high redshift), they weren’t too worried about ignoring a lot of smaller magnitude issues in order to devise a tool that they could use in the meantime.

    Neutrino mass isn’t even part of the base LambdaCDM model. It’s an optional extension of it, and it makes a variety of assumptions (like the assumption that astrophysical neutrinos are present in equal proportions of the three neutrino flavors, which the latest IceCube data disfavors) that aren’t terribly rigorous even within a framework that assumes no new gravitational physics and no new particles and forces other than a single almost sterile dark matter particle. They are rough estimates.

    Now, it turns out that cosmological estimates of the maximum sum of the neutrino masses are pretty robust to tweaking those assumptions. But the lack of great rigor and experimental support for the details of the massive-neutrino extension of the LambdaCDM model suggests that there is quite a bit of theory/model-based uncertainty in the estimated maximum neutrino mass that isn’t being reflected in the systematic error estimates for that number. This makes an apparent contradiction between astronomy-based upper bounds on the neutrino masses and lower bounds based upon neutrino oscillation data a lot less significant statistically than claimed.

    1. 4gravitons (post author)

      I’m curious what you think about Daniel Green’s response to my post here. Like you, he thinks that if the issue is an error then it is best viewed as systematic, not statistical, but he seems to think that, when analyzed in detail, this favors option 4 over options 2 and 3.

      I do see some merit in ignoring constraints from other experiments when formulating Bayesian priors. Bayesian probability isn’t just for modeling your own degree of belief, it’s for modeling the degree of belief of an arbitrary agent. And for a given observation or given type of observation, it can make sense to imagine an agent restricted to that form of evidence, especially if the other sources of evidence are a bit outside of one’s expertise. Ultimately you want to draw conclusions based on something more like a “global fit”, but if you’re hunting down systematic errors or trying to see where new theoretical ideas could be useful, it makes sense to chop up the evidence you have access to and just base your prior on part of it.

      1. Andrew Oh-Willeke

        “I’m curious what you think about Daniel Green’s response to my post here.”

        I’m not sure it’s right to characterize Green as preferring a new-physics explanation, as opposed to the authors of the paper he cites (which he cites for its list of possible explanations). This said, to paraphrase Asimov, “new physics is the last resort of the incompetent (and those who want citations and clicks).”

  3. Vincent

    The way the article is built up, I expected you were going to say that the cosmologists now lowered the maximum to the point that it is lower than the minimum given by particle physics. Is this indeed what happened (with the given amount of confidence) or not at all?

      1. JollyJoker

        This is a case where a pic showing the min and max before and after (I guess both normal and inverted) would make matters absolutely clear. You could also see at a glance how large the differences and mismatch are.

    1. Andrew Oh-Willeke

      “cosmologists now lowered the maximum to the point that it is lower than the minimum given by particle physics.”

      Some cosmologists given some very specific model assumptions. Other cosmologists reject some of those assumptions.
