# Digging for Buried Insight

The scientific method, as we usually learn it, starts with a hypothesis. The scientist begins with a guess, and asks a question with a clear answer: true, or false? That guess lets them design an experiment, observe the consequences, and improve our knowledge of the world.

But where did the scientist get the hypothesis in the first place? Often, through some form of exploratory research.

Exploratory research is research done, not to answer a precise question, but to find interesting questions to ask. Each field has its own approach to exploration. A psychologist might start with interviews, asking broad questions to find narrower questions for a future survey. An ecologist might film an animal, looking for changes in its behavior. A chemist might measure many properties of a new material, seeing if any stand out. Each approach is like digging for treasure: you're not sure exactly what you will find.

Mathematicians and theoretical physicists don’t do experiments, but we still need hypotheses. We need an idea of what we plan to prove, or what kind of theory we want to build: like other scientists, we want to ask a question with a clear, true/false answer. And to find those questions, we still do exploratory research.

What does exploratory research look like, in the theoretical world? Often, it begins with examples and calculations. We can start with a known method, or a guess at a new one, a recipe for doing some specific kind of calculation. Recipe in hand, we proceed to do the same kind of calculation for a few different examples, covering different sorts of situation. Along the way, we notice patterns: maybe the same steps happen over and over, or the result always has some feature.

We can then ask, do those same steps always happen? Does the result really always have that feature? We have our guess, our hypothesis, and our attempt to prove it is much like an experiment. If we find a proof, our hypothesis was true. On the other hand, we might not be able to find a proof. Instead, exploring, we might find a counterexample – one where the steps don’t occur, the feature doesn’t show up. That’s one way to learn that our hypothesis was false.

This kind of exploration is essential to discovery. As scientists, we all have to eventually ask clear yes/no questions, to submit our beliefs to clear tests. But we can’t start with those questions. We have to dig around first, to observe the world without a clear plan, to get to a point where we have a good question to ask.

# Who Is, and Isn’t, Counting Angels on a Pinhead

How many angels can dance on the head of a pin?

It’s a question famous for its sheer pointlessness. While probably no-one ever had that exact debate, “how many angels fit on a pin” has become a metaphor, first for a host of old theology debates that went nowhere, and later for any academic study that seems like a waste of time. Occasionally, physicists get accused of doing this: typically string theorists, but also people who debate interpretations of quantum mechanics.

Are those accusations fair? Sometimes yes, sometimes no. In order to tell the difference, we should think about what’s wrong, exactly, with counting angels on the head of a pin.

One obvious answer is that knowing the number of angels that fit on a needle’s point is useless. Wikipedia suggests that was the origin of the metaphor in the first place, a pun on “needle’s point” and “needless point”. But this answer is a little too simple, because this would still be a useful debate if angels were real and we could interact with them. “How many angels fit on the head of a pin” is really a question about whether angels take up space, whether two angels can be at the same place at the same time. Asking that question about particles led physicists to bosons and fermions, which among other things led us to invent the laser. If angelology worked, perhaps we would have angel lasers as well.

“If angelology worked” is key here, though. Angelology didn’t work: it never led to angel-based technology. And while Medieval people couldn’t have known that for certain, maybe they could have guessed. When people accuse academics of “counting angels on the head of a pin”, they’re saying those academics should be able to guess that their work is destined for uselessness.

How do you guess something like that?

Well, one problem with counting angels is that nobody doing the counting had ever seen an angel. Counting angels on the head of a pin implies debating something you can’t test or observe. That can steer you off-course pretty easily, into conclusions that are either useless or just plain wrong.

This can’t be the whole of the problem though, because of mathematics. We rarely accuse mathematicians of counting angels on the head of a pin, but the whole point of math is to proceed by pure logic, without an experiment in sight. Mathematical conclusions can sometimes be useless (though we can never be sure, some ideas are just ahead of their time), but we don’t expect them to be wrong.

The key difference is that mathematics has clear rules. When two mathematicians disagree, they can look at the details of their arguments, make sure every definition is as clear as possible, and discover which one made a mistake. Working this way, what they build is reliable. Even if it isn’t useful yet, the result is still true, and so may well be useful later.

In contrast, when you imagine Medieval monks debating angels, you probably don’t imagine them with clear rules. They might quote contradictory bible passages, argue everyday meanings of words, and win based more on who was poetic and authoritative than on who had the stronger argument. Picturing a debate over how many angels can fit on the head of a pin, it seems more like Calvinball than like mathematics.

This then, is the heart of the accusation. Saying someone is just debating how many angels can dance on a pin isn’t merely saying they’re debating the invisible. It’s saying they’re debating in a way that won’t go anywhere, a debate without solid basis or reliable conclusions. It’s saying, not just that the debate is useless now, but that it will likely always be useless.

As an outsider, you can’t just dismiss a field because it can’t do experiments. What you can, and should, do is dismiss a field that can’t produce reliable knowledge. This can be hard to judge, but a key sign is to look for these kinds of Calvinball-style debates. Do people in the field seem to argue the same things with each other, over and over? Or do they make progress and open up new questions? Do the people talking seem to be just the famous ones? Or are there cases of young and unknown researchers who happen upon something important enough to make an impact? Do people just list prior work in order to state their counter-arguments? Or do they build on it, finding consequences of others’ trusted conclusions?

A few corners of string theory do have this Calvinball feel, as do a few of the debates about the fundamentals of quantum mechanics. But if you look past the headlines and blogs, most of each of these fields seems more reliable. Rather than interminable back-and-forth about angels and pinheads, these fields are quietly accumulating results that, one way or another, will give people something to build on.

A reader pointed me to Stephen Wolfram’s one-year update of his proposal for a unified theory of physics. I was pretty squeamish about it one year ago, and now I’m even less interested in wading in to the topic. But I thought it would be worth saying something, and rather than say something specific, I realized I could say something general. I thought I’d talk a bit about how we judge good and bad research in theoretical physics.

In science, there are two things we want out of a new result: we want it to be true, and we want it to be surprising. The first condition should be obvious, but the second is also important. There’s no reason to do an experiment or calculation if it will just tell us something we already know. We do science in the hope of learning something new, and that means that the best results are the ones we didn’t expect.

(What about replications? We’ll get there.)

If you’re judging an experiment, you can measure both of these things with statistics. Statistics lets you estimate how likely an experiment’s conclusion is to be true: was there a large enough sample? Strong enough evidence? It also lets you judge how surprising the experiment is, by estimating how likely it would be to happen given what was known beforehand. Did existing theories and earlier experiments make the result seem likely, or unlikely? While you might not have considered replications surprising, from this perspective they can be: if a prior experiment seems unreliable, successfully replicating it can itself be a surprising result.

If instead you’re judging a theoretical result, these measures get more subtle. There aren’t always good statistical tools to test them. Nonetheless, you don’t have to rely on vague intuitions either. You can be fairly precise, both about how true a result is and how surprising it is.

We get our results in theoretical physics through mathematical methods. Sometimes, this is an actual mathematical proof: guaranteed to be true, no statistics needed. Sometimes, it resembles a proof, but falls short: vague definitions and unstated assumptions mar the argument, making it less likely to be true. Sometimes, the result uses an approximation. In those cases we do get to use some statistics, estimating how good the approximation may be. Finally, a result can’t be true if it contradicts something we already know. This could be a logical contradiction in the result itself, but if the result is meant to describe reality (note: not always the case), it might contradict the results of a prior experiment.

What makes a theoretical result surprising? And how precise can we be about that surprise?

Theoretical results can be surprising in the light of earlier theory. Sometimes, this gets made precise by a no-go theorem, a proof that some kind of theoretical result is impossible to obtain. If a result finds a loophole in a no-go theorem, that can be quite surprising. Other times, a result is surprising because it’s something no-one else was able to do. To be precise about that kind of surprise, you need to show that the result is something others wanted to do, but couldn’t. Maybe someone else made a conjecture, and only you were able to prove it. Maybe others did approximate calculations, and now you can do them more precisely. Maybe a question was controversial, with different people arguing for different sides, and you have a more conclusive argument. This is one of the better reasons to include a long list of references in a paper: not to pad your friends’ citation counts, but to show that your accomplishment is surprising, that others might have wanted to achieve it but had to settle for something lesser.

In general, this means that showing a theoretical result is good (not merely true, but surprising and new) links you up to the rest of the theoretical community. You can put in all the work you like on a theory of everything, and make it as rigorous as possible, but if all you did was reproduce a sub-case of someone else’s theory then you haven’t accomplished all that much. If you put your work in context, and compare and contrast it to what others have done before, then we can start getting precise about how much we should be surprised, and get an idea of what your result is really worth.

# Reality as an Algebra of Observables

Listen to a physicist talk about quantum mechanics, and you’ll hear the word “observable”. Observables are, intuitively enough, things that can be observed. They’re properties that, in principle, one could measure in an experiment, like the position of a particle or its momentum. They’re the kinds of things linked by uncertainty principles, where the better you know one, the worse you know the other.

Some physicists get frustrated by this focus on measurements alone. They think we ought to treat quantum mechanics, not like a black box that produces results, but as information about some underlying reality. Instead of just observables, they want us to look for “beables”: not just things that can be observed, but things that something can be. From their perspective, the way other physicists focus on observables feels like giving up, like those physicists are abandoning their sacred duty to understand the world. Others, like the Quantum Bayesians or QBists, disagree, arguing that quantum mechanics really is, and ought to be, a theory of how individuals get evidence about the world.

I’m not really going to weigh in on that debate; I still don’t feel like I know enough to even write a decent summary. But I do think that one of the instincts on the “beables” side is wrong. If we focus on observables in quantum mechanics, I don’t think we’re doing anything all that unusual. Even in other parts of physics, we can think about reality purely in terms of observations. Doing so isn’t a dereliction of duty: often, it’s the most useful way to understand the world.

When we try to comprehend the world, we always start alone. From our time in the womb, we have only our senses and emotions to go on. With a combination of instinct and inference we start assembling a consistent picture of reality. Philosophers called phenomenologists (not to be confused with the physicists called phenomenologists) study this process in detail, trying to characterize how different things present themselves to an individual consciousness.

For my point here, these details don’t matter so much. That’s because in practice, we aren’t alone in understanding the world. Based on what others say about the world, we conclude they perceive much like we do, and we learn by their observations just as we learn by our own. We can make things abstract: instead of the specifics of how individuals perceive, we think about groups of scientists making measurements. At the end of this train lie observables: things that we as a community could in principle learn, and share with each other, ignoring the details of how exactly we measure them.

If each of these observables was unrelated, just scattered points of data, then we couldn’t learn much. Luckily, they are related. In quantum mechanics, some of these relationships are the uncertainty principles I mentioned earlier. Others relate measurements at different places, or at different times. The fancy way to refer to all these relationships is as an algebra: loosely, it’s something you can “do algebra with”, like you did with numbers and variables in high school. When physicists and mathematicians want to do quantum mechanics or quantum field theory seriously, they often talk about an “algebra of observables”, a formal way of thinking about all of these relationships.
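As a toy illustration (my own sketch, not something from the post), the spin of an electron gives one of the simplest algebras of observables: measurements of spin along the x, y, and z axes, represented by the Pauli matrices. Multiplying and subtracting them exposes the relationships that tie the observables together; a nonzero commutator is the algebraic face of an uncertainty principle.

```python
# Toy "algebra of observables": spin measurements along three axes,
# written as 2x2 complex matrices (the Pauli matrices).

def matmul(a, b):
    """Multiply two 2x2 complex matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def commutator(a, b):
    """[a, b] = ab - ba: zero only when the two observables are compatible."""
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

sigma_x = [[0, 1], [1, 0]]
sigma_y = [[0, -1j], [1j, 0]]
sigma_z = [[1, 0], [0, -1]]

# [sigma_x, sigma_y] = 2i * sigma_z: spin along x and spin along y
# cannot both be sharply known, and the algebra encodes that fact.
result = commutator(sigma_x, sigma_y)
expected = [[2j * sigma_z[i][j] for j in range(2)] for i in range(2)]
print(result == expected)  # True
```

The point of the formalism is exactly this: the physics lives in the relationships (products, sums, commutators), not in any one measurement on its own.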

Focusing on those two things, observables and how they are related, isn’t just useful in the quantum world. It’s an important way to think in other areas of physics too. If you’ve heard people talk about relativity, the focus on measurement screams out, in thought experiments full of abstract clocks and abstract yardsticks. Without this discipline you find paradoxes, paradoxes that resolve only when you carefully track what each person can observe. More recently, physicists in my field have had success computing the chance particles collide by focusing on the end result, the actual measurements people can make, ignoring what might happen in between to cause that measurement. We can then break measurements down into simpler measurements, or use the structure of simpler measurements to guess more complicated ones. While we typically have done this in quantum theories, that’s not really a limitation: the same techniques make sense for problems in classical physics, like computing the gravitational waves emitted by colliding black holes.

With this in mind, we really can think of reality in those terms: not as a set of beable objects, but as a set of observable facts, linked together in an algebra of observables. Paring things down to what we can know in this way is more honest, and it’s also more powerful and useful. Far from a betrayal of physics, it’s the best advantage we physicists have in our quest to understand the world.

# A Tale of Two Donuts

I’ve got a new paper up this week, with Hjalte Frellesvig, Cristian Vergu, and Matthias Volk, about the elliptic integrals that show up in Feynman diagrams.

You can think of elliptic integrals as integrals over a torus, a curve shaped like the outer crust of a donut.

Integrals like these are showing up more and more in our field, the subject of bigger and bigger conferences. By now, we think we have a pretty good idea of how to handle them, but there are still some outstanding mysteries to solve.

One such mystery came up in a paper in 2017, by Luise Adams and Stefan Weinzierl. They were working with one of the favorite examples of this community, the so-called sunrise diagram (sunrise being a good time to eat donuts). And they noticed something surprising: if they looked at the sunrise diagram in different ways, it was described by different donuts.

What do I mean, different donuts?

The integrals we know best in this field aren’t integrals on a torus, but rather integrals on a sphere. In some sense, all spheres are the same: you can make them bigger or smaller, but they don’t have different shapes, they’re all “sphere-shaped”. In contrast, integrals on a torus are trickier, because toruses can have different shapes. Think about different donuts: some might have a thin ring, others a thicker one, even if the overall donut is the same size. You can’t just scale up one donut and get the other.

My colleague, Cristian Vergu, was annoyed by this. He’s the kind of person who trusts mathematics like an old friend, one who would never lead him astray. He thought that there must be one answer, one correct donut, one natural way to represent the sunrise diagram mathematically. I was skeptical; I don’t trust mathematics nearly as much as Cristian does. To sort it out, we brought in Hjalte Frellesvig and Matthias Volk, and started trying to write the sunrise diagram every way we possibly could. (Along the way, we threw in another “donut diagram”, the double-box, just to see what would happen.)

Rather than getting a zoo of different donuts, we got a surprise: we kept seeing the same two. And in the end, we stumbled upon the answer Cristian was hoping for: one of these two is, in a meaningful sense, the “correct donut”.

What was wrong with the other donut? It turns out that when the original two donuts were found, the derivation of one of them involved a move that is a bit risky mathematically: combining square roots.

For readers who don’t know what I mean, or why this is risky, let me give a simple example. Everyone else can skip to after the torus gif.

Suppose I am solving a problem, and I find a product of two square roots:

$\sqrt{x}\sqrt{x}$

I could try combining them under the same square root sign, like so:

$\sqrt{x^2}$

That works, if $x$ is positive. But now suppose $x=-1$. Plug negative one into the first expression, and you get,

$\sqrt{-1}\sqrt{-1}=i\times i=-1$

while in the second,

$\sqrt{(-1)^2}=\sqrt{1}=1$
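If you'd like to see the mismatch numerically, here's a quick check in Python (my own addition, not part of the paper), using the standard-library `cmath` module, which follows the usual convention that the square root of minus one is $i$:

```python
import cmath

x = -1

# Combining roots assumes sqrt(a) * sqrt(b) == sqrt(a * b),
# which holds for positive reals but fails for negative x:
separate = cmath.sqrt(x) * cmath.sqrt(x)  # i * i = -1
combined = cmath.sqrt(x * x)              # sqrt(1) = +1

print(separate.real, combined.real)  # -1.0 1.0
```

The two expressions silently part ways the moment you cross onto the negative real axis, which is exactly the kind of thing that can swap one donut for another.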

In this case, it wasn’t as obvious that combining roots would change the donut. It might have been perfectly safe. It took some work to show that indeed, this was the root of the problem. If the roots are instead combined more carefully, then one of the donuts goes away, leaving only the one, true donut.

I’m interested in seeing where this goes, how many different donuts we have to understand and how they might be related. But I’ve also been writing about donuts for the last hour or so, so I’m getting hungry. See you next week!

# Physical Intuition From Physics Experience

One of the most mysterious powers physicists claim is physical intuition. Let the mathematicians have their rigorous proofs and careful calculations. We just need to ask ourselves, “Does this make sense physically?”

It’s tempting to chalk this up to bluster, or physicist arrogance. Sometimes, though, a physicist manages to figure out something that stumps the mathematicians. Edward Witten’s work on knot theory is a classic example, where he used ideas from physics, not rigorous proof, to win one of mathematics’ highest honors.

So what is physical intuition? And what is its relationship to proof?

Let me walk you through an example. I recently saw a talk by someone in my field who might be a master of physical intuition. He was trying to learn about what we call Effective Field Theories, theories that are “effectively” true at some energy but don’t include the details of higher-energy particles. He calculated that there are limits to the effect these higher-energy particles can have, just based on simple cause and effect. To explain the calculation to us, he gave a physical example, of coupled oscillators.

Oscillators are familiar problems for first-year physics students. Objects that go back and forth, like springs and pendulums, tend to obey similar equations. Link two of them together (couple them), and the equations get more complicated, work for a second-year student instead of a first-year one. Such a student will notice that coupled oscillators “repel” each other: their frequencies get farther apart than they would be if they weren’t coupled.
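Here's a small sketch of that repulsion (my own toy model, not the speaker's calculation): take the squared frequencies of the normal modes to be the eigenvalues of a symmetric two-by-two matrix, with the coupling sitting on the off-diagonal.

```python
import math

def normal_mode_freqs(w1, w2, k):
    """Normal-mode frequencies of two coupled oscillators.

    Toy model (an illustrative choice): the squared frequencies are the
    eigenvalues of the symmetric matrix [[w1^2, k], [k, w2^2]], assuming
    the coupling k is weak enough that both eigenvalues stay positive.
    """
    a, b = w1 ** 2, w2 ** 2
    mean = (a + b) / 2
    split = math.sqrt(((a - b) / 2) ** 2 + k ** 2)
    return math.sqrt(mean - split), math.sqrt(mean + split)

w1, w2 = 1.0, 1.2  # the uncoupled frequencies
low, high = normal_mode_freqs(w1, w2, k=0.3)

# The coupled frequencies straddle the originals: "repulsion".
print(low < w1 < w2 < high)  # True
```

The square root is always at least as big as the bare splitting, so however weak the coupling, the normal-mode frequencies sit outside the originals, never between them.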

Our seminar speaker wanted us to revisit those second-year-student days, in order to understand how different particles behave in Effective Field Theory. Just as the frequencies of the oscillators repel each other, the energies of particles repel each other: the unknown high-energy particles could only push the energies of the lighter particles we can detect lower, not higher.

This is an example of physical intuition. Examine it, and you can learn a few things about how physical intuition works.

First, physical intuition comes from experience. Using physical intuition wasn’t just a matter of imagining the particles and trying to see what “makes sense”. Instead, it required thinking about similar problems from our experience as physicists: problems that don’t just seem similar on the surface, but are mathematically similar.

Second, physical intuition can be risky. If the problem is too different, the intuition can lead you astray. The mathematics of coupled oscillators and Effective Field Theories was similar enough for this argument to work, but if it had turned out to be different in an important way then the intuition would have backfired, making it harder to find the answer and harder to keep track once it was found.

Physical intuition may seem mysterious. But deep down, it’s just physicists using our experience, comparing similar problems to help keep track of what we need to know. I’m sure chemists, biologists, and mathematicians all have similar stories to tell.

# Newtonmas in Uncertain Times

Three hundred and seventy-eight years ago today (depending on which calendar you use), Isaac Newton was born. For a scientist, that’s a pretty good reason to celebrate.

Last month, our local nest of science historians at the Niels Bohr Archive hosted a Zoom talk by Jed Z. Buchwald, a Newton scholar at Caltech. Buchwald had a story to tell about experimental uncertainty, one where Newton had an important role.

If you’ve ever had a lab course in school, you know experiments never quite go like they’re supposed to. Set a room of twenty students to find Newton’s constant, and you’ll get twenty different answers. Whether you’re reading a ruler or clicking a stopwatch, you can never measure anything with perfect accuracy. Each time you measure, you introduce a little random error.

Textbooks’ worth of statistical know-how has cropped up over the centuries to compensate for this error and get closer to the truth. The simplest trick, though, is just to average over multiple experiments. It’s so obvious a choice, taking a thousand little errors and smoothing them out, that you might think people have been averaging in this way throughout history.

They haven’t though. As far as Buchwald had found, the first person to average experiments in this way was Isaac Newton.

What did people do before Newton?

Well, what might you do, if you didn’t have a concept of random error? You can still see that each time you measure you get a different result. But you would blame yourself: if you were more careful with the ruler, quicker with the stopwatch, you’d get it right. So you practice, you do the experiment many times, just as you would if you were averaging. But instead of averaging, you just take one result, the one you feel you did carefully enough to count.

Before Newton, this was almost always what scientists did. If you were an astronomer mapping the stars, the positions you published would be the last of a long line of measurements, not an average of the rest. Some other tricks existed. Tycho Brahe, for example, folded numbers together pair by pair, averaging the first two and then averaging that average with the next one, getting a final result weighted toward the later measurements. But, according to Buchwald, Newton was the first to just add everything together and divide.
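To see the difference between the two recipes, here's a quick sketch (mine, with made-up measurements) of Brahe-style pairwise folding against the plain average Newton used:

```python
def plain_average(xs):
    """Newton's move: add everything together and divide,
    so every measurement counts equally."""
    return sum(xs) / len(xs)

def brahe_fold(xs):
    """Fold pair by pair: average the first two, then average that
    result with the next measurement, and so on. Each earlier
    measurement's weight is halved at every step."""
    result = xs[0]
    for x in xs[1:]:
        result = (result + x) / 2
    return result

# Made-up measurements scattered around some true value:
measurements = [10.0, 10.4, 9.8, 10.6]

print(plain_average(measurements))  # all weights equal
print(brahe_fold(measurements))     # weighted toward the later values
```

With four measurements, Brahe's folding gives the last one half the total weight, while the first two get an eighth each; the plain average treats every little error the same, which is what lets them cancel.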

Even Newton didn’t yet know why this worked. It would take later research, theorems of statistics, to establish the full justification. It seems Newton and his later contemporaries had a vague physics analogy in mind, finding a sort of “center of mass” of different experiments. This doesn’t make much sense – but it worked, well enough for physics as we know it to begin.

So this Newtonmas, let’s thank the scientists of the past. Working piece by piece, concept by concept, they gave us the tools to navigate our uncertain times.

# Congratulations to Roger Penrose, Reinhard Genzel, and Andrea Ghez!

The 2020 Physics Nobel Prize was announced last week, awarded to Roger Penrose for his theorems about black holes and Reinhard Genzel and Andrea Ghez for discovering the black hole at the center of our galaxy.

Of the three, I’m most familiar with Penrose’s work. People had studied black holes before Penrose, but only the simplest of situations, like an imaginary perfectly spherical star. Some wondered whether black holes in nature were limited in this way, if they could only exist under perfectly balanced conditions. Penrose showed that wasn’t true: he proved mathematically that black holes not only can form, they must form, in very general situations. He’s also worked on a wide variety of other things. He came up with “twistor space”, an idea intended for a new theory of quantum gravity that ended up as a useful tool for “amplitudeologists” like me to study particle physics. He discovered sets of tiles, famously a set of just two, such that if you tiled a floor with them the pattern would never repeat. And he has some controversial hypotheses about quantum gravity and consciousness.

I’m less familiar with Genzel and Ghez, but by now everyone should be familiar with what they found. Genzel and Ghez led two teams that peered into the center of our galaxy. By carefully measuring the way stars moved deep in the core, they figured out something we now teach children: that our beloved Milky Way has a dark and chewy center, an enormous black hole around which everything else revolves. These appear to be a common feature of galaxies, and many other galaxies have since been shown to host central black holes as well.

Like last year, I find it a bit odd that the Nobel committee decided to lump these two prizes together. Both discoveries concern black holes, so they’re more related than last year’s laureates, but the contexts are quite different: it’s not as if Penrose predicted the black hole in the center of our galaxy. Usually the Nobel committee avoids mathematical work like Penrose’s, except when it’s tied to a particular experimental discovery. It doesn’t look like anyone has gotten a Nobel prize for discovering that black holes exist, so maybe that’s the intent of this one…but Genzel and Ghez were not the first people to find evidence of a black hole. So overall I’m confused. I’d say that Penrose deserved a Nobel Prize, and that Genzel and Ghez did as well, but I’m not sure why they needed to split one with each other.

# At “Antidifferentiation and the Calculation of Feynman Amplitudes”

I was at a conference this week, called Antidifferentiation and the Calculation of Feynman Amplitudes. The conference is a hybrid kind of affair: I attended via Zoom, but there were seven or so people actually there in the room (the room in question being at DESY Zeuthen, near Berlin).

The road to this conference was a bit of a roller-coaster. It was originally scheduled for early March. When the organizers told us they were postponing it, the decision seemed at the time a little overcautious…until the world proved me, and all of us, wrong. They rescheduled for October, and as more European countries got their infection rates down it looked like the conference could actually happen. We booked rooms at the DESY guest house, until it turned out they needed the space to keep the DESY staff socially distanced, and we quickly switched to booking at a nearby hotel.

Then Europe’s second wave hit. Cases in Denmark started to rise, so Germany imposed a quarantine on entry from Copenhagen and I switched to remote participation. Most of the rest of the participants did too, even several in Germany. For the few still there in person they have a variety of measures to stop infection, from fixed seats in the conference room to gloves for the coffee machine.

The content has been interesting. It’s an eclectic mix of review talks and talks on recent research, all focused on different ways to integrate (or, as one of the organizers emphasized, antidifferentiate) functions in quantum field theory. I’ve learned about the history of the field, and gotten a better feeling for the bottlenecks in some LHC-relevant calculations.

This week was also the announcement of the Physics Nobel Prize. I’ll do my traditional post on it next week, but for now, congratulations to Penrose, Genzel, and Ghez!

# Which Things Exist in Quantum Field Theory

If you ever think metaphysics is easy, learn a little quantum field theory.

Someone asked me recently about virtual particles. When talking to the public, physicists sometimes explain the behavior of quantum fields with what they call “virtual particles”. They’ll describe forces coming from virtual particles going back and forth, or a bubbling sea of virtual particles and anti-particles popping out of empty space.

The thing is, this is a metaphor. What’s more, it’s a metaphor for an approximation. As physicists, when we draw diagrams with more and more virtual particles, we’re trying to use something we know how to calculate with (particles) to understand something tougher to handle (interacting quantum fields). Virtual particles, at least as you’re probably picturing them, don’t really exist.

I don’t really blame physicists for talking like that, though. Virtual particles are a metaphor, sure, a way to talk about a particular calculation. But so is basically anything we can say about quantum field theory. In quantum field theory, it’s pretty tough to say which things “really exist”.

You might have heard that there are three types of neutrinos, corresponding to the three “generations” of the Standard Model: electron-neutrinos, muon-neutrinos, and tau-neutrinos. Each is produced in particular kinds of reactions: electron-neutrinos, for example, get produced by beta-plus decay, when a proton turns into a neutron, an anti-electron, and an electron-neutrino.

Leave these neutrinos alone though, and something strange happens. Detect what you expect to be an electron-neutrino, and it might have changed into a muon-neutrino or a tau-neutrino. The neutrino oscillated.

Why does this happen?

One way to explain it is to say that electron-neutrinos, muon-neutrinos, and tau-neutrinos don’t “really exist”. Instead, what really exists are neutrinos with specific masses. These don’t have catchy names, so let’s just call them neutrino-one, neutrino-two, and neutrino-three. What we think of as electron-neutrinos, muon-neutrinos, and tau-neutrinos are each some mix (a quantum superposition) of these “really existing” neutrinos, specifically the mixes that interact nicely with electrons, muons, and tau leptons respectively. When you let them travel, it’s these neutrinos that do the traveling, and due to quantum effects that I’m not explaining here you end up with a different mix than you started with.
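Those unexplained quantum effects can at least be parameterized. In a two-flavor toy model (a standard simplification; the numbers below are made up for illustration, not a fit to real data), a single mixing angle and a single squared-mass difference fix the chance of catching the “wrong” flavor:

```python
import math

def oscillation_probability(theta, dm2_ev2, length_km, energy_gev):
    """Two-flavor toy model: the probability that a neutrino born as
    one flavor is detected as the other after traveling some distance.

    theta is the mixing angle between flavor states and mass states;
    dm2_ev2 is the squared-mass difference in eV^2. This is the
    standard two-flavor formula, with the 1.27 packing up the unit
    conversions for km and GeV.
    """
    phase = 1.27 * dm2_ev2 * length_km / energy_gev
    return math.sin(2 * theta) ** 2 * math.sin(phase) ** 2

# Illustrative inputs only:
p = oscillation_probability(theta=0.6, dm2_ev2=2.5e-3,
                            length_km=295, energy_gev=0.6)
print(0 <= p <= 1)  # True: it's a probability
```

Notice that if the two masses were equal (zero squared-mass difference), the phase would vanish and nothing would oscillate: the mixing only shows up because the “really existing” neutrinos travel differently.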

This probably seems like a perfectly reasonable explanation. But it shouldn’t. Because if you take one of these mass-neutrinos, and interact with an electron, or a muon, or a tau, then suddenly it behaves like a mix of the old electron-neutrinos, muon-neutrinos, and tau-neutrinos.

That’s because both explanations are trying to chop the world up in a way that can’t be done consistently. There aren’t electron-neutrinos, muon-neutrinos, and tau-neutrinos, and there aren’t neutrino-ones, neutrino-twos, and neutrino-threes. There’s a mathematical object (a vector space) that can look like either.

Whether you’re comfortable with that depends on whether you think of mathematical objects as “things that exist”. If you aren’t, you’re going to have trouble thinking about the quantum world. Maybe you want to take a step back, and say that at least “fields” should exist. But that still won’t do: we can redefine fields, add them together or even use more complicated functions, and still get the same physics. The kinds of things that exist can’t be like this. Instead you end up invoking another kind of mathematical object, equivalence classes.

If you want to be totally rigorous, you have to go a step further. You end up thinking of physics in a very bare-bones way, as the set of all observations you could perform. Instead of describing the world in terms of “these things” or “those things”, the world is a black box, and all you’re doing is finding patterns in that black box.

Is there a way around this? Maybe. But it requires thought, and serious philosophy. It’s not intuitive, it’s not easy, and it doesn’t lend itself well to 3d animations in documentaries. So in practice, whenever anyone tells you about something in physics, you can be pretty sure it’s a metaphor. Nice, describable, non-mathematical things typically don’t exist.