
Stop Listing the Amplituhedron as a Competitor of String Theory

The Economist recently had an article (paywalled) that meandered through various developments in high-energy physics. It started out talking about the failure of the LHC to find SUSY, argued this looked bad for string theory (which…not really?), and used that as a jumping-off point to talk about various non-string “theories of everything”. Peter Woit quoted it a few posts back as kind of a bellwether for public opinion on supersymmetry and string theory.

The article was a muddle, but a fairly conventional muddle, explaining or mis-explaining things in roughly the same way as other popular physics pieces. For the most part that didn’t bug me, but one piece of the muddle hit a bit close to home:

The names of many of these [non-string theories of everything] do, it must be conceded, torture the English language. They include “causal dynamical triangulation”, “asymptotically safe gravity”, “loop quantum gravity” and the “amplituhedron formulation of quantum theory”.

I’ve posted about the amplituhedron more than a few times here on this blog. Out of every achievement of my sub-field, it has most captured the public imagination. It’s legitimately impressive, a way to translate calculations of probabilities of collisions of fundamental particles (in a toy model, to be clear) into geometrical objects. What it isn’t, and doesn’t pretend to be, is a theory of everything.

To be fair, the Economist piece admits this:

Most attempts at a theory of everything try to fit gravity, which Einstein describes geometrically, into quantum theory, which does not rely on geometry in this way. The amplituhedron approach does the opposite, by suggesting that quantum theory is actually deeply geometric after all. Better yet, the amplituhedron is not founded on notions of spacetime, or even statistical mechanics. Instead, these ideas emerge naturally from it. So, while the amplituhedron approach does not as yet offer a full theory of quantum gravity, it has opened up an intriguing path that may lead to one.

Still, the reasoning leading up to that admission has a few misunderstandings. The amplituhedron is geometrical, but in a completely different way from how Einstein’s theory of gravity is geometrical: Einstein’s gravity is a theory of space and time, while the amplituhedron’s magic is that it hides space and time behind a seemingly more fundamental mathematics.

This is not to say that the amplituhedron won’t lead to insights about gravity. That’s a big part of what it’s for, in the long-term. Because the amplituhedron hides the role of space and time, it might show the way to theories that lack them altogether, theories where space and time are just an approximation for a more fundamental reality. That’s a real possibility, though not at this point a reality.

Even if you take this possibility completely seriously, though, there’s another problem with the Economist’s description: it’s not clear that this new theory would be a non-string theory!

The main people behind the amplituhedron are pretty positively disposed to string theory. If you asked them, I think they’d tell you that, rather than replacing string theory, they expect to learn more about string theory: to see how it could be reformulated in a way that yields insight about trickier problems. That’s not at all like the other “non-string theories of everything” in that list, which frame themselves as alternatives to, or even opponents of, string theory.

It is a lot like several other research programs, though, like ER=EPR and It from Qubit. Researchers in those programs try to use physical principles and toy models to say fundamental things about quantum gravity, trying to think about space and time as being made up of entangled quantum objects. By that logic, they belong in that list in the article alongside the amplituhedron. The reason they aren’t is obvious if you know where they come from: ER=EPR and It from Qubit are worked on by string theorists, including some of the most prominent ones.

The thing is, any reason to put the amplituhedron on that list is also a reason to put them there. The amplituhedron is not a theory of everything, and it is not at present a theory of quantum gravity. It’s a research direction that might shed new light on quantum gravity. It doesn’t explicitly involve strings, but neither does It from Qubit most of the time. Unless you’re going to describe It from Qubit as a “non-string theory of everything”, you really shouldn’t describe the amplituhedron as one.

The amplituhedron is a really cool idea, one with great potential. It’s not something like loop quantum gravity, or causal dynamical triangulations, and it doesn’t need to be. Let it be what it is, please!

Of Cows and Razors

Last week’s post came up on Reddit, where a commenter made a good point. I said that one of the mysteries of neutrinos is that they might not get their mass from the Higgs boson. This is true, but the commenter rightly pointed out that it’s true of other particles too: electrons might not get their mass from the Higgs. We aren’t sure. The lighter quarks might not get their mass from the Higgs either.

When talking physics with the public, we usually say that electrons and quarks all get their mass from the Higgs. That’s how it works in our Standard Model, after all. But even though we’ve found the Higgs boson, we can’t be 100% sure that it functions the way our model says. That’s because there are aspects of the Higgs we haven’t been able to measure directly. We’ve measured how it affects the heaviest quark, the top quark, but measuring its interactions with other particles will require a bigger collider. Until we have those measurements, the possibility remains open that electrons and quarks get their mass another way. It would be a more complicated way: we know the Higgs does a lot of what the model says, so if it deviates in another way we’d have to add more details, maybe even more undiscovered particles. But it’s possible.

If I wanted to defend the idea that neutrinos are special here, I would point out that neutrino masses, unlike electron masses, are not part of the Standard Model. For electrons, we have a clear “default” way for them to get mass, and that default is in a meaningful way simpler than the alternatives. For neutrinos, every alternative is complicated in some fashion: either adding undiscovered particles, or unusual properties. If we were to invoke Occam’s Razor, the principle that we should always choose the simplest explanation, then for electrons and quarks there is a clear winner. Not so for neutrinos.

I’m not actually going to make this argument. That’s because I’m a bit wary of using Occam’s Razor when it comes to questions of fundamental physics. Occam’s Razor is a good principle to use, if you have a good idea of what’s “normal”. In physics, you don’t.

To illustrate, I’ll tell an old joke about cows and trains. Here’s the version from The Curious Incident of the Dog in the Night-Time:

There are three men on a train. One of them is an economist and one of them is a logician and one of them is a mathematician. And they have just crossed the border into Scotland (I don’t know why they are going to Scotland) and they see a brown cow standing in a field from the window of the train (and the cow is standing parallel to the train). And the economist says, ‘Look, the cows in Scotland are brown.’ And the logician says, ‘No. There are cows in Scotland of which at least one is brown.’ And the mathematician says, ‘No. There is at least one cow in Scotland, of which one side appears to be brown.’

One side of this cow appears to be very fluffy.

If we want to be as careful as possible, the mathematician’s answer is best. But we expect not to have to be so careful. Maybe the economist’s answer, that Scottish cows are brown, is too broad. But we could imagine an agronomist who states “There is a breed of cows in Scotland that is brown”. And I suggest we should find that pretty reasonable. Essentially, we’re using Occam’s Razor: if we want to explain seeing a brown half-cow from a train, the simplest explanation would be that it’s a member of a breed of cows that are brown. It would be less simple if the cow were unique, a brown mutant in a breed of black and white cows. It would be even less simple if only one side of the cow were brown, and the other were another color.

When we use Occam’s Razor in this way, we’re drawing from our experience of cows. Most of the cows we meet are members of some breed or other, with similar characteristics. We don’t meet many mutant cows, or half-colored cows, so we think of those options as less simple, and less likely.

But what kind of experience tells us which option is simpler for electrons, or neutrinos?

The Standard Model is a type of theory called a Quantum Field Theory. We have experience with other Quantum Field Theories: we use them to describe materials, metals and fluids and so forth. Still, it seems a bit odd to say that if something is typical of these materials, it should also be typical of the universe. As Nima Arkani-Hamed, a physicist in my sub-field, likes to say, “the universe is not a crappy metal!”

We could also draw on our experience from other theories in physics. This is a bit more productive, but has other problems. Our other theories are invariably incomplete; that’s why we come up with new theories in the first place…and with so few theories, compared to breeds of cows, it’s unclear that we really have a good basis for experience.

Physicists like to brag that we study the most fundamental laws of nature. Ordinarily, this doesn’t matter as much as we pretend: there’s a lot to discover in the rest of science too, after all. But here, it really makes a difference. Unlike other fields, we don’t know what’s “normal”, so we can’t really tell which theories are “simpler” than others. We can make aesthetic judgements, on the simplicity of the math or the number of fields or the quality of the stories we can tell. If we want to be principled and forgo all of that, then we’re left staring into an abyss, a world of bare observations and parameter soup.

If a physicist looks out a train window, will they say that all the electrons they see get their mass from the Higgs? Maybe, still. But they should be careful about it.

Lessons From Neutrinos, Part II

Last week I talked about the history of neutrinos. Neutrinos come in three types, or “flavors”. Electron neutrinos are the easiest: they’re produced alongside electrons and positrons in the different types of beta decay. Electrons have more massive cousins, called muon and tau particles. As it turns out, each of these cousins has a corresponding flavor of neutrino: muon neutrinos, and tau neutrinos.

For quite some time, physicists thought that all of these neutrinos had zero mass.

(If the idea of a particle with zero mass confuses you, think about photons. A particle with zero mass travels, like a photon, at the speed of light. This doesn’t make them immune to gravity: just as no light can escape a black hole, neither can any other massless particle. It turns out that once you take into account Einstein’s general theory of relativity, gravity cares about energy, not just mass.)

Eventually, physicists started to realize they were wrong: neutrinos have a small non-zero mass after all. The reasoning might seem a bit strange, though. Physicists didn’t weigh the neutrinos, or measure their speed. Instead, they observed that different flavors of neutrinos transform into each other. We say that they oscillate: electron neutrinos oscillate into muon or tau neutrinos, which oscillate into the other flavors, and so on. Over time, a beam of electron neutrinos will become a beam of mostly tau and muon neutrinos, before becoming a beam of electron neutrinos again.

That might not sound like it has much to do with mass. To understand why it does, you’ll need to learn this post’s lesson:

Lesson 2: Mass is just How Particles Move

Oscillating particles seem like a weird sort of evidence for mass. What would be a more normal kind of evidence?

Those of you who’ve taken physics classes might remember the equation F=ma. Apply a known force to something, see how much it accelerates, and you can calculate its mass. If you’ve had a bit more physics, you’ll know that this isn’t quite the right equation to use for particles close to the speed of light, but that there are other equations we can use in a similar way. In particular, using relativity, we have E^2=p^2 c^2 + m^2 c^4. (At rest, p=0, and we have the famous E=mc^2). This lets us do the same kind of thing: give something a kick and see how it moves.
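If it helps to see that as arithmetic, here’s a small Python sketch of the logic, in units where c = 1 so that energies, momenta, and masses all carry the same units. The function name and the numbers are my own inventions for illustration, chosen to land near the muon’s mass:

import math

# Infer a particle's mass from how it moves, using E^2 = p^2 + m^2
# (units where c = 1, everything measured in GeV).
def mass_from_kick(energy, momentum):
    return math.sqrt(energy**2 - momentum**2)

# Illustrative numbers: E = 10 GeV, p = 9.99944 GeV
# gives roughly 0.106 GeV, close to a muon's mass.
print(mass_from_kick(10.0, 9.99944))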

So let’s say we do that: we give a particle a kick, and measure it later. I’ll visualize this with a tool physicists use called a Feynman diagram. The line represents a particle traveling from one side to the other, from “kick” to “measurement”:

Because we only measure the particle at the end, we might miss if something happens in between. For example, it might interact with another particle or field, like this:

If we don’t know about this other field, then when we try to measure the particle’s mass we will include interactions like this. As it turns out, this is how the Higgs boson works: the Higgs field interacts with particles like electrons and quarks, changing how they move, so that they appear to have mass.

Quantum particles can do other things too. You might have heard people talk about one particle turning into a pair of temporary “virtual particles”. When people say that, they usually have a diagram in mind like this:

In particle physics, we need to take into account every diagram of this kind, every possible thing that could happen in between “kick” and measurement. The final result isn’t one path or another, but a sum of all the different things that could have happened in between. So when we measure the mass of a particle, we’re including every diagram that’s allowed: everything that starts with our “kick” and ends with our measurement.

Now what if our particle can transform, from one flavor to another?

Now we have a new type of thing that can happen in between “kick” and measurement. And if it can happen once, it can happen more than once:

Remember that, when we measure mass, we’re measuring a sum of all the things that can happen in between. That means our particle could oscillate back and forth between different flavors many many times, and we need to take every possibility into account. Because of that, it doesn’t actually make sense to ask what the mass is for one flavor, for just electron neutrinos or just muon neutrinos. Instead, mass is for the thing that actually moves: an average (actually, a quantum superposition) over all the different flavors, oscillating back and forth any number of times.

When a process like beta decay produces an electron neutrino, the thing that actually moves is a mix (again, a superposition) of particles with these different masses. Because each of these masses responds to the initial “kick” in a different way, you see different proportions of them over time. Try to measure the flavors at the end, and you’ll find different proportions depending on when and where you measure. That’s the oscillation effect, and that’s why it means that neutrinos have mass.

It’s a bit more complicated to work out the math behind this, but not unreasonably so: it’s simpler than a lot of other physics calculations. Working through the math, we find that by measuring how long it takes neutrinos to oscillate we can calculate the differences between (squares of) neutrino masses. What we can’t calculate are the masses themselves. We know they’re small: neutrinos travel at almost the speed of light, and our cosmological models of the universe have surprisingly little room for massive neutrinos: too much mass, and our universe would look very different than it does today. But we don’t know much more than that. We don’t even know the order of the masses: you might assume electron neutrinos are on average lighter than muon neutrinos, which are lighter than tau neutrinos…but it could easily be the other way around! We also don’t know whether neutrinos get their mass from the Higgs like other particles do, or if they work in a completely different way.
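For the curious, here’s a sketch of the upshot of that math in the standard two-flavor approximation (a simplification of the full three-flavor story; the mixing angle and other numbers below are purely illustrative). Notice that the formula only ever sees the difference of squared masses, which is why oscillations can’t tell us the masses themselves:

import math

# Two-flavor neutrino oscillation probability, P(flavor A -> flavor B).
# delta_m_sq: difference of squared masses, in eV^2
# length_km:  distance traveled, in km
# energy_gev: neutrino energy, in GeV
def oscillation_probability(theta, delta_m_sq, length_km, energy_gev):
    return (math.sin(2 * theta)**2
            * math.sin(1.27 * delta_m_sq * length_km / energy_gev)**2)

# Illustrative: a 1 GeV beam over 500 km, mixing angle 0.6 rad,
# mass-squared difference 2.5e-3 eV^2.
print(oscillation_probability(0.6, 2.5e-3, 500.0, 1.0))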

Unlike other mysteries of physics, we’ll likely have the answer to some of these questions soon. People are already picking through the data from current experiments, seeing if they hint towards one order of masses or the other, or to one or the other way for neutrinos to get their mass. More experiments will start taking data this year, and others are expected to start later this decade. At some point, the textbooks may well have more “normal” mass numbers for each of the neutrinos. But until then, they serve as a nice illustration of what mass actually means in particle physics.

Alice Through the Parity Glass

When you look into your mirror in the morning, the face looking back at you isn’t exactly your own. Your mirror image is flipped: left-handed if you’re right-handed, and right-handed if you’re left-handed. Your body is not symmetric in the mirror: we say it does not respect parity symmetry. Zoom in, and many of the molecules in your body also have a “handedness” to them: biology is not the same when flipped in a mirror.

What about physics? At first, you might expect the laws of physics themselves to respect parity symmetry. Newton’s laws are the same when reflected in a mirror, and so are Maxwell’s. But one part of physics breaks this rule: the weak nuclear force, the force that causes nuclear beta decay. The weak nuclear force interacts differently with “right-handed” and “left-handed” particles (shorthand for particles that spin counterclockwise or clockwise with respect to their motion). This came as a surprise to most physicists, but it was predicted by Tsung-Dao Lee and Chen-Ning Yang and demonstrated in 1956 by Chien-Shiung Wu, known in her day as the “Queen of Nuclear Research”. The world really does look different when flipped in a mirror.

I gave a lecture on the weak force for the pedagogy course I took a few weeks back. One piece of feedback I got was that the topic wasn’t very relatable. People wanted to know why they should care about the handedness of the weak force, they wanted to hear about “real-life” applications. Once scientists learned that the weak force didn’t respect parity, what did that let us do?

Thinking about this, I realized this is actually a pretty tricky story to tell. With enough time and background, I could explain that the “handedness” of the Standard Model is a major constraint on attempts to unify physics, ruling out a lot of the simpler options. That’s hard to fit in a short lecture though, and it still isn’t especially close to “real life”.

Then I realized I don’t need to talk about “real life” to give a “real-life example”. People explaining relativity get away with science fiction scenarios, spaceships on voyages to black holes. The key isn’t to be familiar, just relatable. If I can tell a story (with people in it), then maybe I can make this work.

All I need, then, is a person who cares a lot about the world behind a mirror.

Curiouser and curiouser…

When Alice goes through the looking glass in the novel of that name, she enters a world flipped left-to-right, a world with its parity inverted. Following Alice, we have a natural opportunity to explore such a world. Others have used this to explore parity symmetry in biology: for example, a side-plot in Alan Moore’s League of Extraordinary Gentlemen sees Alice come back flipped, and starve when she can’t process mirror-reversed nutrients. I haven’t seen it explored for physics, though.

In order to make this story work, we have to get Alice to care about the weak nuclear force. The most familiar thing the weak force does is cause beta decay. And the most familiar thing that undergoes beta decay is a banana. Bananas contain radioactive potassium, which can transform to calcium by emitting an electron and an anti-electron-neutrino.

The radioactive potassium from a banana doesn’t stay in the body very long, only a few hours at most. But if Alice was especially paranoid about radioactivity, maybe she would want to avoid eating bananas. (We shouldn’t tell her that other foods contain potassium too.) If so, she might view the looking glass as a golden opportunity, a chance to eat as many bananas as she likes without worrying about radiation.

Does this work?

A first problem: can Alice even eat mirror-reversed bananas? I told you many biological molecules have handedness, which led Alan Moore’s version of Alice to starve. If we assume, unlike Moore, that Alice comes back in her original configuration and survives, we should still ask if she gets any benefit out of the bananas in the looking glass.

Researching this, I found that the main thing that makes bananas taste “banana-ish”, isoamyl acetate, does not have handedness: mirror bananas will still taste like bananas. Fructose, a sugar in bananas, does have handedness however: it isn’t the same when flipped in a mirror. Chatting with a chemist, the impression I got was that this isn’t a total loss: often, flipping a sugar results in another, different sugar. A mirror banana might still taste sweet, but less so. Overall, it may still be worth eating.

The next problem is a tougher one: flipping a potassium atom doesn’t actually make it immune to the weak force. The weak force only interacts with left-handed particles and right-handed antiparticles: in beta decay, it transforms a left-handed down quark to a left-handed up quark, producing a left-handed electron and a right-handed anti-neutrino.

Alice would have been fine if all of the quarks in potassium were left-handed, but they aren’t: an equal amount are right-handed, so the mirror weak force will still act on them, and they will still undergo beta decay. Actually, it’s worse than that: quarks, and massive particles in general, don’t actually have a definite handedness. If you speed up enough to catch up to a quark and pass it, then from your perspective it’s now going in the opposite direction, and its handedness is flipped. The only particles with definite handedness are massless particles: those go at the speed of light, so you can never catch up to them. Another way to think about this is that quarks get their mass from the Higgs field, and this happens because the Higgs lets left- and right-handed quarks interact. What we call the quark’s mass is in some sense just left- and right-handed quarks constantly mixing back and forth.
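For readers comfortable with a little notation, that mixing is visible directly in the equations: a mass term of the Dirac type couples the left- and right-handed parts of a particle’s field,

m \bar{\psi} \psi = m \left( \bar{\psi}_L \psi_R + \bar{\psi}_R \psi_L \right)

so only a massless particle, with no such term, keeps its handedness fixed.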

Alice does have the opportunity to do something interesting here, if she can somehow capture the anti-neutrinos from those bananas. Our world appears to only have left-handed neutrinos and right-handed anti-neutrinos. This seemed reasonable when we thought neutrinos were massless, but now we know neutrinos have a (very small) mass. As a result, the hunt is on for right-handed neutrinos or left-handed anti-neutrinos: if we can measure them, we could fix one of the lingering mysteries of the Standard Model. With this in mind, Alice has the potential to really confuse some particle physicists, giving them some left-handed anti-neutrinos from beyond the looking-glass.

It turns out there’s a problem with even this scheme, though. The problem is a much wider one: the whole story is physically inconsistent.

I’d been acting like Alice can pass back and forth through the mirror, carrying all her particles with her. But what are “her particles”? If she carries a banana through the mirror, you might imagine the quarks in the potassium atoms carry over. But those quarks are constantly exchanging other quarks and gluons, as part of the strong force holding them together. They’re also exchanging photons with electrons via the electromagnetic force, and they’re also exchanging W bosons via beta decay. In quantum field theory, all of this is in some sense happening at once, an infinite sum over all possible exchanges. It doesn’t make sense to just carve out one set of particles and plug them in to different fields somewhere else.

If we actually wanted to describe a mirror like Alice’s looking glass in physics, we’d want to do it consistently. This is similar to how physicists think of time travel: you can’t go back in time and murder your grandparents because your whole path in space-time has to stay consistent. You can only go back and do things you “already did”. We treat space in a similar way to time. A mirror like Alice’s imposes a condition, that fields on one side are equal to their mirror image on the other side. Conditions like these get used in string theory on occasion, and they have broad implications for physics on the whole of space-time, not just near the boundary. The upshot is that a world with a mirror like Alice’s in it would be totally different from a world without the looking glass: the weak force as we know it would not exist.

So unfortunately, I still don’t have a good “real life” story for a class about parity symmetry. It’s fun trying to follow Alice through a parity transformation, but there are a few too many problems for the tale to make any real sense. Feel free to suggest improvements!

Electromagnetism Is the Weirdest Force

For a long time, physicists only knew about two fundamental forces: electromagnetism, and gravity. Physics students follow the same path, studying Newtonian gravity, then E&M, and only later learning about the other fundamental forces. If you’ve just recently heard about the weak nuclear force and the strong nuclear force, it can be tempting to think of them as just slight tweaks on electromagnetism. But while that can be a helpful way to start, in a way it’s precisely backwards. Electromagnetism is simpler than the other forces, that’s true. But because of that simplicity, it’s actually pretty weird as a force.

The weirdness of electromagnetism boils down to one key reason: the electromagnetic field has no charge.

Maybe that sounds weird to you: if you’ve done anything with electromagnetism, you’ve certainly seen charges. But while you’ve calculated the field produced by a charge, the field itself has no charge. You can specify the positions of some electrons and not have to worry that the electric field will introduce new charges you didn’t plan. Mathematically, this means your equations are linear in the field, and thus not all that hard to solve.

The other forces are different. The strong nuclear force has three types of charge, dubbed red, green, and blue. It isn’t just quarks that carry these charges: the field itself is charged under this system, making the equations that describe it non-linear.
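You can see the difference in the equations themselves. The electromagnetic field strength is built linearly out of the field, while its strong-force analogue picks up an extra piece where the field couples to itself (the last term below, absent for electromagnetism):

F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu

F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g f^{abc} A^b_\mu A^c_\nu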

A depiction of a singlet state

Those properties mean that you can’t just think of the strong force as a push or pull between charges, like you could with electromagnetism. The strong force doesn’t just move quarks around, it can change their color, exchanging charge between the quark and the field. That’s one reason why, when we’re being more careful, we refer to it not as the strong force, but as the strong interaction.

The weak force also makes more sense when thought of as an interaction. It can change even more properties of particles, turning different flavors of quarks and leptons into each other, resulting in, among other phenomena, nuclear beta decay. It would be even more like the strong force, but the Higgs field screws that up, stirring together two more-fundamental forces and spitting out the weak force and electromagnetism. The result ties them together in weird ways: for example, it means that the weak field can actually have an electric charge.

Interactions like the strong and weak forces are much more “normal” for particle physicists: if you ask us to picture a random fundamental force, chances are it will look like them. It won’t typically look like electromagnetism, the weird “degenerate” case with a field that doesn’t even have a charge. So despite how familiar electromagnetism may be to you, don’t take it as your model of what a fundamental force should look like: of all the forces, it’s the simplest and weirdest.

Theoretical Uncertainty and Uncertain Theory

Yesterday, Fermilab’s Muon g-2 experiment announced a new measurement of the magnetic moment of the muon, a number which describes how muons interact with magnetic fields. It might seem like a small technical detail, but physicists have been very excited about this measurement because it’s a detail the Standard Model seems to get wrong, making it a potential hint of new undiscovered particles. Quanta magazine has a great piece on the announcement, which explains more than I will here, but the upshot is that there are two different calculations on the market that attempt to predict the magnetic moment of the muon. One of them, using older methods, disagrees with the experiment. The other, with a new approach, agrees. The question then becomes: which calculation was wrong? And why?

What does it mean for a prediction to match an experimental result? The simple, wrong, answer is that the numbers must be equal: if you predict “3”, the experiment has to measure “3”. The reason why this is wrong is that in practice, every experiment and every prediction has some uncertainty. If you’ve taken a college physics class, you’ve run into this kind of uncertainty in one of its simplest forms, measurement uncertainty. Measure with a ruler, and you can only confidently measure down to the smallest divisions on the ruler. If you measure 3cm, but your ruler has ticks only down to a millimeter, then what you’re measuring might be as large as 3.1cm or as small as 2.9cm. You just don’t know.

This uncertainty doesn’t mean you throw up your hands and give up. Instead, you estimate the effect it can have. You report, not a measurement of 3cm, but of 3cm plus or minus 1mm. If the prediction was 2.9cm, then you’re fine: it falls within your measurement uncertainty.

Measurements aren’t the only thing that can be uncertain. Predictions have uncertainty too, theoretical uncertainty. Sometimes, this comes from uncertainty on a previous measurement: if you make a prediction based on that experiment that measured 3cm plus or minus 1mm, you have to take that plus or minus into account and estimate its effect (we call this propagation of errors). Sometimes, the uncertainty comes instead from an approximation you’re making. In particle physics, we sometimes approximate interactions between different particles with diagrams, beginning with the simplest diagrams and adding on more complicated ones as we go. To estimate the uncertainty there, we estimate the size of the diagrams we left out, the more complicated ones we haven’t calculated yet. Other times, that approximation doesn’t work, and we need to use a different approximation, treating space and time as a finite grid where we can do computer simulations. In that case, you can estimate your uncertainty based on how small you made your grid. The new approach to predicting the muon magnetic moment uses that kind of approximation.
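For the simplest case, a sum of independent measurements, propagation of errors amounts to combining uncertainties “in quadrature”. Here’s a minimal Python sketch of that case (the general rule involves derivatives of whatever function you’re computing):

import math

# Combined uncertainty of a sum of independent measurements:
# square each uncertainty, add, and take the square root.
def propagate(*uncertainties):
    return math.sqrt(sum(u**2 for u in uncertainties))

# Adding two lengths, each measured to +/- 0.1 cm:
# the total is uncertain by about 0.14 cm, not 0.2 cm.
print(propagate(0.1, 0.1))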

There’s a common thread in all of these uncertainty estimates: you don’t expect to be too far off on average. Your measurements won’t be perfect, but they won’t all be screwed up in the same way either: chances are, they will randomly be a little below or a little above the truth. Your calculations are similar: whether you’re ignoring complicated particle physics diagrams or the spacing in a simulated grid, you can treat the difference as something small and random. That randomness means you can use statistics to talk about your errors: you have statistical uncertainty. When you have statistical uncertainty, you can estimate, not just how far off you might get, but how likely it is you ended up that far off. In particle physics, we have very strict standards for this kind of thing: to call something new a discovery, we demand that it is so unlikely that it would only show up randomly under the old theory roughly one in a million times. The muon magnetic moment isn’t quite up to our standards for a discovery yet, but the new measurement brought it closer.

The two dueling predictions for the muon’s magnetic moment both estimate some amount of statistical uncertainty. It’s possible that the two calculations just disagree due to chance, and that better measurements or a tighter simulation grid would make them agree. Given their estimates, though, that’s unlikely. That takes us from the realm of theoretical uncertainty, and into uncertainty about the theoretical. The two calculations use very different approaches. The new calculation tries to compute things from first principles, using the Standard Model directly. The risk is that such a calculation needs to make assumptions, ignoring some effects that are too difficult to calculate, and one of those assumptions may be wrong. The older calculation is based more on experimental results, using different experiments to estimate effects that are hard to calculate but that should be similar between different situations. The risk is that the situations may be less similar than expected, their assumptions breaking down in a way that the bottom-up calculation could catch.

None of these risks are easy to estimate. They’re “unknown unknowns”, or rather, “uncertain uncertainties”. And until some of them are resolved, it won’t be clear whether Fermilab’s new measurement is a sign of undiscovered particles, or just a (challenging!) confirmation of the Standard Model.

Redefining Fields for Fun and Profit

To study subatomic particles, particle physicists use a theory called Quantum Field Theory. But what is a quantum field?

Some people will describe a field in vague terms, and say it’s like a fluid that fills all of space, or a vibrating rubber sheet. These are all metaphors, and while they can be helpful, they can also be confusing. So let me avoid metaphors, and say something that may be just as confusing: a field is the answer to a question.

Suppose you’re interested in a particle, like an electron. There is an electron field that tells you, at each point, your chance of detecting one of those particles spinning in a particular way. Suppose you’re trying to measure a force, say electricity or magnetism. There is an electromagnetic field that tells you, at each point, what force you will measure.

Sometimes the question you’re asking has a very simple answer: just a single number, for each point and each time. An example of a question like that is the temperature: pick a city, pick a date, and the temperature there and then is just a number. In particle physics, the Higgs field answers a question like that: at each point, and each time, how “Higgs-y” is it there and then? You might have heard that the Higgs field gives other particles their mass: what this means is that the more “Higgs-y” it is somewhere, the higher these particles’ mass will be. The Higgs field is almost constant, because it’s very difficult to get it to change. That’s in some sense what physicists at the Large Hadron Collider did when they discovered the Higgs boson: they pushed hard enough to cause a tiny, short-lived ripple in the Higgs field, a small area that was briefly more “Higgs-y” than average.

We like to think of some fields as fundamental, and others as composite. A proton is composite: it’s made up of quarks and gluons. Quarks and gluons, as far as we know, are fundamental: they’re not made up of anything else. More generally, since we’re thinking about fields as answers to questions, we can just as well ask more complicated, “composite” questions. For example, instead of “what is the temperature?”, we can ask “what is the temperature squared?” or “what is the temperature times the Higgs-y-ness?”.

But this raises a troubling point. When we single out a specific field, like the Higgs field, why are we sure that that field is the fundamental one? Why didn’t we start with “Higgs squared” instead? Or “Higgs plus Higgs squared”? Or something even weirder?

The inventor of the Higgs-squared field, Peter Higgs-squared

That kind of swap, from Higgs to Higgs squared, is called a field redefinition. In the math of quantum field theory, it’s something you’re perfectly allowed to do. Sometimes, it’s even a good idea. Other times, it can make your life quite complicated.

The reason why is that some fields are much simpler than others. Some are what we call free fields. Free fields don’t interact with anything else. They just move, rippling along in easy-to-calculate waves.

Redefine a free field, swapping it for some more complicated function, and you can easily screw up, and make it into an interacting field. An interacting field might interact with another field, like how electromagnetic fields move (and are moved by) electrons. It might also just interact with itself, a kind of feedback effect that makes any calculation we’d like to do much more difficult.
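As a sketch of how that happens, take a free massless scalar field and make the perverse redefinition φ = χ + λχ². The free Lagrangian sprouts terms that look exactly like interactions:

\frac{1}{2} \partial_\mu \phi \, \partial^\mu \phi = \frac{1}{2} \partial_\mu \chi \, \partial^\mu \chi + 2 \lambda \chi \, \partial_\mu \chi \, \partial^\mu \chi + 2 \lambda^2 \chi^2 \, \partial_\mu \chi \, \partial^\mu \chi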

If we persevere with this perverse choice, and do the calculation anyway, we find a surprise. The final results we calculate, the real measurements people can do, are the same in both theories. The field redefinition changed how the theory appeared, quite dramatically…but it didn’t change the physics.

You might think the moral of the story is that you must always choose the right fundamental field. You might want to, but you can’t: not every field is secretly free. Some will be interacting fields, whatever you do. In that case, you can make one choice or another to simplify your life…but you can also just refuse to make a choice.

That’s something quite a few physicists do. Instead of looking at a theory and calling some fields fundamental and others composite, they treat every one of these fields, every different question they could ask, on the same footing. They then ask, for these fields, what one can measure about them. They can ask which fields travel at the speed of light, and which ones go slower, or which fields interact with which other fields, and how much. Field redefinitions will shuffle the fields around, but the patterns in the measurements will remain. So those, and not the fields, can be used to specify the theory. Instead of describing the world in terms of a few fundamental fields, they think about the world as a kind of field soup, characterized by how it shifts when you stir it with a spoon.

It’s not a perspective everyone takes. If you overhear physicists, sometimes they will talk about a theory with only a few fields, sometimes they will talk about many, and you might be hard-pressed to tell what they’re talking about. But if you keep in mind these two perspectives, a few fundamental fields on the one hand and a “field soup” on the other, you’ll understand them a little better.

Reality as an Algebra of Observables

Listen to a physicist talk about quantum mechanics, and you’ll hear the word “observable”. Observables are, intuitively enough, things that can be observed. They’re properties that, in principle, one could measure in an experiment, like the position of a particle or its momentum. They’re the kinds of things linked by uncertainty principles, where the better you know one, the worse you know the other.

Some physicists get frustrated by this focus on measurements alone. They think we ought to treat quantum mechanics, not like a black box that produces results, but as information about some underlying reality. Instead of just observables, they want us to look for “beables”: not just things that can be observed, but things that something can be. From their perspective, the way other physicists focus on observables feels like giving up, like those physicists are abandoning their sacred duty to understand the world. Others, like the Quantum Bayesians or QBists, disagree, arguing that quantum mechanics really is, and ought to be, a theory of how individuals get evidence about the world.

I’m not really going to weigh in on that debate, I still don’t feel like I know enough to even write a decent summary. But I do think that one of the instincts on the “beables” side is wrong. If we focus on observables in quantum mechanics, I don’t think we’re doing anything all that unusual. Even in other parts of physics, we can think about reality purely in terms of observations. Doing so isn’t a dereliction of duty: often, it’s the most useful way to understand the world.

When we try to comprehend the world, we always start alone. From our time in the womb, we have only our senses and emotions to go on. With a combination of instinct and inference we start assembling a consistent picture of reality. Philosophers called phenomenologists (not to be confused with the physicists called phenomenologists) study this process in detail, trying to characterize how different things present themselves to an individual consciousness.

For my point here, these details don’t matter so much. That’s because in practice, we aren’t alone in understanding the world. Based on what others say about the world, we conclude they perceive much like we do, and we learn by their observations just as we learn by our own. We can make things abstract: instead of the specifics of how individuals perceive, we think about groups of scientists making measurements. At the end of this train lie observables: things that we as a community could in principle learn, and share with each other, ignoring the details of how exactly we measure them.

If each of these observables was unrelated, just scattered points of data, then we couldn’t learn much. Luckily, they are related. In quantum mechanics, some of these relationships are the uncertainty principles I mentioned earlier. Others relate measurements at different places, or at different times. The fancy way to refer to all these relationships is as an algebra: loosely, it’s something you can “do algebra with”, like you did with numbers and variables in high school. When physicists and mathematicians want to do quantum mechanics or quantum field theory seriously, they often talk about an “algebra of observables”, a formal way of thinking about all of these relationships.
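The most famous entry in that algebra is the relationship between position and momentum. The two don’t commute, and that failure to commute is precisely what powers the uncertainty principle:

x p - p x = i \hbar

\sigma_x \, \sigma_p \geq \frac{\hbar}{2}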

Focusing on those two things, observables and how they are related, isn’t just useful in the quantum world. It’s an important way to think in other areas of physics too. If you’ve heard people talk about relativity, the focus on measurement screams out, in thought experiments full of abstract clocks and abstract yardsticks. Without this discipline, you find paradoxes, only to resolve them when you carefully track what each person can observe. More recently, physicists in my field have had success computing the chance particles collide by focusing on the end result, the actual measurements people can make, ignoring what might happen in between to cause that measurement. We can then break measurements down into simpler measurements, or use the structure of simpler measurements to guess more complicated ones. While we typically have done this in quantum theories, that’s not really a limitation: the same techniques make sense for problems in classical physics, like computing the gravitational waves emitted by colliding black holes.

With this in mind, we really can think of reality in those terms: not as a set of beable objects, but as a set of observable facts, linked together in an algebra of observables. Paring things down to what we can know in this way is more honest, and it’s also more powerful and useful. Far from a betrayal of physics, it’s the best advantage we physicists have in our quest to understand the world.

A Tale of Two Donuts

I’ve got a new paper up this week, with Hjalte Frellesvig, Cristian Vergu, and Matthias Volk, about the elliptic integrals that show up in Feynman diagrams.

You can think of elliptic integrals as integrals over a torus, a curve shaped like the outer crust of a donut.

Do you prefer your integrals glazed, or with powdered sugar?

Integrals like these are showing up more and more in our field, the subject of bigger and bigger conferences. By now, we think we have a pretty good idea of how to handle them, but there are still some outstanding mysteries to solve.

One such mystery came up in a paper in 2017, by Luise Adams and Stefan Weinzierl. They were working with one of the favorite examples of this community, the so-called sunrise diagram (sunrise being a good time to eat donuts). And they noticed something surprising: if they looked at the sunrise diagram in different ways, it was described by different donuts.

What do I mean, different donuts?

The integrals we know best in this field aren’t integrals on a torus, but rather integrals on a sphere. In some sense, all spheres are the same: you can make them bigger or smaller, but they don’t have different shapes, they’re all “sphere-shaped”. In contrast, integrals on a torus are trickier, because toruses can have different shapes. Think about different donuts: some might have a thin ring, others a thicker one, even if the overall donut is the same size. You can’t just scale up one donut and get the other.
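(For those who want the jargon: the shape of a torus is captured by a complex number τ, its modular parameter, and two toruses have the same shape only when their parameters are related by a modular transformation,

\tau \to \frac{a\tau + b}{c\tau + d}, \qquad ad - bc = 1

with a, b, c, d integers.)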

This donut even has a marked point

My colleague, Cristian Vergu, was annoyed by this. He’s the kind of person who trusts mathematics like an old friend, one who would never lead him astray. He thought that there must be one answer, one correct donut, one natural way to represent the sunrise diagram mathematically. I was skeptical, I don’t trust mathematics nearly as much as Cristian does. To sort it out, we brought in Hjalte Frellesvig and Matthias Volk, and started trying to write the sunrise diagram every way we possibly could. (Along the way, we threw in another “donut diagram”, the double-box, just to see what would happen.)

Rather than getting a zoo of different donuts, we got a surprise: we kept seeing the same two. And in the end, we stumbled upon the answer Cristian was hoping for: one of these two is, in a meaningful sense, the “correct donut”.

What was wrong with the other donut? It turns out that when the original two donuts were found, one of them involved a move that is a bit risky mathematically: combining square roots.

For readers who don’t know what I mean, or why this is risky, let me give a simple example. Everyone else can skip to after the torus gif.

Suppose I am solving a problem, and I find a product of two square roots:

\sqrt{x}\sqrt{x}

I could try combining them under the same square root sign, like so:

\sqrt{x^2}

That works, if x is positive. But now suppose x=-1. Plug in negative one to the first expression, and you get,

\sqrt{-1}\sqrt{-1}=i\times i=-1

while in the second,

\sqrt{(-1)^2}=\sqrt{1}=1
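If you’d rather see the pitfall numerically, here’s the same thing in a couple of lines of Python:

import cmath

x = -1
print(cmath.sqrt(x) * cmath.sqrt(x))  # (-1+0j): i times i gives -1
print(cmath.sqrt(x**2))               # (1+0j):  the square root of 1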

Torus transforming, please stand by

In this case, it wasn’t as obvious that combining roots would change the donut. It might have been perfectly safe. It took some work to show that indeed, this was the root of the problem. If the roots are instead combined more carefully, then one of the donuts goes away, leaving only the one, true donut.

I’m interested in seeing where this goes, how many different donuts we have to understand and how they might be related. But I’ve also been writing about donuts for the last hour or so, so I’m getting hungry. See you next week!

Physical Intuition From Physics Experience

One of the most mysterious powers physicists claim is physical intuition. Let the mathematicians have their rigorous proofs and careful calculations. We just need to ask ourselves, “Does this make sense physically?”

It’s tempting to chalk this up to bluster, or physicist arrogance. Sometimes, though, a physicist manages to figure out something that stumps the mathematicians. Edward Witten’s work on knot theory is a classic example, where he used ideas from physics, not rigorous proof, to win one of mathematics’ highest honors.

So what is physical intuition? And what is its relationship to proof?

Let me walk you through an example. I recently saw a talk by someone in my field who might be a master of physical intuition. He was trying to learn about what we call Effective Field Theories, theories that are “effectively” true at some energy but don’t include the details of higher-energy particles. He calculated that there are limits to the effect these higher-energy particles can have, just based on simple cause and effect. To explain the calculation to us, he gave a physical example, of coupled oscillators.

Oscillators are familiar problems for first-year physics students. Objects that go back and forth, like springs and pendulums, tend to obey similar equations. Link two of them together (couple them), and the equations get more complicated, work for a second-year student instead of a first-year one. Such a student will notice that coupled oscillators “repel” each other: their frequencies get farther apart than they would be if they weren’t coupled.
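If you want to see that repulsion concretely, here’s a toy numerical check (my own sketch, not the speaker’s calculation): the squared frequencies of two coupled oscillators are the eigenvalues of a symmetric matrix, and turning on the coupling pushes them apart.

import numpy as np

# Squared normal-mode frequencies of two coupled oscillators:
# eigenvalues of a symmetric 2x2 matrix, with the off-diagonal
# entry playing the role of the coupling.
def squared_frequencies(w1_sq, w2_sq, coupling):
    matrix = np.array([[w1_sq, coupling],
                       [coupling, w2_sq]])
    return np.linalg.eigvalsh(matrix)

print(squared_frequencies(1.0, 2.0, 0.0))  # uncoupled: [1. 2.]
print(squared_frequencies(1.0, 2.0, 0.3))  # coupled: about [0.92, 2.08]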

Our seminar speaker wanted us to revisit those second-year-student days, in order to understand how different particles behave in Effective Field Theory. Just as the frequencies of the oscillators repel each other, the energies of particles repel each other: the unknown high-energy particles could only push the energies of the lighter particles we can detect lower, not higher.

This is an example of physical intuition. Examine it, and you can learn a few things about how physical intuition works.

First, physical intuition comes from experience. Using physical intuition wasn’t just a matter of imagining the particles and trying to see what “makes sense”. Instead, it required thinking about similar problems from our experience as physicists: problems that don’t just seem similar on the surface, but are mathematically similar.

Second, physical intuition doesn’t replace calculation. Our speaker had done the math, he hadn’t just made a physical argument. Instead, physical intuition serves two roles: to inspire, and to help remember. Physical intuition can inspire new solutions, suggesting ideas that you go on to check with calculation. In addition to that, it can help your mind sort out what you already know. Without the physical story, we might not have remembered that the low-energy particles have their energies pushed down. With the story though, we had a similar problem to compare, and it made the whole thing more memorable. Human minds aren’t good at holding a giant pile of facts. What they are good at is holding narratives. “Physical intuition” ties what we know into a narrative, building on past problems to understand new ones.

Finally, physical intuition can be risky. If the problem is too different then the intuition can lead you astray. The mathematics of coupled oscillators and Effective Field Theories was similar enough for this argument to work, but if it had turned out to be different in an important way then the intuition would have backfired, making it harder to find the answer and harder to keep track of it once it was found.

Physical intuition may seem mysterious. But deep down, it’s just physicists using our experience, comparing similar problems to help keep track of what we need to know. I’m sure chemists, biologists, and mathematicians all have similar stories to tell.