# Doing Difficult Things Is Its Own Reward

Does antimatter fall up, or down?

Technically, we don’t know yet. The ALPHA-g experiment would have been the first to check this, making anti-hydrogen by trapping anti-protons and positrons in a long tube and seeing which way it falls. While they got most of their setup working, the LHC complex shut down before they could finish. It starts up again next month, so we should have our answer soon.

That said, for most theorists’ purposes, we absolutely do know: antimatter falls down. Antimatter is one of the cleanest examples of a prediction from pure theory that was confirmed by experiment. When Paul Dirac first tried to write down an equation that described electrons, he found the math forced him to add another particle with the opposite charge. With no such particle in sight, he speculated it could be the proton (this doesn’t work, they need the same mass), before Carl D. Anderson discovered the positron in 1932.

The same math that forced Dirac to add antimatter also tells us which way it falls. There’s a bit more involved, in the form of general relativity, but the recipe is pretty simple: we know how to take an equation like Dirac’s and add gravity to it, and we have enough practice doing it in different situations that we’re pretty sure it’s the right way to go. Pretty sure doesn’t mean 100% sure: talk to the right theorists, and you’ll probably find a proposal or two in which antimatter falls up instead of down. But they tend to be pretty weird proposals, from pretty weird theorists.

Ok, but if those theorists are that “weird”, that outside the mainstream, why does an experiment like ALPHA-g exist? Why does it happen at CERN, one of the flagship facilities for all of mainstream particle physics?

This gets at a misconception I occasionally hear from critics of the physics mainstream. They worry about groupthink among mainstream theorists, the physics community dismissing good ideas just because they’re not trendy (you may think I did that just now, for antigravity antimatter!). They expect this to result in a self-fulfilling prophecy where nobody tests ideas outside the mainstream, so they find no evidence for them, so they keep dismissing them.

The mistake of these critics is in assuming that what gets tested has anything to do with what theorists think is reasonable.

Theorists talk to experimentalists, sure. We motivate them, give them ideas and justification. But ultimately, people do experiments because they can do experiments. I watched a talk about the ALPHA experiment recently, and one thing that struck me was how so many different techniques play into it. They make antiprotons using a proton beam from the accelerator, slow them down with magnetic fields, and cool them with lasers. They trap their antihydrogen in an extremely precise vacuum, and confirm it’s there with particle detectors. The whole setup is a blend of cutting-edge accelerator physics and cutting-edge tricks for manipulating atoms. At its heart, ALPHA-g’s primary goal feels like a stress test of all of those tricks: pushing the state of the art in a dozen experimental techniques in order to accomplish something remarkable.

And so even if the mainstream theorists don’t care, ALPHA will keep going. It will keep getting funding, it will keep getting visited by celebrities and inspiring pop fiction. Because enough people recognize that doing something difficult can be its own reward.

In my experience, this motivation applies to theorists too. Plenty of us will dismiss this or that proposal as unlikely or impossible. But give us a concrete calculation, something that lets us use one of our flashy theoretical techniques, and the tune changes. If we’re getting the chance to develop our tools, and get a paper out of it in the process, then sure, we’ll check your wacky claim. Why not?

I suspect critics of the mainstream would have a lot more success with this kind of pitch-based approach. If you can find a theorist who already has the right method, who’s developing and extending it and looking for interesting applications, then make your pitch: tell them how they can answer your question just by doing what they do best. They’ll think of it as a chance to disprove you, and you should let them, that’s the right attitude to take as a scientist anyway. It’ll work a lot better than accusing them of hogging the grant money.

# Redefining Fields for Fun and Profit

To study subatomic particles, particle physicists use a theory called Quantum Field Theory. But what is a quantum field?

Some people will describe a field in vague terms, and say it’s like a fluid that fills all of space, or a vibrating rubber sheet. These are all metaphors, and while they can be helpful, they can also be confusing. So let me avoid metaphors, and say something that may be just as confusing: a field is the answer to a question.

Suppose you’re interested in a particle, like an electron. There is an electron field that tells you, at each point, your chance of detecting one of those particles spinning in a particular way. Suppose you’re trying to measure a force, say electricity or magnetism. There is an electromagnetic field that tells you, at each point, what force you will measure.

Sometimes the question you’re asking has a very simple answer: just a single number, for each point and each time. An example of a question like that is the temperature: pick a city, pick a date, and the temperature there and then is just a number. In particle physics, the Higgs field answers a question like that: at each point, and each time, how “Higgs-y” is it there and then? You might have heard that the Higgs field gives other particles their mass: what this means is that the more “Higgs-y” it is somewhere, the higher these particles’ mass will be. The Higgs field is almost constant, because it’s very difficult to get it to change. That’s in some sense what the Large Hadron Collider did when it discovered the Higgs boson: it pushed hard enough to cause a tiny, short-lived ripple in the Higgs field, a small area that was briefly more “Higgs-y” than average.

We like to think of some fields as fundamental, and others as composite. A proton is composite: it’s made up of quarks and gluons. Quarks and gluons, as far as we know, are fundamental: they’re not made up of anything else. More generally, since we’re thinking about fields as answers to questions, we can just as well ask more complicated, “composite” questions. For example, instead of “what is the temperature?”, we can ask “what is the temperature squared?” or “what is the temperature times the Higgs-y-ness?”.

But this raises a troubling point. When we single out a specific field, like the Higgs field, why are we sure that that field is the fundamental one? Why didn’t we start with “Higgs squared” instead? Or “Higgs plus Higgs squared”? Or something even weirder?

That kind of swap, from Higgs to Higgs squared, is called a field redefinition. In the math of quantum field theory, it’s something you’re perfectly allowed to do. Sometimes, it’s even a good idea. Other times, it can make your life quite complicated.

The reason why is that some fields are much simpler than others. Some are what we call free fields. Free fields don’t interact with anything else. They just move, rippling along in easy-to-calculate waves.

Redefine a free field, swapping it for some more complicated function, and you can easily screw up, and make it into an interacting field. An interacting field might interact with another field, like how electromagnetic fields move (and are moved by) electrons. It might also just interact with itself, a kind of feedback effect that makes any calculation we’d like to do much more difficult.
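A schematic example (a textbook-style sketch, not tied to any particular paper): start with a free scalar field, and redefine it by adding a squared term,

```latex
\mathcal{L} = \tfrac{1}{2}(\partial_\mu \phi)(\partial^\mu \phi) - \tfrac{1}{2} m^2 \phi^2,
\qquad \phi \;\to\; \phi + \lambda \phi^2 .
```

Expanding out the redefined Lagrangian produces terms like $2\lambda\,\phi\,(\partial_\mu\phi)(\partial^\mu\phi)$ and $-\lambda m^2 \phi^3$: exactly the kind of self-interaction terms that make calculations harder, even though we started with a free field.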

If we persevere with this perverse choice, and do the calculation anyway, we find a surprise. The final results we calculate, the real measurements people can do, are the same in both theories. The field redefinition changed how the theory appeared, quite dramatically…but it didn’t change the physics.

You might think the moral of the story is that you must always choose the right fundamental field. You might want to, but you can’t: not every field is secretly free. Some will be interacting fields, whatever you do. In that case, you can make one choice or another to simplify your life…but you can also just refuse to make a choice.

That’s something quite a few physicists do. Instead of looking at a theory and calling some fields fundamental and others composite, they treat every one of these fields, every different question they could ask, on the same footing. They then ask, for these fields, what one can measure about them. They can ask which fields travel at the speed of light, and which ones go slower, or which fields interact with which other fields, and how much. Field redefinitions will shuffle the fields around, but the patterns in the measurements will remain. So those, and not the fields, can be used to specify the theory. Instead of describing the world in terms of a few fundamental fields, they think about the world as a kind of field soup, characterized by how it shifts when you stir it with a spoon.

It’s not a perspective everyone takes. If you overhear physicists, sometimes they will talk about a theory with only a few fields, sometimes they will talk about many, and you might be hard-pressed to tell what they’re talking about. But if you keep in mind these two perspectives: either a few fundamental fields, or a “field soup”, you’ll understand them a little better.

# A Tale of Two Donuts

I’ve got a new paper up this week, with Hjalte Frellesvig, Cristian Vergu, and Matthias Volk, about the elliptic integrals that show up in Feynman diagrams.

You can think of elliptic integrals as integrals over a torus, a curve shaped like the outer crust of a donut.

Integrals like these are showing up more and more in our field, the subject of bigger and bigger conferences. By now, we think we have a pretty good idea of how to handle them, but there are still some outstanding mysteries to solve.

One such mystery came up in a paper in 2017, by Luise Adams and Stefan Weinzierl. They were working with one of the favorite examples of this community, the so-called sunrise diagram (sunrise being a good time to eat donuts). And they noticed something surprising: if they looked at the sunrise diagram in different ways, it was described by different donuts.

What do I mean, different donuts?

The integrals we know best in this field aren’t integrals on a torus, but rather integrals on a sphere. In some sense, all spheres are the same: you can make them bigger or smaller, but they don’t have different shapes, they’re all “sphere-shaped”. In contrast, integrals on a torus are trickier, because toruses can have different shapes. Think about different donuts: some might have a thin ring, others a thicker one, even if the overall donut is the same size. You can’t just scale up one donut and get the other.

My colleague, Cristian Vergu, was annoyed by this. He’s the kind of person who trusts mathematics like an old friend, one who would never lead him astray. He thought that there must be one answer, one correct donut, one natural way to represent the sunrise diagram mathematically. I was skeptical, I don’t trust mathematics nearly as much as Cristian does. To sort it out, we brought in Hjalte Frellesvig and Matthias Volk, and started trying to write the sunrise diagram every way we possibly could. (Along the way, we threw in another “donut diagram”, the double-box, just to see what would happen.)

Rather than getting a zoo of different donuts, we got a surprise: we kept seeing the same two. And in the end, we stumbled upon the answer Cristian was hoping for: one of these two is, in a meaningful sense, the “correct donut”.

What was wrong with the other donut? It turns out that when the original two donuts were found, one of them involved a move that is a bit risky mathematically: combining square roots.

For readers who don’t know what I mean, or why this is risky, let me give a simple example. Everyone else can skip ahead.

Suppose I am solving a problem, and I find a product of two square roots:

$\sqrt{x}\sqrt{x}$

I could try combining them under the same square root sign, like so:

$\sqrt{x^2}$

That works, if $x$ is positive. But now suppose $x=-1$. Plug negative one into the first expression, and you get,

$\sqrt{-1}\sqrt{-1}=i\times i=-1$

while in the second,

$\sqrt{(-1)^2}=\sqrt{1}=1$
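If you want to see the mismatch concretely, here’s the same arithmetic checked with Python’s complex square root (just a sanity check of the example above, nothing from our paper):

```python
import cmath

x = -1
# Square roots evaluated separately, then multiplied: i * i
separate = cmath.sqrt(x) * cmath.sqrt(x)
# Square roots combined under one radical first: sqrt(1)
combined = cmath.sqrt(x ** 2)

print(separate)  # (-1+0j)
print(combined)  # (1+0j)
```

Same starting point, two different answers, depending on when you combined the roots.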

In this case, it wasn’t as obvious that combining roots would change the donut. It might have been perfectly safe. It took some work to show that indeed, this was the root of the problem. If the roots are instead combined more carefully, then one of the donuts goes away, leaving only the one, true donut.

I’m interested in seeing where this goes, how many different donuts we have to understand and how they might be related. But I’ve also been writing about donuts for the last hour or so, so I’m getting hungry. See you next week!

# This Week, at Scattering-Amplitudes.com

I did a guest post this week, on an outreach site for the Max Planck Institute for Physics. The new Director of their Quantum Field Theory Department, Johannes Henn, has been behind a lot of major developments in scattering amplitudes. He was one of the first to notice just how symmetric N=4 super Yang-Mills is, as well as the first to build the “hexagon functions” that would become my stock-in-trade. He’s also done what we all strive to do, and applied what he learned to the real world, coming up with an approach to differential equations that has become the gold standard for many different amplitudes calculations.

Now in his new position, he has a swanky new outreach site, reached at the conveniently memorable scattering-amplitudes.com and managed by outreach-ologist Sorana Scholtes. They started a fun series recently called “Talking Terms” as a kind of glossary, explaining words that physicists use over and over again. My guest post for them is part of that series. It hearkens all the way back to one of my first posts, defining what “theory” means to a theoretical physicist. It covers something new as well, a phrase I don’t think I’ve ever explained on this blog: “working in a theory”. You can check it out on their site!

# Physical Intuition From Physics Experience

One of the most mysterious powers physicists claim is physical intuition. Let the mathematicians have their rigorous proofs and careful calculations. We just need to ask ourselves, “Does this make sense physically?”

It’s tempting to chalk this up to bluster, or physicist arrogance. Sometimes, though, a physicist manages to figure out something that stumps the mathematicians. Edward Witten’s work on knot theory is a classic example, where he used ideas from physics, not rigorous proof, to win one of mathematics’ highest honors.

So what is physical intuition? And what is its relationship to proof?

Let me walk you through an example. I recently saw a talk by someone in my field who might be a master of physical intuition. He was trying to learn about what we call Effective Field Theories, theories that are “effectively” true at some energy but don’t include the details of higher-energy particles. He calculated that there are limits to the effect these higher-energy particles can have, just based on simple cause and effect. To explain the calculation to us, he gave a physical example, of coupled oscillators.

Oscillators are familiar problems for first-year physics students. Objects that go back and forth, like springs and pendulums, tend to obey similar equations. Link two of them together (couple them), and the equations get more complicated, work for a second-year student instead of a first-year one. Such a student will notice that coupled oscillators “repel” each other: their frequencies get farther apart than they would be if they weren’t coupled.

Our seminar speaker wanted us to revisit those second-year-student days, in order to understand how different particles behave in Effective Field Theory. Just as the frequencies of the oscillators repel each other, the energies of particles repel each other: the unknown high-energy particles could only push the energies of the lighter particles we can detect lower, not higher.
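A toy version of that repulsion fits in a few lines. The function below (the setup and names are mine, not the speaker’s) finds the eigenvalues of a symmetric 2×2 matrix: two “levels” on the diagonal, a coupling off the diagonal. Turning on the coupling always pushes the lower level down and the upper level up:

```python
import math

def coupled_levels(e1, e2, c):
    """Eigenvalues of the symmetric matrix [[e1, c], [c, e2]]:
    the two 'levels' after switching on a coupling c."""
    mean = (e1 + e2) / 2
    half_gap = math.sqrt((e1 - e2) ** 2 / 4 + c ** 2)
    return mean - half_gap, mean + half_gap

lo, hi = coupled_levels(1.0, 2.0, 0.5)
# Repulsion: lo drops below 1.0 and hi rises above 2.0
print(lo, hi)
```

The same algebra applies whether the diagonal entries are oscillator frequencies squared or particle energies, which is what makes the analogy work.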

This is an example of physical intuition. Examine it, and you can learn a few things about how physical intuition works.

First, physical intuition comes from experience. Using physical intuition wasn’t just a matter of imagining the particles and trying to see what “makes sense”. Instead, it required thinking about similar problems from our experience as physicists: problems that don’t just seem similar on the surface, but are mathematically similar.

Second, physical intuition can be risky. If the problem is too different, the intuition can lead you astray. The mathematics of coupled oscillators and Effective Field Theories was similar enough for this argument to work, but if it had turned out to be different in an important way, the intuition would have backfired, making it harder to find the answer and harder to keep track once it was found.

Physical intuition may seem mysterious. But deep down, it’s just physicists using our experience, comparing similar problems to help keep track of what we need to know. I’m sure chemists, biologists, and mathematicians all have similar stories to tell.

# A Physicist New Year

Happy New Year to all!

Physicists celebrate the new year by trying to sneak one last paper in before the year is over. Looking at Facebook last night I saw three different friends preview the papers they just submitted. The site where these papers appear, arXiv, had seventy new papers this morning, just in the category of theoretical high-energy physics. Of those, nine were in my subfield or a closely related one.

I’d love to tell you all about these papers (some exciting! some long-awaited!), but I’m still tired from last night and haven’t read them yet. So I’ll just close by wishing you all, once again, a happy new year.

# QCD Meets Gravity 2020, Retrospective

I was at a Zoomference last week, called QCD Meets Gravity, about the many ways gravity can be thought of as the “square” of other fundamental forces. I didn’t have time to write much about the actual content of the conference, so I figured I’d say a bit more this week.

A big theme of this conference, as in the past few years, was gravitational waves. Since LIGO’s first announcement of a successful detection, amplitudeologists have been developing new methods to make predictions for gravitational waves more efficient. It’s a field I’ve dabbled in a bit myself. Last year’s QCD Meets Gravity left me impressed by how much progress had been made, with amplitudeologists already solidly part of the conversation and able to produce competitive results. This year felt like another milestone, in that the amplitudeologists weren’t just catching up with other gravitational wave researchers on the same kinds of problems. Instead, they found new questions that amplitudes are especially well-suited to answer. These included combining two pieces of these calculations (“potential” and “radiation”) that the older community typically has to calculate separately, using an old quantum field theory trick, finding the gravitational wave directly from amplitudes, and finding a few nice calculations that can be used to “generate” the rest.

A large chunk of the talks focused on different “squaring” tricks (or as we actually call them, double-copies). There were double-copies for cosmology and conformal field theory, for the celestial sphere, and even some version of M theory. There were new perspectives on the double-copy, new building blocks and algebraic structures that lie behind it. There were talks on the so-called classical double-copy for space-times, where there have been some strange discoveries (an extra dimension made an appearance) but also a more rigorous picture of where the whole thing comes from, using twistor space. There were not one, but two talks linking the double-copy to the Navier-Stokes equation describing fluids, from two different groups. (I’m really curious whether these perspectives are actually useful for practical calculations about fluids, or just fun to think about.) Finally, while there wasn’t a talk scheduled on this paper, the authors were roped in by popular demand to talk about their work. They claim to have made progress on a longstanding puzzle, how to show that double-copy works at the level of the Lagrangian, and the community was eager to dig into the details.

I’ve probably left things out here, it was a packed conference! It’s been really fun seeing what the community has cooked up, and I can’t wait to see what happens next.

# QCD Meets Gravity 2020

I’m at another Zoom conference this week, QCD Meets Gravity. This year it’s hosted by Northwestern.

QCD Meets Gravity is a conference series focused on the often-surprising links between quantum chromodynamics on the one hand and gravity on the other. By thinking of gravity as the “square” of forces like the strong nuclear force, researchers have unlocked new calculation techniques and deep insights.

Last year’s conference was very focused on one particular topic, trying to predict the gravitational waves observed by LIGO and VIRGO. That’s still a core topic of the conference, but it feels like there is a bit more diversity in topics this year. We’ve seen a variety of talks on different “squares”: new theories that square to other theories, and new calculations that benefit from “squaring” (even surprising applications to the Navier-Stokes equation!) There are talks on subjects from String Theory to Effective Field Theory, and even a talk on a very different way that “QCD meets gravity”, in collisions of neutron stars.

With still a few more talks to go, expect me to say a bit more next week, probably discussing a few in more detail. (Several people presented exciting work in progress!) Until then, I should get back to watching!

# What You Don’t Know, You Can Parametrize

In physics, what you don’t know can absolutely hurt you. If you ignore that planets have their own gravity, or that metals conduct electricity, you’re going to calculate a lot of nonsense. At the same time, as physicists we can’t possibly know everything. Our experiments are never perfect, our math never includes all the details, and even our famous Standard Model is almost certainly not the whole story. Luckily, we have another option: instead of ignoring what we don’t know, we can parametrize it, and estimate its effect.

Estimating the unknown is something we physicists have done since Newton. You might think Newton’s big discovery was the inverse-square law for gravity, but others at the time, like Robert Hooke, had also been thinking along those lines. Newton’s big discovery was that gravity was universal: that you need to know the effect of gravity, not just from the sun, but from all the other planets as well. The trouble was, Newton didn’t know how to calculate the motion of all of the planets at once (in hindsight, we know he couldn’t have). Instead, he estimated, using what he knew to guess how big the effect of what he didn’t would be. It was the accuracy of those guesses, not just the inverse square law by itself, that convinced the world that Newton was right.

If you’ve studied electricity and magnetism, you get to the point where you can do simple calculations with a few charges in your sleep. The world doesn’t have just a few charges, though: it has many charges, protons and electrons in every atom of every object. If you had to keep all of them in your calculations you’d never pass freshman physics, but luckily you can once again parametrize what you don’t know. Often you can hide those charges away, summarizing their effects with just a few numbers. Other times, you can treat materials as boundaries, and summarize everything beyond in terms of what happens on the edge. The equations of the theory let you do this, but this isn’t true for every theory: for the Navier-Stokes equation, which we use to describe fluids, it still isn’t known whether you can do this kind of trick.

Parametrizing what we don’t know isn’t just a trick for college physics, it’s key to the cutting edge as well. Right now we have a picture for how all of particle physics works, called the Standard Model, but we know that picture is incomplete. There are a million different theories you could write to go beyond the Standard Model, with a million different implications. Instead of having to use all those theories, physicists can summarize them all with what we call an effective theory: one that keeps track of the effect of all that new physics on the particles we already know. By summarizing those effects with a few parameters, we can see what they would have to be to be compatible with experimental results, ruling out some possibilities and suggesting others.
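Schematically, such an effective theory adds to the Standard Model Lagrangian a tower of extra terms, each weighted by an unknown coefficient and suppressed by the energy scale $\Lambda$ of the new physics (this is the standard textbook form, not tied to any one model):

```latex
\mathcal{L}_{\text{eff}} = \mathcal{L}_{\text{SM}}
  + \sum_i \frac{c_i}{\Lambda^{\,d_i - 4}}\, \mathcal{O}_i
```

Each $\mathcal{O}_i$ is built out of particles we already know, $d_i$ is its dimension, and the coefficients $c_i$ are the parameters: the numbers experiments can constrain without anyone having to commit to a specific theory of what lies beyond.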

In a world where we never know everything, there’s always something that can hurt us. But if we’re careful and estimate what we don’t know, if we write down numbers and parameters and keep our options open, we can keep from getting burned. By focusing on what we do know, we can still manage to understand the world.

# Which Things Exist in Quantum Field Theory

If you ever think metaphysics is easy, learn a little quantum field theory.

Someone asked me recently about virtual particles. When talking to the public, physicists sometimes explain the behavior of quantum fields with what they call “virtual particles”. They’ll describe forces coming from virtual particles going back and forth, or a bubbling sea of virtual particles and anti-particles popping out of empty space.

The thing is, this is a metaphor. What’s more, it’s a metaphor for an approximation. As physicists, when we draw diagrams with more and more virtual particles, we’re trying to use something we know how to calculate with (particles) to understand something tougher to handle (interacting quantum fields). Virtual particles, at least as you’re probably picturing them, don’t really exist.

I don’t really blame physicists for talking like that, though. Virtual particles are a metaphor, sure, a way to talk about a particular calculation. But so is basically anything we can say about quantum field theory. In quantum field theory, it’s pretty tough to say which things “really exist”.

You might have heard that there are three types of neutrinos, corresponding to the three “generations” of the Standard Model: electron-neutrinos, muon-neutrinos, and tau-neutrinos. Each is produced in particular kinds of reactions: electron-neutrinos, for example, get produced by beta-plus decay, when a proton turns into a neutron, an anti-electron, and an electron-neutrino.

Leave these neutrinos alone though, and something strange happens. Detect what you expect to be an electron-neutrino, and it might have changed into a muon-neutrino or a tau-neutrino. The neutrino oscillated.

Why does this happen?

One way to explain it is to say that electron-neutrinos, muon-neutrinos, and tau-neutrinos don’t “really exist”. Instead, what really exists are neutrinos with specific masses. These don’t have catchy names, so let’s just call them neutrino-one, neutrino-two, and neutrino-three. What we think of as electron-neutrinos, muon-neutrinos, and tau-neutrinos are each some mix (a quantum superposition) of these “really existing” neutrinos, specifically the mixes that interact nicely with electrons, muons, and tau leptons respectively. When you let them travel, it’s these neutrinos that do the traveling, and due to quantum effects that I’m not explaining here you end up with a different mix than you started with.
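Those quantum effects have a standard formula behind them. Here’s a two-flavor toy version (the real world has three neutrinos, but two is enough to show the effect; the function name and sample numbers are mine):

```python
import math

def survival_probability(theta, dm2, L_over_E):
    """Two-flavor toy model: the chance a neutrino born as one flavor
    is still detected as that flavor after traveling.
    theta: mixing angle (radians); dm2: mass-squared splitting (eV^2);
    L_over_E: distance over energy (km/GeV); 1.267 is the usual unit factor."""
    return 1 - math.sin(2 * theta) ** 2 * math.sin(1.267 * dm2 * L_over_E) ** 2

print(survival_probability(0.6, 2.5e-3, 0.0))    # 1.0: no distance, no change
print(survival_probability(0.6, 2.5e-3, 500.0))  # well below 1: flavor has "leaked"
```

Notice that the oscillation depends on the mass-squared splitting between the “really existing” mass-neutrinos: if they all had the same mass, nothing would oscillate at all.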

This probably seems like a perfectly reasonable explanation. But it shouldn’t. Because if you take one of these mass-neutrinos, and interact with an electron, or a muon, or a tau, then suddenly it behaves like a mix of the old electron-neutrinos, muon-neutrinos, and tau-neutrinos.

That’s because both explanations are trying to chop the world up in a way that can’t be done consistently. There aren’t electron-neutrinos, muon-neutrinos, and tau-neutrinos, and there aren’t neutrino-ones, neutrino-twos, and neutrino-threes. There’s a mathematical object (a vector space) that can look like either.

Whether you’re comfortable with that depends on whether you think of mathematical objects as “things that exist”. If you aren’t, you’re going to have trouble thinking about the quantum world. Maybe you want to take a step back, and say that at least “fields” should exist. But that still won’t do: we can redefine fields, add them together or even use more complicated functions, and still get the same physics. The kinds of things that exist can’t be like this. Instead you end up invoking another kind of mathematical object, equivalence classes.

If you want to be totally rigorous, you have to go a step further. You end up thinking of physics in a very bare-bones way, as the set of all observations you could perform. Instead of describing the world in terms of “these things” or “those things”, the world is a black box, and all you’re doing is finding patterns in that black box.

Is there a way around this? Maybe. But it requires thought, and serious philosophy. It’s not intuitive, it’s not easy, and it doesn’t lend itself well to 3d animations in documentaries. So in practice, whenever anyone tells you about something in physics, you can be pretty sure it’s a metaphor. Nice, describable, non-mathematical things typically don’t exist.