I’m in Uppsala in Sweden this week, at an actual in-person conference.
With actual blackboards!
Elliptics started out as a series of small meetings of physicists trying to understand how to make sense of elliptic integrals in calculations of colliding particles. It grew into a full-fledged yearly conference series. I organized last year’s edition, which naturally was an online conference. This year though, the stage was set for Uppsala University to host in person.
I should say mostly in person. It’s a hybrid conference, with some speakers and attendees joining on Zoom. Some couldn’t make it because of travel restrictions, or just wanted to be cautious about COVID. But seemingly just as many had other reasons, like teaching schedules or just long distances, that kept them from coming in person. We’re all wondering if this will become a long-term trend, where the flexibility of hybrid conferences lets people attend no matter their constraints.
The hybrid format worked better than expected, but there were still a few kinks. The audio was particularly tricky: it seemed like each day the organizers needed a new microphone setup to take questions. It’s always a little harder to understand someone on Zoom, especially when you’re sitting in an auditorium rather than focused on your own screen. Still, as everyone gains experience with the technology, this should work better in the future.
Content-wise, the conference began with a “mini-school” of pedagogical talks on particle physics, string theory, and mathematics. I found the mathematical talks by Erik Panzer particularly nice: it’s a topic I still feel quite weak on, and he laid everything out in a very clear way. It seemed like a nice touch to include a “school” element in the conference, though I worry it ate too much into the time.
The rest of the content skewed more mathematical, and more string-theoretic, than these conferences have in the past. The mathematical content ranged from intriguing (including an interesting window into what it takes to get high-quality numerics) to intimidatingly obscure (large commutative diagrams, category theory on the first slide). String theory was arguably under-covered in prior years, but it felt over-covered this year. With the particle physics talks focusing either on general properties with perhaps some connections to elliptics, or on N=4 super Yang-Mills, it felt like we were missing the more “practical” talks from past conferences, where someone was computing something concrete in QCD and told us what they needed. Next year is in Mainz, so maybe those talks will reappear.
Now that I’ve rested up after this year’s Amplitudes, I’ll give a few of my impressions.
Overall, I think the conference went pretty well. People seemed amused by the digital Niels Bohr, even if he looked a bit like a puppet (Lance compared him to Yoda in his final speech, which was…apt). We used Gather.town, originally just for the poster session and a “virtual reception”, but later we also encouraged people to meet up in it during breaks. That in particular was a big hit: I think people really liked the ability to just move around and chat in impromptu groups, and while nobody seemed to use the “virtual bar”, the “virtual beach” had a lively crowd. Time zones were inevitably rough, but I think we ended up with a good compromise where everyone could still see a meaningful chunk of the conference.
A few things didn’t work as well. For those planning conferences, I would strongly suggest not making a brand new gmail account to send out conference announcements: for a lot of people the emails went straight to spam. Zulip was a bust: I’m not sure if people found it more confusing than last year’s Slack or didn’t notice it due to the spam issue, but almost no-one posted in it. YouTube was complicated: the stream went down a few times and I could never figure out exactly why; it may have just been internet issues here at the Niels Bohr Institute (we did have a power outage one night and had to scramble to get internet access back the next morning). As far as I could tell, YouTube wouldn’t let me re-open the previous stream, so each time I had to post a new link, which was probably frustrating for those following along there.
That said, this was less of a problem than it might have been, because attendance/”viewership” as a whole was lower than expected. Zoomplitudes last year had massive numbers of people join in both on Zoom and via YouTube. We had a lot fewer: out of over 500 registered participants, we had fewer than 200 on Zoom at any one time, and at most 30 or so on YouTube. Confusion around the conference email might have played a role here, but I suspect part of the difference is simple fatigue: after over a year of this pandemic, online conferences no longer feel like an exciting new experience.
The actual content of the conference ranged pretty widely. Some people reviewed earlier work, others presented recent papers or even work-in-progress. As in recent years, a meaningful chunk of the conference focused on applications of amplitudes techniques to gravitational wave physics. This included a talk by Thibault Damour, who has by now mostly made his peace with the field after his early doubts were sorted out. He still suspected that the mismatch of scales (weak coupling on the one hand, classical scattering on the other) would cause problems in the future, but after his work with Laporta and Mastrolia even he had to acknowledge that amplitudes techniques were useful.
In the past I would have put the double-copy and gravitational wave researchers under the same heading, but this year they were quite distinct. While a few of the gravitational wave talks mentioned the double-copy, most of those who brought it up were doing something quite a bit more abstract than gravitational wave physics. Indeed, several people were pushing the boundaries of what it means to double-copy. There were modified KLT kernels, different versions of color-kinematics duality, and explorations of what kinds of massive particles can and (arguably more interestingly) cannot be compatible with a double-copy framework. The sheer range of different generalizations had me briefly wondering whether the double-copy could be “too flexible to be meaningful”, whether the right definitions would let you double-copy anything out of anything. I was reassured by the points where each talk argued that certain things didn’t work: it suggests that wherever this mysterious structure comes from, its powers are limited enough to make it meaningful.
A fair number of talks dealt with what has always been our main application, collider physics. There the context shifted, but the message stayed consistent: for a “clean” enough process, two- or three-loop calculations can make a big difference, taking a prediction that would be completely off from experiment and bringing it into line. These are more useful the more that can be varied about the calculation: functions are more useful than numbers, for example. I was gratified to hear confirmation that a particular kind of process, where two massless particles like quarks become three massive particles like W or Z bosons, is one of these “clean enough” examples: it means someone will need to compute my “tardigrade” diagram eventually.
If collider physics is our main application, N=4 super Yang-Mills has always been our main toy model. Jaroslav Trnka gave us the details behind Nima’s exciting talk from last year, and Nima had a whole new exciting talk this year with promised connections to category theory (connections he didn’t quite reach after speaking for two and a half hours). Anastasia Volovich presented two distinct methods for predicting square-root symbol letters, while my colleague Chi Zhang showed some exciting progress with the elliptic double-box, realizing the several-year dream of representing it in a useful basis of integrals and showcasing several interesting properties. Anne Spiering came over from the integrability side to show us just how special the “planar” version of the theory really is: by increasing the number of colors of gluons, she showed that one could smoothly go between an “integrability-esque” spectrum and a “chaotic” spectrum. Finally, Lance Dixon mentioned his progress with form-factors in his talk at the end of the conference, showing off some statistics of coefficients of different functions and speculating that machine learning might be able to predict them.
On the more mathematical side, Francis Brown showed us a new way to get numbers out of graphs, one distinct from, but related to, our usual interpretation in terms of Feynman diagrams. I’m still unsure what it will be used for, but the fact that it maps every graph to something finite probably has some interesting implications. Albrecht Klemm and Claude Duhr talked about two sides of the same story, their recent work on integrals involving Calabi-Yau manifolds. They focused on a particularly nice set of integrals, and time will tell whether the methods work more broadly, but there are some exciting suggestions that at least parts will.
There’s been a resurgence of the old dream of the S-matrix community, constraining amplitudes via “general constraints” alone, and several talks dealt with those ideas. Sebastian Mizera went in the other direction, and tried to test one of those “general constraints”, seeing under which circumstances he could prove that you can swap a particle going in with an antiparticle going out. Others went out to infinity, trying to understand amplitudes from the perspective of the so-called “celestial sphere” where they appear to be governed by conformal field theories of some sort. A few talks dealt with amplitudes in string theory itself: Yvonne Geyer built them out of field-theory amplitudes, while Ashoke Sen explained how to include D-instantons in them.
We also had three “special talks” in the evenings. I’ve mentioned Nima’s already. Zvi Bern gave a retrospective talk that I somewhat cheesily describe as “good for the soul”: a look to the early days of the field that reminded us of why we are who we are. Lance Dixon closed the conference with a light-hearted summary and a look to the future. That future includes next year’s Amplitudes, which after a hasty discussion during this year’s conference has now localized to Prague. Let’s hope it’s in person!
Last week’s post came up on Reddit, where a commenter made a good point. I said that one of the mysteries of neutrinos is that they might not get their mass from the Higgs boson. This is true, but the commenter rightly points out it’s true of other particles too: electrons might not get their mass from the Higgs. We aren’t sure. The lighter quarks might not get their mass from the Higgs either.
When talking physics with the public, we usually say that electrons and quarks all get their mass from the Higgs. That’s how it works in our Standard Model, after all. But even though we’ve found the Higgs boson, we can’t be 100% sure that it functions the way our model says. That’s because there are aspects of the Higgs we haven’t been able to measure directly. We’ve measured how it affects the heaviest quark, the top quark, but measuring its interactions with other particles will require a bigger collider. Until we have those measurements, the possibility remains open that electrons and quarks get their mass another way. It would be a more complicated way: we know the Higgs does a lot of what the model says, so if it deviates in another way we’d have to add more details, maybe even more undiscovered particles. But it’s possible.
If I wanted to defend the idea that neutrinos are special here, I would point out that neutrino masses, unlike electron masses, are not part of the Standard Model. For electrons, we have a clear “default” way for them to get mass, and that default is in a meaningful way simpler than the alternatives. For neutrinos, every alternative is complicated in some fashion: either adding undiscovered particles, or unusual properties. If we were to invoke Occam’s Razor, the principle that we should always choose the simplest explanation, then for electrons and quarks there is a clear winner. Not so for neutrinos.
I’m not actually going to make this argument. That’s because I’m a bit wary of using Occam’s Razor when it comes to questions of fundamental physics. Occam’s Razor is a good principle to use, if you have a good idea of what’s “normal”. In physics, you don’t.
There are three men on a train. One of them is an economist and one of them is a logician and one of them is a mathematician. And they have just crossed the border into Scotland (I don’t know why they are going to Scotland) and they see a brown cow standing in a field from the window of the train (and the cow is standing parallel to the train). And the economist says, ‘Look, the cows in Scotland are brown.’ And the logician says, ‘No. There are cows in Scotland of which at least one is brown.’ And the mathematician says, ‘No. There is at least one cow in Scotland, of which one side appears to be brown.’
One side of this cow appears to be very fluffy.
If we want to be as careful as possible, the mathematician’s answer is best. But we expect not to have to be so careful. Maybe the economist’s answer, that Scottish cows are brown, is too broad. But we could imagine an agronomist who states “There is a breed of cows in Scotland that is brown”. And I suggest we should find that pretty reasonable. Essentially, we’re using Occam’s Razor: if we want to explain seeing a brown half-cow from a train, the simplest explanation would be that it’s a member of a breed of cows that are brown. It would be less simple if the cow were unique, a brown mutant in a breed of black and white cows. It would be even less simple if only one side of the cow were brown, and the other were another color.
When we use Occam’s Razor in this way, we’re drawing from our experience of cows. Most of the cows we meet are members of some breed or other, with similar characteristics. We don’t meet many mutant cows, or half-colored cows, so we think of those options as less simple, and less likely.
But what kind of experience tells us which option is simpler for electrons, or neutrinos?
The Standard Model is a type of theory called a Quantum Field Theory. We have experience with other Quantum Field Theories: we use them to describe materials, metals and fluids and so forth. Still, it seems a bit odd to say that if something is typical of these materials, it should also be typical of the universe. As another physicist in my sub-field, Nima Arkani-Hamed, likes to say, “the universe is not a crappy metal!”
We could also draw on our experience from other theories in physics. This is a bit more productive, but has other problems. Our other theories are invariably incomplete; that’s why we come up with new theories in the first place…and with so few theories, compared to breeds of cows, it’s unclear that we really have a good basis for experience.
Physicists like to brag that we study the most fundamental laws of nature. Ordinarily, this doesn’t matter as much as we pretend: there’s a lot to discover in the rest of science too, after all. But here, it really makes a difference. Unlike other fields, we don’t know what’s “normal”, so we can’t really tell which theories are “simpler” than others. We can make aesthetic judgements, on the simplicity of the math or the number of fields or the quality of the stories we can tell. If we want to be principled and forego all of that, then we’re left staring into an abyss, a world of bare observations and parameter soup.
If a physicist looks out a train window, will they say that all the electrons they see get their mass from the Higgs? Maybe, still. But they should be careful about it.
Last week I talked about the history of neutrinos. Neutrinos come in three types, or “flavors”. Electron neutrinos are the easiest: they’re produced alongside electrons and positrons in the different types of beta decay. Electrons have more massive cousins, called muon and tau particles. As it turns out, each of these cousins has a corresponding flavor of neutrino: muon neutrinos, and tau neutrinos.
For quite some time, physicists thought that all of these neutrinos had zero mass.
(If the idea of a particle with zero mass confuses you, think about photons. A particle with zero mass travels, like a photon, at the speed of light. This doesn’t make them immune to gravity: just as no light can escape a black hole, neither can any other massless particle. It turns out that once you take into account Einstein’s general theory of relativity, gravity cares about energy, not just mass.)
Eventually, physicists started to realize they were wrong, and that neutrinos had a small non-zero mass after all. Their reasoning might seem a bit strange, though. Physicists didn’t weigh the neutrinos, or measure their speed. Instead, they observed that different flavors of neutrinos transform into each other. We say that they oscillate: electron neutrinos oscillate into muon or tau neutrinos, which oscillate into the other flavors, and so on. Over time, a beam of electron neutrinos will become a beam of mostly tau and muon neutrinos, before becoming a beam of electron neutrinos again.
That might not sound like it has much to do with mass. To understand why it does, you’ll need to learn this post’s lesson:
Lesson 2: Mass is just How Particles Move
Oscillating particles seem like a weird sort of evidence for mass. What would be a more normal kind of evidence?
Those of you who’ve taken physics classes might remember the equation $F = ma$. Apply a known force to something, see how much it accelerates, and you can calculate its mass. If you’ve had a bit more physics, you’ll know that this isn’t quite the right equation to use for particles close to the speed of light, but that there are other equations we can use in a similar way. In particular, using relativity, we have $E^2 = p^2 c^2 + m^2 c^4$. (At rest, $p = 0$, and we have the famous $E = mc^2$.) This lets us do the same kind of thing: give something a kick and see how it moves.
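To make the “kick and measure” idea concrete, here’s a minimal numerical sketch (my own illustration, not from the original post): given a measured energy and momentum, the relativistic relation above hands you the mass.

```python
import math

C = 299792458.0  # speed of light in m/s

def mass_from_kick(energy_joules, momentum_si):
    """Invert E^2 = (m c^2)^2 + (p c)^2 to recover the mass:
    m = sqrt(E^2 - (p c)^2) / c^2."""
    return math.sqrt(energy_joules**2 - (momentum_si * C) ** 2) / C**2

# Illustrative numbers: a particle measured at rest (p = 0) with an
# energy of ~0.511 MeV, about 8.19e-14 joules.
print(mass_from_kick(8.19e-14, 0.0))  # ~9.1e-31 kg: the electron's mass
```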
So let’s say we do that: we give a particle a kick, and measure it later. I’ll visualize this with a tool physicists use called a Feynman diagram. The line represents a particle traveling from one side to the other, from “kick” to “measurement”:
Because we only measure the particle at the end, we might miss something that happens in between. For example, it might interact with another particle or field, like this:
If we don’t know about this other field, then when we try to measure the particle’s mass we will include interactions like this. As it turns out, this is how the Higgs boson works: the Higgs field interacts with particles like electrons and quarks, changing how they move, so that they appear to have mass.
Quantum particles can do other things too. You might have heard people talk about one particle turning into a pair of temporary “virtual particles”. When people say that, they usually have a diagram in mind like this:
In particle physics, we need to take into account every diagram of this kind, every possible thing that could happen in between “kick” and measurement. The final result isn’t one path or another, but a sum of all the different things that could have happened in between. So when we measure the mass of a particle, we’re including every diagram that’s allowed: everything that starts with our “kick” and ends with our measurement.
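For readers who want the formula behind those pictures, here’s the standard textbook version of that sum (my gloss; the post itself keeps things diagrammatic). Call everything that can happen once in between $\Sigma(p^2)$. Allowing it to happen any number of times gives a geometric series,

$$\frac{1}{p^2 - m_0^2}\sum_{n=0}^{\infty}\left(\frac{\Sigma(p^2)}{p^2 - m_0^2}\right)^n = \frac{1}{p^2 - m_0^2 - \Sigma(p^2)},$$

and the mass we actually measure is where this sum blows up, $p^2 = m_0^2 + \Sigma(p^2)$: the in-between happenings shift the mass you started with.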
Now what if our particle can transform, from one flavor to another?
Now we have a new type of thing that can happen in between “kick” and measurement. And if it can happen once, it can happen more than once:
Remember that, when we measure mass, we’re measuring a sum of all the things that can happen in between. That means our particle could oscillate back and forth between different flavors many, many times, and we need to take every possibility into account. Because of that, it doesn’t actually make sense to ask what the mass is for one flavor, for just electron neutrinos or just muon neutrinos. Instead, mass is for the thing that actually moves: an average (actually, a quantum superposition) over all the different flavors, oscillating back and forth any number of times.
When a process like beta decay produces an electron neutrino, the thing that actually moves is a mix (again, a superposition) of particles with these different masses. Because each of these masses responds to its initial “kick” in a different way, you see different proportions of them over time. Try to measure different flavors at the end, and you’ll find different ones depending on when and where you measure. That’s the oscillation effect, and that’s why it means that neutrinos have mass.
It’s a bit more complicated to work out the math behind this, but not unreasonably so: it’s simpler than a lot of other physics calculations. Working through the math, we find that by measuring how long it takes neutrinos to oscillate we can calculate the differences between (squares of) neutrino masses. What we can’t calculate are the masses themselves. We know they’re small: neutrinos travel at almost the speed of light, and our cosmological models of the universe have surprisingly little room for massive neutrinos: too much mass, and our universe would look very different than it does today. But we don’t know much more than that. We don’t even know the order of the masses: you might assume electron neutrinos are on average lighter than muon neutrinos, which are lighter than tau neutrinos…but it could easily be the other way around! We also don’t know whether neutrinos get their mass from the Higgs like other particles do, or if they work in a completely different way.
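To sketch that math in the simplest two-flavor case (standard notation, in units where $\hbar = c = 1$, added here for the curious): the electron neutrino is a superposition of two mass states,

$$|\nu_e\rangle = \cos\theta\,|\nu_1\rangle + \sin\theta\,|\nu_2\rangle,$$

and the probability of catching it as a muon neutrino after it travels a distance $L$ with energy $E$ is

$$P(\nu_e \to \nu_\mu) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right),$$

where $\Delta m^2 = m_2^2 - m_1^2$. Only that difference of squared masses ever appears, which is why oscillation experiments can’t pin down the masses themselves.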
Unlike other mysteries of physics, we’ll likely have the answer to some of these questions soon. People are already picking through the data from current experiments, seeing if they hint towards one order of masses or the other, or to one or the other way for neutrinos to get their mass. More experiments will start taking data this year, and others are expected to start later this decade. At some point, the textbooks may well have more “normal” mass numbers for each of the neutrinos. But until then, they serve as a nice illustration of what mass actually means in particle physics.
Some of the particles of the Standard Model are more familiar than others. Electrons and photons, of course, everyone has heard of, and most, though not all, have heard of quarks. Many of the rest, like the W and Z bosons, only appear briefly in high-energy colliders. But one Standard Model particle is much less exotic, and nevertheless leads to all manner of confusion. That particle is the neutrino.
Neutrinos are very light, much lighter than even an electron. (Until relatively recently, we thought they were completely massless!) They have no electric charge and they don’t respond to the strong nuclear force, so aside from gravity (negligible since they’re so light), the only force that affects them is the weak nuclear force. This force is, well, weak. It means neutrinos can be produced via the relatively ordinary process of radioactive beta decay, but it also means they almost never interact with anything else. Vast numbers of neutrinos pass through you every moment, with no noticeable effect. We need enormous tanks of liquid or chunks of ice to have a chance of catching neutrinos in action.
Because neutrinos are both ordinary and unfamiliar, they tend to confuse people. I’d like to take advantage of this confusion to teach some physics. Neutrinos turn out to be a handy theme for conveying a couple of blog posts’ worth of lessons about why physics works the way it does.
I’ll start on the historical side. There’s a lesson that physicists themselves learned in the early days:
Lesson 1: Don’t Throw out a Well-Justified Conservation Law
In the early 20th century, physicists were just beginning to understand radioactivity. They could tell there were a few different types: gamma decay released photons in the form of gamma rays, alpha decay shot out heavy, positively charged particles, and beta decay made “beta particles”, or electrons. For each of these, physicists could track each particle and measure its energy and momentum. Everything made sense for gamma and alpha decay…but not for beta decay. Somehow, they could add up the energy of each of the particles they could track, and find less at the end than they did at the beginning. It was as if energy was not conserved.
These were the heady early days of quantum mechanics, so people were confused enough that many thought this was the end of the story. Maybe energy just isn’t conserved? Wolfgang Pauli, though, thought differently. He proposed that there had to be another particle, one that no-one could detect, that made energy balance out. It had to be neutral, so he called it the neutron…until two years later when James Chadwick discovered the particle we call the neutron. This was much too heavy to be Pauli’s neutron, so Edoardo Amaldi joked that Pauli’s particle was a “neutrino” instead. The name stuck, and Pauli kept insisting his neutrino would turn up somewhere. It wasn’t until 1956 that neutrinos were finally detected, so for quite a while people made fun of Pauli for his quixotic quest.
Including a Faust parody with Gretchen as the neutrino
In retrospect, people should probably have known better. Conservation of energy isn’t one of those rules that come out of nowhere. It’s deeply connected to time, and to the idea that one can perform the same experiment at any time in history and find the same result. While rules like that sometimes do turn out wrong, our first expectation should be that they won’t. Nowadays, we’re confident enough in energy conservation that we plan to use it to detect other particles: it’s the main way the Large Hadron Collider plans to try to detect dark matter.
As we came to our more modern understanding, physicists started writing up the Standard Model. Neutrinos were thought of as massless, like photons, traveling at the speed of light. Now, we know that neutrinos have mass…but we don’t know how much mass they have. How do we know they have mass then? To understand that, you’ll need to understand what mass actually means in physics. We’ll talk about that next week!
Why do particle physicists need those enormous colliders? Why does it take a big, expensive, atom-smashing machine to discover what happens on the smallest scales?
A machine like the Large Hadron Collider seems pretty complicated. But at its heart, it’s basically just a huge microscope.
Familiar, right?
If you’ve ever used a microscope in school, you probably had one with a light switch. Forget to turn on the light, and you spend a while confused about why you can’t see anything before you finally remember to flick the switch. Just like seeing something normally, seeing something with a microscope means that light is bouncing off that thing and hitting your eyes. Because of this, microscopes are limited by the wavelength of the light that they use. Try to look at something much smaller than that wavelength and the image will be too blurry to understand.
To see smaller details then, people use light with smaller wavelengths. Using massive X-ray producing machines called synchrotrons, scientists can study matter on the sub-nanometer scale. To go further, scientists can take advantage of wave-particle duality, and use electrons instead of light. The higher the energy of the electrons, the smaller their wavelength. The best electron microscopes can see objects measured in angstroms, not just nanometers.
Less familiar?
A particle collider pushes this even further. The Large Hadron Collider accelerates protons until they have 6.5 Tera-electron-Volts of energy. That might be an unfamiliar type of unit, but if you’ve seen it before you can run the numbers, and estimate that this means the LHC can see details below the attometer scale. That’s a quintillionth of a meter, or a hundred million times smaller than an atom.
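If you want to run those numbers yourself, here’s a back-of-the-envelope sketch (my own, with illustrative constants): the shortest distance a probe can resolve is roughly its wavelength, which for an ultra-relativistic particle is about $hc/E$.

```python
HC = 1.23984e-6    # Planck's constant times c, in eV·m (1239.84 eV·nm)
E_PROTON = 6.5e12  # LHC proton energy in eV (6.5 TeV)

# Resolvable distance ~ wavelength ~ h*c / E for an ultra-relativistic probe.
wavelength = HC / E_PROTON
print(f"{wavelength:.2e} m")  # ~1.9e-19 m: about a fifth of an attometer
```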
A microscope isn’t just light, though, and a collider isn’t just high-energy protons. If it were, we could just wait and look at the sky. So-called cosmic rays are protons and other particles that travel to us from outer space. These can have very high energy: protons with similar energy to those in the LHC hit our atmosphere every day, and rays have been detected that were millions of times more powerful.
People sometimes ask why we can’t just use these cosmic rays to study particle physics. While we can certainly learn some things from cosmic rays, they have a big limitation. They have the “light” part of a microscope, but not the “lens”!
A microscope lens magnifies what you see. Starting from a tiny image, the lens blows it up until it’s big enough that you can see it with your own eyes. Particle colliders have similar technology, using their particle detectors. When two protons collide inside the LHC, they emit a flurry of other particles: photons and electrons, muons and mesons. Each of these particles is too small to see, let alone distinguish with the naked eye. But close to the collision there are detector machines that absorb these particles and magnify their signal. A single electron hitting one of these machines triggers a cascade of more and more electrons, in proportion to the energy of the electron that entered the machine. In the end, you get a strong electrical signal, which you can record with a computer. There are two big machines that do this at the Large Hadron Collider, each with its own independent scientific collaboration to run it. They’re called ATLAS and CMS.
The different layers of the CMS detector, magnifying signals from different types of particles.
So studying small scales needs two things: the right kind of “probe”, like light or protons, and a way to magnify the signal, like a lens or a particle detector. That’s hard to do without a big expensive machine…unless nature is unusually convenient. One interesting possibility is to try to learn about particle physics via astronomy. In the Big Bang particles collided with very high energy, and as the universe has expanded since then those details have been magnified across the sky. That kind of “cosmological collider” has the potential to teach us about physics at much smaller scales than any normal collider could reach. A downside is that, unlike in a collider, we can’t run the experiment over and over again: our “cosmological collider” only ran once. Still, if we want to learn about the very smallest scales, some day that may be our best option.
When you look into your mirror in the morning, the face looking back at you isn’t exactly your own. Your mirror image is flipped: left-handed if you’re right-handed, and right-handed if you’re left-handed. Your body is not symmetric in the mirror: we say it does not respect parity symmetry. Zoom in, and many of the molecules in your body also have a “handedness” to them: biology is not the same when flipped in a mirror.
What about physics? At first, you might expect the laws of physics themselves to respect parity symmetry. Newton’s laws are the same when reflected in a mirror, and so are Maxwell’s. But one part of physics breaks this rule: the weak nuclear force, the force that causes nuclear beta decay. The weak nuclear force interacts differently with “right-handed” and “left-handed” particles (shorthand for particles that spin counterclockwise or clockwise with respect to their motion). This came as a surprise to most physicists, but it was predicted by Tsung-Dao Lee and Chen-Ning Yang and demonstrated in 1956 by Chien-Shiung Wu, known in her day as the “Queen of Nuclear Research”. The world really does look different when flipped in a mirror.
I gave a lecture on the weak force for the pedagogy course I took a few weeks back. One piece of feedback I got was that the topic wasn’t very relatable. People wanted to know why they should care about the handedness of the weak force; they wanted to hear about “real-life” applications. Once scientists learned that the weak force didn’t respect parity, what did that let us do?
Thinking about this, I realized this is actually a pretty tricky story to tell. With enough time and background, I could explain that the “handedness” of the Standard Model is a major constraint on attempts to unify physics, ruling out a lot of the simpler options. That’s hard to fit in a short lecture though, and it still isn’t especially close to “real life”.
Then I realized I don’t need to talk about “real life” to give a “real-life example”. People explaining relativity get away with science fiction scenarios, spaceships on voyages to black holes. The key isn’t to be familiar, just relatable. If I can tell a story (with people in it), then maybe I can make this work.
All I need, then, is a person who cares a lot about the world behind a mirror.
In order to make this story work, we have to get Alice to care about the weak nuclear force. The most familiar thing the weak force does is cause beta decay. And the most familiar thing that undergoes beta decay is a banana. Bananas contain radioactive potassium, which can transform to calcium by emitting an electron and an anti-electron-neutrino.
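In symbols, that’s the standard beta-minus decay of potassium-40 (the textbook reaction, spelled out here for concreteness):

$$^{40}\mathrm{K} \;\to\; {}^{40}\mathrm{Ca} + e^- + \bar{\nu}_e$$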
The radioactive potassium from a banana doesn’t stay in the body very long, only a few hours at most. But if Alice was especially paranoid about radioactivity, maybe she would want to avoid eating bananas. (We shouldn’t tell her that other foods contain potassium too.) If so, she might view the looking glass as a golden opportunity, a chance to eat as many bananas as she likes without worrying about radiation.
Does this work?
A first problem: can Alice even eat mirror-reversed bananas? I told you many biological molecules have handedness, which led Alan Moore’s version of Alice to starve. If we assume, unlike Moore, that Alice comes back in her original configuration and survives, we should still ask if she gets any benefit out of the bananas in the looking glass.
Researching this, I found that the main thing that makes bananas taste “banana-ish”, isoamyl acetate, does not have handedness: mirror bananas will still taste like bananas. Fructose, a sugar in bananas, does have handedness however: it isn’t the same when flipped in a mirror. Chatting with a chemist, the impression I got was that this isn’t a total loss: often, flipping a sugar results in another, different sugar. A mirror banana might still taste sweet, but less so. Overall, it may still be worth eating.
The next problem is a tougher one: flipping a potassium atom doesn’t actually make it immune to the weak force. The weak force only interacts with left-handed particles and right-handed antiparticles: in beta decay, it transforms a left-handed down quark to a left-handed up quark, producing a left-handed electron and a right-handed anti-neutrino.
Alice would have been fine if all of the quarks in potassium were left-handed, but they aren’t: an equal amount are right-handed, so the mirror weak force will still act on them, and they will still undergo beta decay. Actually, it’s worse than that: quarks, and massive particles in general, don’t actually have a definite handedness. If you speed up enough to catch up to a quark and pass it, then from your perspective it’s now going in the opposite direction, and its handedness is flipped. The only particles with definite handedness are massless particles: those go at the speed of light, so you can never catch up to them. Another way to think about this is that quarks get their mass from the Higgs field, and this happens because the Higgs lets left- and right-handed quarks interact. What we call the quark’s mass is in some sense just left- and right-handed quarks constantly mixing back and forth.
Alice does have the opportunity to do something interesting here, if she can somehow capture the anti-neutrinos from those bananas. Our world appears to only have left-handed neutrinos and right-handed anti-neutrinos. This seemed reasonable when we thought neutrinos were massless, but now we know neutrinos have a (very small) mass. As a result, the hunt is on for right-handed neutrinos or left-handed anti-neutrinos: if we can measure them, we could fix one of the lingering mysteries of the Standard Model. With this in mind, Alice has the potential to really confuse some particle physicists, giving them some left-handed anti-neutrinos from beyond the looking-glass.
It turns out there’s a problem with even this scheme, though. The problem is a much wider one: the whole story is physically inconsistent.
I’d been acting like Alice can pass back and forth through the mirror, carrying all her particles with her. But what are “her particles”? If she carries a banana through the mirror, you might imagine the quarks in the potassium atoms carry over. But those quarks are constantly exchanging other quarks and gluons, as part of the strong force holding them together. They’re also exchanging photons with electrons via the electromagnetic force, and they’re also exchanging W bosons via beta decay. In quantum field theory, all of this is in some sense happening at once, an infinite sum over all possible exchanges. It doesn’t make sense to just carve out one set of particles and plug them into different fields somewhere else.
If we actually wanted to describe a mirror like Alice’s looking glass in physics, we’d want to do it consistently. This is similar to how physicists think of time travel: you can’t go back in time and murder your grandparents because your whole path in space-time has to stay consistent. You can only go back and do things you “already did”. We treat space in a similar way to time. A mirror like Alice’s imposes a condition, that fields on one side are equal to their mirror image on the other side. Conditions like these get used in string theory on occasion, and they have broad implications for physics on the whole of space-time, not just near the boundary. The upshot is that a world with a mirror like Alice’s in it would be totally different from a world without the looking glass: the weak force as we know it would not exist.
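Schematically (my own notation, just to make the condition explicit): a looking glass at $x = 0$ would demand that every field $\phi$ equal its parity image across the mirror,

$$\phi(t, -x, y, z) = P[\phi](t, x, y, z),$$

for all times and places. A constraint like that is only consistent if the laws evolving the fields treat the two sides identically, and the parity-violating weak force does not.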
So unfortunately, I still don’t have a good “real life” story for a class about parity symmetry. It’s fun trying to follow Alice through a parity transformation, but there are a few too many problems for the tale to make any real sense. Feel free to suggest improvements!
For a long time, physicists only knew about two fundamental forces: electromagnetism, and gravity. Physics students follow the same path, studying Newtonian gravity, then E&M, and only later learning about the other fundamental forces. If you’ve just recently heard about the weak nuclear force and the strong nuclear force, it can be tempting to think of them as just slight tweaks on electromagnetism. But while that can be a helpful way to start, in a way it’s precisely backwards. Electromagnetism is simpler than the other forces, that’s true. But because of that simplicity, it’s actually pretty weird as a force.
The weirdness of electromagnetism boils down to one key reason: the electromagnetic field has no charge.
Maybe that sounds weird to you: if you’ve done anything with electromagnetism, you’ve certainly seen charges. But while you’ve calculated the field produced by a charge, the field itself has no charge. You can specify the positions of some electrons and not have to worry that the electric field will introduce new charges you didn’t plan. Mathematically, this means your equations are linear in the field, and thus not all that hard to solve.
The other forces are different. The strong nuclear force has three types of charge, dubbed red, green, and blue. Not just quarks, but the field itself has charges under this system, making the equations that describe it non-linear.
A depiction of a singlet state
Those properties mean that you can’t just think of the strong force as a push or pull between charges, like you could with electromagnetism. The strong force doesn’t just move quarks around, it can change their color, exchanging charge between the quark and the field. That’s one reason why, when we’re being more careful, we refer to it not as the strong force, but as the strong interaction.
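To see the difference in symbols (standard field-theory notation, included as a gloss): Maxwell’s equations, $\partial_\mu F^{\mu\nu} = J^\nu$ with $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$, involve the field only to the first power, so solutions simply add. The strong force’s field strength picks up an extra term,

$$F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g\, f^{abc} A^b_\mu A^c_\nu,$$

and that field-times-field piece is exactly the field carrying charge and interacting with itself: the equations become non-linear.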
The weak force also makes more sense when thought of as an interaction. It can change even more properties of particles, turning different flavors of quarks and leptons into each other, resulting in, among other phenomena, nuclear beta decay. It would be even more like the strong force, but the Higgs field screws that up, stirring together two more fundamental forces and spitting out the weak force and electromagnetism. The result ties them together in weird ways: for example, it means that the weak field can actually have an electric charge.
Interactions like the strong and weak forces are much more “normal” for particle physicists: if you ask us to picture a random fundamental force, chances are it will look like them. It won’t typically look like electromagnetism, the weird “degenerate” case with a field that doesn’t even have a charge. So despite how familiar electromagnetism may be to you, don’t take it as your model of what a fundamental force should look like: of all the forces, it’s the simplest and weirdest.
Technically, we don’t know yet. The ALPHA-g experiment would have been the first to check this, making anti-hydrogen by trapping anti-protons and positrons in a long tube and seeing which way it falls. While they got most of their setup working, the LHC complex shut down before they could finish. It starts up again next month, so we should have our answer soon.
That said, for most theorists’ purposes, we absolutely do know: antimatter falls down. Antimatter is one of the cleanest examples of a prediction from pure theory that was confirmed by experiment. When Paul Dirac first tried to write down an equation that described electrons, he found the math forced him to add another particle with the opposite charge. With no such particle in sight, he speculated it could be the proton (this doesn’t work: the two particles would need to have the same mass), before Carl D. Anderson discovered the positron in 1932.
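For the curious, here’s the equation in question in modern notation (the details aren’t needed for the argument): Dirac’s equation for an electron in an electromagnetic field is

$$\left(i\gamma^\mu(\partial_\mu - i e A_\mu) - m\right)\psi = 0,$$

and charge conjugation, $\psi_c = C\bar{\psi}^T$, turns any solution into a solution of the same equation with $e \to -e$: same mass, opposite charge. That forced partner is the positron.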
The same math that forced Dirac to add antimatter also tells us which way it falls. There’s a bit more involved, in the form of general relativity, but the recipe is pretty simple: we know how to take an equation like Dirac’s and add gravity to it, and we have enough practice doing it in different situations that we’re pretty sure it’s the right way to go. Pretty sure doesn’t mean 100% sure: talk to the right theorists, and you’ll probably find a proposal or two in which antimatter falls up instead of down. But they tend to be pretty weird proposals, from pretty weird theorists.
Ok, but if those theorists are that “weird”, that outside the mainstream, why does an experiment like ALPHA-g exist? Why does it happen at CERN, one of the flagship facilities for all of mainstream particle physics?
This gets at a misconception I occasionally hear from critics of the physics mainstream. They worry about groupthink among mainstream theorists, the physics community dismissing good ideas just because they’re not trendy (you may think I did that just now, for antigravity antimatter!) They expect this to result in a self-fulfilling prophecy where nobody tests ideas outside the mainstream, so they find no evidence for them, so they keep dismissing them.
The mistake of these critics is in assuming that what gets tested has anything to do with what theorists think is reasonable.
Theorists talk to experimentalists, sure. We motivate them, give them ideas and justification. But ultimately, people do experiments because they can do experiments. I watched a talk about the ALPHA experiment recently, and one thing that struck me was how many different techniques play into it. They make antiprotons using a proton beam from the accelerator, slow them down with magnetic fields, and cool them with lasers. They trap their antihydrogen in an extremely precise vacuum, and confirm it’s there with particle detectors. The whole setup is a blend of cutting-edge accelerator physics and cutting-edge tricks for manipulating atoms. At its heart, ALPHA-g feels like a stress test of all of those tricks: a push on the state of the art in a dozen experimental techniques in order to accomplish something remarkable.
And so even if the mainstream theorists don’t care, ALPHA will keep going. It will keep getting funding, it will keep getting visited by celebrities and inspiring pop fiction. Because enough people recognize that doing something difficult can be its own reward.
In my experience, this motivation applies to theorists too. Plenty of us will dismiss this or that proposal as unlikely or impossible. But give us a concrete calculation, something that lets us use one of our flashy theoretical techniques, and the tune changes. If we’re getting the chance to develop our tools, and get a paper out of it in the process, then sure, we’ll check your wacky claim. Why not?
I suspect critics of the mainstream would have a lot more success with this kind of pitch-based approach. If you can find a theorist who already has the right method, who’s developing and extending it and looking for interesting applications, then make your pitch: tell them how they can answer your question just by doing what they do best. They’ll think of it as a chance to disprove you, and you should let them, that’s the right attitude to take as a scientist anyway. It’ll work a lot better than accusing them of hogging the grant money.
Yesterday, Fermilab’s Muon g-2 experiment announced a new measurement of the magnetic moment of the muon, a number which describes how muons interact with magnetic fields. It might seem like a small technical detail, but physicists have been very excited about this measurement because it’s a small technical detail the Standard Model seems to get wrong, making it a potential hint of new undiscovered particles. Quanta magazine has a great piece on the announcement, which explains more than I will here, but the upshot is that there are two different calculations on the market that attempt to predict the magnetic moment of the muon. One of them, using older methods, disagrees with the experiment. The other, with a new approach, agrees. The question then becomes: which calculation was wrong? And why?
What does it mean for a prediction to match an experimental result? The simple, wrong, answer is that the numbers must be equal: if you predict “3”, the experiment has to measure “3”. The reason why this is wrong is that in practice, every experiment and every prediction has some uncertainty. If you’ve taken a college physics class, you’ve run into this kind of uncertainty in one of its simplest forms, measurement uncertainty. Measure with a ruler, and you can only confidently measure down to the smallest divisions on the ruler. If you measure 3 cm, but your ruler has ticks only down to a millimeter, then what you’re measuring might be as large as 3.1 cm or as small as 2.9 cm. You just don’t know.
This uncertainty doesn’t mean you throw up your hands and give up. Instead, you estimate the effect it can have. You report, not a measurement of 3 cm, but of 3 cm plus or minus 1 mm. If the prediction was 2.9 cm, then you’re fine: it falls within your measurement uncertainty.
Measurements aren’t the only thing that can be uncertain. Predictions have uncertainty too, theoretical uncertainty. Sometimes, this comes from uncertainty on a previous measurement: if you make a prediction based on that experiment that measured 3 cm plus or minus 1 mm, you have to take that plus or minus into account and estimate its effect (we call this propagation of errors). Sometimes, the uncertainty comes instead from an approximation you’re making. In particle physics, we sometimes approximate interactions between different particles with diagrams, beginning with the simplest diagrams and adding on more complicated ones as we go. To estimate the uncertainty there, we estimate the size of the diagrams we left out, the more complicated ones we haven’t calculated yet. Other times, that approximation doesn’t work, and we need to use a different approximation, treating space and time as a finite grid where we can do computer simulations. In that case, you can estimate your uncertainty based on how small you made your grid. The new approach to predicting the muon magnetic moment uses that kind of approximation.
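Here’s a minimal sketch of propagation of errors in code (my own illustration; the function and numbers are made up for the example). For independent uncertainties, the standard first-order rule is $\sigma_f^2 = \sum_i (\partial f/\partial x_i)^2 \sigma_i^2$.

```python
import math

def propagate(f, values, sigmas, eps=1e-8):
    """First-order propagation of independent uncertainties through f:
    sigma_f^2 = sum_i (df/dx_i * sigma_i)^2, with derivatives
    estimated by finite differences."""
    base = f(*values)
    variance = 0.0
    for i, (v, s) in enumerate(zip(values, sigmas)):
        shifted = list(values)
        shifted[i] = v + eps
        dfdx = (f(*shifted) - base) / eps
        variance += (dfdx * s) ** 2
    return base, math.sqrt(variance)

# A measurement of 3 cm plus or minus 0.1 cm, fed into a prediction
# that happens to square it:
value, sigma = propagate(lambda x: x**2, [3.0], [0.1])
print(value, sigma)  # 9.0 +/- ~0.6 (cm^2)
```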
There’s a common thread in all of these uncertainty estimates: you don’t expect to be too far off on average. Your measurements won’t be perfect, but they won’t all be screwed up in the same way either: chances are, they will randomly be a little below or a little above the truth. Your calculations are similar: whether you’re ignoring complicated particle physics diagrams or the spacing in a simulated grid, you can treat the difference as something small and random. That randomness means you can use statistics to talk about your errors: you have statistical uncertainty. When you have statistical uncertainty, you can estimate, not just how far off you might get, but how likely it is you ended up that far off. In particle physics, we have very strict standards for this kind of thing: to call something new a discovery, we demand that it is so unlikely that it would only show up randomly under the old theory roughly one in a million times. The muon magnetic moment isn’t quite up to our standards for a discovery yet, but the new measurement brought it closer.
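As a quick sketch of what that standard means in numbers (my own calculation; “five sigma” is the usual discovery threshold): the chance of a Gaussian fluctuation at least five standard deviations above the mean works out to roughly the “one in a million” ballpark quoted above.

```python
import math

def one_sided_p_value(n_sigma):
    """Probability of a Gaussian fluctuation at least n_sigma above
    the mean: p = erfc(n / sqrt(2)) / 2."""
    return math.erfc(n_sigma / math.sqrt(2)) / 2

print(one_sided_p_value(5))  # ~2.9e-7: a few chances in ten million
```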
The two dueling predictions for the muon’s magnetic moment both estimate some amount of statistical uncertainty. It’s possible that the two calculations just disagree due to chance, and that better measurements or a tighter simulation grid would make them agree. Given their estimates, though, that’s unlikely. That takes us from the realm of theoretical uncertainty, and into uncertainty about the theoretical. The two calculations use very different approaches. The new calculation tries to compute things from first principles, using the Standard Model directly. The risk is that such a calculation needs to make assumptions, ignoring some effects that are too difficult to calculate, and one of those assumptions may be wrong. The older calculation is based more on experimental results, using different experiments to estimate effects that are hard to calculate but that should be similar between different situations. The risk is that the situations may be less similar than expected, their assumptions breaking down in a way that the bottom-up calculation could catch.
None of these risks are easy to estimate. They’re “unknown unknowns”, or rather, “uncertain uncertainties”. And until some of them are resolved, it won’t be clear whether Fermilab’s new measurement is a sign of undiscovered particles, or just a (challenging!) confirmation of the Standard Model.