
Trapped in the (S) Matrix

I’ve tried to convince you that you are a particle detector. You choose your experiment, what actions you take, and then observe the outcome. If you focus on that view of yourself, data out and data in, you start to wonder if the world outside really has any meaning. Maybe you’re just trapped in the Matrix.

From a physics perspective, you actually are trapped in a sort of a Matrix. We call it the S Matrix.

“S” stands for scattering. The S Matrix is a formula we use, a mathematical tool that tells us what happens when fundamental particles scatter: when they fly towards each other, colliding or bouncing off. For each action we could take, the S Matrix gives the probability of each outcome: for each pair of particles we collide, the chance we detect different particles at the end. You can imagine putting every possible action in a giant vector, and every possible observation in another giant vector. Arrange the probabilities for each action-observation pair in a big square grid, and that’s a matrix.

Actually, I lied a little bit. This is particle physics, and particle physics uses quantum mechanics. Because of that, the entries of the S Matrix aren’t probabilities: they’re complex numbers called probability amplitudes. You have to multiply them by their complex conjugate to get probability out.

Ok, that probably seemed like a lot of detail. Why am I telling you all this?

What happens when you multiply the whole S Matrix by its complex conjugate? (Using matrix multiplication, naturally, which really means multiplying by its conjugate transpose.) You can still pick your action, but now you’re adding up every possible outcome. You’re asking “suppose I take an action. What’s the chance that anything happens at all?”

The answer to that question is 1. There is a 100% chance that something happens, no matter what you do. That’s just how probability works.

We call this property unitarity, the property of giving “unity”, or one. And while it may seem obvious, it isn’t always so easy. That’s because we don’t actually know the S Matrix formula most of the time. We have to approximate it with a partial formula that only works in some situations. And unitarity can tell us how much we can trust that formula.
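If you want to see that condition in action, here’s a minimal numerical sketch. The matrix below isn’t the S Matrix of any real theory, just a randomly generated unitary matrix standing in for one, but it shows how “every action’s probabilities sum to one” and “the matrix times its conjugate transpose gives the identity” are the same statement:

```python
import numpy as np
from scipy.linalg import expm

# A toy 2x2 "S matrix": not any real theory, just a random unitary matrix,
# built by exponentiating i times a Hermitian matrix.
rng = np.random.default_rng(1)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H = (H + H.conj().T) / 2                # Hermitian, so expm(1j * H) is unitary
S = expm(1j * H)

probabilities = np.abs(S) ** 2          # |amplitude|^2 for each action/outcome pair
print(probabilities.sum(axis=0))        # each "action" column sums to 1 over all outcomes
print(np.allclose(S.conj().T @ S, np.eye(2)))   # unitarity: S-dagger times S is the identity
```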

Imagine doing an experiment to detect neutrinos, like the IceCube Neutrino Observatory. For you to detect the neutrinos, they must scatter off of electrons, kicking them off of their atoms or transforming them into another charged particle. You can then watch what happens as the energy of the neutrinos increases. If you do, you’ll notice the probability also starts to increase: it gets more and more likely that the neutrino will scatter off an electron. You might propose a formula for this, one that grows with energy. [EDIT: Example changed after a commenter pointed out an issue with it.]

If you keep increasing the energy, though, you run into a problem. Those probabilities you predict are going to keep increasing. Eventually, you’ll predict a probability greater than one.

That tells you that your theory might have been fine before, but doesn’t work for every situation. There’s something you don’t know about, which will change your formula when the energy gets high. You’ve violated unitarity, and you need to fix your theory.

In this case, the fix is already known. Neutrinos and electrons interact via another particle, called the W boson. If you include that particle, then you fix the problem: your probabilities stop going up and up; instead, their growth slows down, and they stay below one.
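To make that a little more concrete, here’s the rough, textbook-style version of the argument (heavily simplified, and not the specific formula the experiment would actually use). If neutrinos and electrons interacted at a single point, with a strength set by Fermi’s constant $G_F$, the scattering amplitude would grow with the collision energy $E$ roughly like

$\mathcal{M}_{\text{point-like}} \sim G_F E^2,$

which eventually crosses the unitarity limit, somewhere around $E \sim 1/\sqrt{G_F} \approx 300$ GeV. Let the interaction happen by exchanging a W boson instead and, schematically,

$\mathcal{M}_{W} \sim \frac{g^2 E^2}{E^2 + M_W^2},$

which looks the same at low energies (with $G_F \sim g^2/M_W^2$) but levels off once the energy is well above the W mass.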

For other theories, we don’t yet know the fix. Try to write down an S Matrix for colliding gravitational waves (or really, gravitons), and you meet the same kind of problem, a probability that just keeps growing. Currently, we don’t know how that problem should be solved: string theory is one answer, but may not be the only one.

So even if you’re trapped in an S Matrix, sending data out and data in, you can still use logic. You can still demand that probability makes sense, that your matrix never gives a chance greater than 100%. And you can learn something about physics when you do!

You Are a Particle Detector

I mean that literally. True, you aren’t a 7,000 ton assembly of wires and silicon, like the ATLAS experiment inside the Large Hadron Collider. You aren’t managed by thousands of scientists and engineers, trying to sift through data from a billion pairs of protons smashing into each other every second. Nonetheless, you are a particle detector. Your senses detect particles.

Like you, and not like you

Your ears take vibrations in the air and magnify them, vibrating the fluid of your inner ear. Tiny hairs communicate that vibration to your nerves, which signal your brain. Particle detectors, too, magnify signals: photomultipliers take a single particle of light (called a photon) and set off a cascade, multiplying the signal one hundred million times so it can be registered by a computer.

Your nose and tongue are sensitive to specific chemicals, recognizing particular shapes and ignoring others. A particle detector must also be picky. A detector like ATLAS measures far more particle collisions than it could ever record. Instead, it learns to recognize particular “shapes”, collisions that might hold evidence of something interesting. Only those collisions are recorded, passed along to computer centers around the world.

Your sense of touch tells you something about the energy of a collision: specifically, the energy things have when they collide with you. Particle detectors do this with calorimeters, that generate signals based on a particle’s energy. Different parts of your body are more sensitive than others: your mouth and hands are much more sensitive than your back and shoulders. Different parts of a particle detector have different calorimeters: an electromagnetic calorimeter for particles like electrons, and a less sensitive hadronic calorimeter that can catch particles like protons.

You are most like a particle detector, though, in your eyes. The cells of your eyes, rods and cones, detect light, and thus detect photons. Your eyes are more sensitive than you think: you are likely able to detect even a single photon. In one experiment, three people sat in darkness for forty minutes, then heard two sounds, one of which might be accompanied by a single photon of light flashed into their eye. The three didn’t notice the photons every time (that’s not possible for such a small sensation), but they did much better than a random guess.

(You can be even more literal than that. An older professor here told me stories of the early days of particle physics. To check that a machine was on, sometimes physicists would come close, and watch for flashes in the corner of their vision: a sign of electrons flying through their eyeballs!)

You are a particle detector, but you aren’t just a particle detector. A particle detector can’t move: its thousands of tons are fixed in place. That gives it blind spots: for example, the tube that the particles travel through is kept clear, with no detectors in it, so the particles can get through. Physicists have to account for this, correcting for the missing space in their calculations. In contrast, if you have a blind spot, you can act: move, and see the world from a new point of view. You observe not merely a series of particles, but the results of your actions: what happens when you turn one way or another, when you make one choice or another.

So while you are a particle detector, you’re more than that: you’re a particle experiment. You can learn a lot more than those big heaps of wires and silicon could on their own. You’re like the whole scientific effort: colliders and detectors, data centers and scientists around the world. May you learn as much in your life as the experiments do in theirs.

W is for Why???

Have you heard the news about the W boson?

The W boson is a fundamental particle, part of the Standard Model of particle physics. It is what we call a “force-carrying boson”, a particle related to the weak nuclear force in the same way photons are related to electromagnetism. Unlike photons, W bosons are “heavy”: they have a mass. We can’t usually predict masses of particles, but the W boson is a bit different, because its mass comes from the Higgs boson in a special way, one that ties it to the masses of other particles like the Z boson. The upshot is that if you know the mass of a few other particles, you can predict the mass of the W.
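For a rough sense of what that prediction looks like: at leading order (the “tree level” answer, before the quantum corrections that the real prediction needs, which are sensitive to the top quark and Higgs masses), the relation is

$M_W = M_Z \cos\theta_W,$

where $M_Z$ is the Z boson’s mass and $\theta_W$ is the weak mixing angle, both of which are measured.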

And according to a recent publication, that prediction is wrong. A team analyzed results from an old experiment called the Tevatron, the biggest predecessor of today’s Large Hadron Collider. They treated the data with groundbreaking care, going so far, mind-bogglingly, as to take into account the shape of the machine’s wires. And after all that analysis, they found that the W bosons detected by the Tevatron had a different mass than the one predicted by the Standard Model.

How different? Here’s where precision comes in. In physics, we decide whether to trust a measurement with a statistical tool. We calculate how likely the measurement would be if it were an accident. In this case: how likely would it be that, if the Standard Model were correct, the measurement would still come out this way? To discover a new particle, we require this chance to be about one in 3.5 million, or in our jargon, five sigma. That was the requirement for discovering the Higgs boson. This super-precise measurement of the W boson doesn’t have five sigma…it has seven sigma. That means, if we trust the analysis team, a measurement like this could come out of the Standard Model by accident only about one in a trillion times.
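If you want to check those numbers yourself, here’s a quick sketch, using the one-sided Gaussian convention that particle physicists typically use for discovery significance:

```python
from scipy.stats import norm

for sigma in (5, 7):
    p = norm.sf(sigma)   # one-sided tail probability of a standard Gaussian
    print(f"{sigma} sigma: p = {p:.2e}, about 1 in {1 / p:,.0f}")

# 5 sigma comes out near 1 in 3.5 million; 7 sigma near 1 in 800 billion,
# i.e. roughly "one in a trillion".
```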

Ok, should we trust the analysis team?

If you want to know that, I’m the wrong physicist to ask. The right physicists are experimental particle physicists. They do analyses like that one, and they know what can go wrong. Everyone I’ve heard from in that field emphasized that this was a very careful group, who did a lot of things impressively right…but there is still room for mistakes. One pointed out that the new measurement isn’t just inconsistent with the Standard Model, but with many previous measurements too. Those measurements are less precise, but still precise enough that we should be a bit skeptical. Another went into more detail about specific clues as to what might have gone wrong.

If you can’t find a particle experimentalist, the next best choice is a particle phenomenologist. These are the people who try to make predictions for new experiments, who use theoretical physics to propose new models that future experiments can test. Here’s one giving a first impression, and discussing some ways to edit the Standard Model to agree with the new measurement. Here’s another discussing what to me is an even more interesting question: if we take these measurements seriously, both the new one and the old ones, then what do we believe?

I’m not an experimentalist or a phenomenologist. I’m an “amplitudeologist”. I work not on the data, or the predictions, but the calculational tools used to make those predictions, called “scattering amplitudes”. And that gives me a different view on the situation.

See, in my field, precision is one of our biggest selling points. If you want theoretical predictions to match precise experiments, you need our tricks to compute them. We believe (and argue to grant agencies) that this precision will be important: if a precise experiment and a precise prediction disagree, it could be the first clue to something truly new. Solid new evidence of something beyond the Standard Model would revitalize all of particle physics, giving us a concrete goal and killing fruitless speculation.

This result shakes my faith in that a little. Probably, the analysis team got something wrong. Possibly, all previous analyses got something wrong. Either way, a lot of very careful smart people tried to estimate their precision, got very confident…and got it wrong.

(There’s one more alternative: maybe million-to-one chances really do crop up nine times out of ten.)

If some future analysis digs down deep in precision, and finds another deviation from the Standard Model, should we trust it? What if it’s measuring something new, and we don’t have the prior experiments to compare to?

(This would happen if we build a new even higher-energy collider. There are things the collider could measure, like the chance one Higgs boson splits into two, that we could not measure with any earlier machine. If we measured that, we couldn’t compare it to the Tevatron or the LHC, we’d have only the new collider to go on.)

Statistics are supposed to tell us whether to trust a result. Here, they’re not doing their job. And that creates the scary possibility that some anomaly shows up, some real deviation deep in the sigmas that hints at a whole new path for the field…and we just end up bickering about who screwed it up. Or the equally scary possibility that we find a seven-sigma signal of some amazing new physics, build decades of new theories on it…and it isn’t actually real.

We don’t just trust statistics. We also trust the things normal people trust. Do other teams find the same result? (I hope that they’re trying to get to this same precision here, and see what went wrong!) Does the result match other experiments? Does it make predictions, which then get tested in future experiments?

All of those are heuristics of course. Nothing can guarantee that we measure the truth. Each trick just corrects for some of our biases, some of the ways we make mistakes. We have to hope that’s good enough, that if there’s something to see we’ll see it, and if there’s nothing to see we won’t. Precision, my field’s raison d’être, can’t be enough to convince us by itself. But it can help.

Discovering New Elements, Discovering New Particles

In school, you learn that the world around you is made up of chemical elements. There’s oxygen and nitrogen in the air, hydrogen and oxygen in water, sodium and chlorine in salt, and carbon in all living things. Other elements are more rare. Often, that’s because they’re unstable, due to radioactivity, like the plutonium in a bomb or americium in a smoke detector. The heaviest elements are artificial, produced in tiny amounts by massive experiments. In 2002, the heaviest element yet was found at the Joint Institute for Nuclear Research near Moscow. It was later named Oganesson, after the scientist who figured out how to make these heavy elements, Yuri Oganessian. To keep track of the different elements, we organize them in the periodic table like this:

In that same school, you probably also learn that the elements aren’t quite so elementary. Unlike the atoms imagined by the ancient Greeks, our atoms are made of smaller parts: protons and neutrons, surrounded by a cloud of electrons. They’re what give the periodic table its periodic structure, the way it repeats from row to row, with each different element having a different number of protons.

If your school is a bit more daring, you also learn that protons and neutrons themselves aren’t elementary. Each one is made of smaller particles called quarks: a proton of two “up quarks” and one “down quark”, and a neutron of two “down” and one “up”. Up quarks, down quarks, and electrons are all what physicists call fundamental particles, and they make up everything you see around you. Just like the chemical elements, some fundamental particles are more obscure than others, and the heaviest ones are all very unstable, produced temporarily by particle collider experiments. The most recent fundamental particle was discovered in 2012, when the Large Hadron Collider near Geneva found the Higgs boson. The Higgs boson is named after Peter Higgs, one of those who predicted it back in the 60’s. All the fundamental particles we know are part of something called the Standard Model, which we sometimes organize in a table like this:

So far, these stories probably sound similar. The experiments might not even sound that different: the Moscow experiment shoots a beam of high-energy calcium atoms at a target of heavy radioactive elements, while the Geneva one shoots a beam of high-energy protons at another beam of high-energy protons. With all those high-energy beams, what’s the difference really?

In fact, there is a big difference between chemical elements and fundamental particles, and between the periodic table and the Standard Model. The latter are fundamental, the former are not.

When they made new chemical elements, scientists needed to start with a lot of protons and neutrons. That’s why they used calcium atoms in their beam, and even heavier elements as their target. We know that heavy elements are heavy because they contain more protons and neutrons, and we can use the arrangement of those protons and neutrons to try to predict their properties. That’s why, even though only five or six oganesson atoms have been detected, scientists have some idea what kind of material it would make. Oganesson is a noble gas, like helium, neon, and radon. But calculations predict it is actually a solid at room temperature. What’s more, it’s expected to be able to react with other elements, something the other noble gases are very reluctant to do.

The Standard Model has patterns, just like the chemical elements. Each matter particle belongs to one of three “generations”, each heavier and more unstable than the last: for example, electrons have heavier relatives called muons, and still heavier ones called tauons. But unlike with the elements, we don’t know where these patterns come from. We can’t explain them with smaller particles, like we could explain the elements with protons and neutrons. We think the Standard Model particles might actually be fundamental, not made of anything smaller.

That’s why when we make them, we don’t need a lot of other particles: just two protons, each made of three quarks, is enough. With that, we can make not just new arrangements of quarks, but new particles altogether. Some are even heavier than the protons we started with: the Higgs boson is more than a hundred times as heavy as a proton! We can do this because, in particle physics, mass isn’t conserved: mass is just another type of energy, and you can turn one type of energy into another.
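Roughly how those numbers work out, quoting masses in the energy units particle physicists prefer:

$m_H c^2 \approx 125 \text{ GeV}, \qquad m_p c^2 \approx 0.94 \text{ GeV}, \qquad m_H / m_p \approx 130.$

The energy to make up the difference comes from the protons’ motion: each proton in the LHC carries thousands of times its own rest energy.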

Discovering new elements is hard work, but discovering new particles is on another level. It’s hard to calculate which elements are stable or unstable, and what their properties might be. But we know the rules, and with enough skill and time we could figure it out. In particle physics, we don’t know the rules. We have some good guesses, simple models to solve specific problems, and sometimes, like with the Higgs, we’re right. But despite making many more than five or six Higgs bosons, we still aren’t sure it has the properties we expect. We don’t know the rules. Even with skill and time, we can’t just calculate what to expect. We have to discover it.

Lessons From Neutrinos, Part II

Last week I talked about the history of neutrinos. Neutrinos come in three types, or “flavors”. Electron neutrinos are the easiest: they’re produced alongside electrons and positrons in the different types of beta decay. Electrons have more massive cousins, called muon and tau particles. As it turns out, each of these cousins has a corresponding flavor of neutrino: muon neutrinos, and tau neutrinos.

For quite some time, physicists thought that all of these neutrinos had zero mass.

(If the idea of a particle with zero mass confuses you, think about photons. A particle with zero mass travels, like a photon, at the speed of light. This doesn’t make them immune to gravity: just as no light can escape a black hole, neither can any other massless particle. It turns out that once you take into account Einstein’s general theory of relativity, gravity cares about energy, not just mass.)

Eventually, physicists started to realize they were wrong, and that neutrinos had a small non-zero mass after all. Their reasoning might seem a bit strange, though. Physicists didn’t weigh the neutrinos, or measure their speed. Instead, they observed that different flavors of neutrinos transform into each other. We say that they oscillate: electron neutrinos oscillate into muon or tau neutrinos, which oscillate into the other flavors, and so on. Over time, a beam of electron neutrinos will become a beam of mostly tau and muon neutrinos, before becoming a beam of electron neutrinos again.

That might not sound like it has much to do with mass. To understand why it does, you’ll need to learn this post’s lesson:

Lesson 2: Mass is just How Particles Move

Oscillating particles seem like a weird sort of evidence for mass. What would be a more normal kind of evidence?

Those of you who’ve taken physics classes might remember the equation F=ma. Apply a known force to something, see how much it accelerates, and you can calculate its mass. If you’ve had a bit more physics, you’ll know that this isn’t quite the right equation to use for particles close to the speed of light, but that there are other equations we can use in a similar way. In particular, using relativity, we have E^2=p^2 c^2 + m^2 c^4. (At rest, p=0, and we have the famous E=mc^2). This lets us do the same kind of thing: give something a kick and see how it moves.
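Turned around, that formula is how you read off the mass: measure the energy $E$ and momentum $p$ the particle ends up with, and the mass is whatever is left over,

$m c^2 = \sqrt{E^2 - p^2 c^2}.$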

So let’s say we do that: we give a particle a kick, and measure it later. I’ll visualize this with a tool physicists use called a Feynman diagram. The line represents a particle traveling from one side to the other, from “kick” to “measurement”:

Because we only measure the particle at the end, we might miss if something happens in between. For example, it might interact with another particle or field, like this:

If we don’t know about this other field, then when we try to measure the particle’s mass we will include interactions like this. As it turns out, this is how the Higgs boson works: the Higgs field interacts with particles like electrons and quarks, changing how they move, so that they appear to have mass.

Quantum particles can do other things too. You might have heard people talk about one particle turning into a pair of temporary “virtual particles”. When people say that, they usually have a diagram in mind like this:

In particle physics, we need to take into account every diagram of this kind, every possible thing that could happen in between “kick” and measurement. The final result isn’t one path or another, but a sum of all the different things that could have happened in between. So when we measure the mass of a particle, we’re including every diagram that’s allowed: everything that starts with our “kick” and ends with our measurement.

Now what if our particle can transform, from one flavor to another?

Now we have a new type of thing that can happen in between “kick” and measurement. And if it can happen once, it can happen more than once:

Remember that, when we measure mass, we’re measuring a sum of all the things that can happen in between. That means our particle could oscillate back and forth between different flavors many many times, and we need to take every possibility into account. Because of that, it doesn’t actually make sense to ask what the mass is for one flavor, for just electron neutrinos or just muon neutrinos. Instead, mass is for the thing that actually moves: an average (actually, a quantum superposition) over all the different flavors, oscillating back and forth any number of times.

When a process like beta decay produces an electron neutrino, the thing that actually moves is a mix (again, a superposition) of particles with different masses. Because each of these masses responds to the initial “kick” in a different way, you see different proportions of them over time. Try to measure the flavors at the end, and you’ll find different ones depending on when and where you measure. That’s the oscillation effect, and that’s why it means that neutrinos have mass.

It’s a bit more complicated to work out the math behind this, but not unreasonably so: it’s simpler than a lot of other physics calculations. Working through the math, we find that by measuring how long it takes neutrinos to oscillate we can calculate the differences between (squares of) neutrino masses. What we can’t calculate are the masses themselves. We know they’re small: neutrinos travel at almost the speed of light, and our cosmological models of the universe have surprisingly little room for massive neutrinos: too much mass, and our universe would look very different than it does today. But we don’t know much more than that. We don’t even know the order of the masses: you might assume electron neutrinos are on average lighter than muon neutrinos, which are lighter than tau neutrinos…but it could easily be the other way around! We also don’t know whether neutrinos get their mass from the Higgs like other particles do, or if they work in a completely different way.
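For the record, here’s what that math boils down to in the simplest case, the standard two-flavor approximation (just two neutrino flavors, in natural units where $\hbar = c = 1$): the chance that an electron neutrino with energy $E$ shows up as a muon neutrino after traveling a distance $L$ is

$P(\nu_e \to \nu_\mu) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right),$

where $\theta$ is a mixing angle and $\Delta m^2$ is the difference of the squared masses. Only $\Delta m^2$ appears, which is why oscillation experiments pin down mass differences rather than the masses themselves.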

Unlike other mysteries of physics, we’ll likely have the answer to some of these questions soon. People are already picking through the data from current experiments, seeing if they hint towards one order of masses or the other, or to one or the other way for neutrinos to get their mass. More experiments will start taking data this year, and others are expected to start later this decade. At some point, the textbooks may well have more “normal” mass numbers for each of the neutrinos. But until then, they serve as a nice illustration of what mass actually means in particle physics.

Lessons From Neutrinos, Part I

Some of the particles of the Standard Model are more familiar than others. Everyone has heard of electrons and photons, of course, and most people, though not all, have heard of quarks. Many of the rest, like the W and Z bosons, only appear briefly in high-energy colliders. But one Standard Model particle is much less exotic, and nevertheless leads to all manner of confusion. That particle is the neutrino.

Neutrinos are very light, much lighter than even an electron. (Until relatively recently, we thought they were completely massless!) They have no electric charge and they don’t respond to the strong nuclear force, so aside from gravity (negligible since they’re so light), the only force that affects them is the weak nuclear force. This force is, well, weak. It means neutrinos can be produced via the relatively ordinary process of radioactive beta decay, but it also means they almost never interact with anything else. Vast numbers of neutrinos pass through you every moment, with no noticeable effect. We need enormous tanks of liquid or chunks of ice to have a chance of catching neutrinos in action.

Because neutrinos are both ordinary and unfamiliar, they tend to confuse people. I’d like to take advantage of this confusion to teach some physics. Neutrinos turn out to be a handy theme to convey a couple blog posts worth of lessons about why physics works the way it does.

I’ll start on the historical side. There’s a lesson that physicists themselves learned in the early days:

Lesson 1: Don’t Throw out a Well-Justified Conservation Law

In the early 20th century, physicists were just beginning to understand radioactivity. They could tell there were a few different types: gamma decay released photons in the form of gamma rays, alpha decay shot out heavy, positively charged particles, and beta decay made “beta particles”, or electrons. For each of these, physicists could track each particle and measure its energy and momentum. Everything made sense for gamma and alpha decay…but not for beta decay. Somehow, they could add up the energy of each of the particles they could track, and find less at the end than they did at the beginning. It was as if energy was not conserved.

These were the heady early days of quantum mechanics, so people were confused enough that many thought this was the end of the story. Maybe energy just isn’t conserved? Wolfgang Pauli, though, thought differently. He proposed that there had to be another particle, one that no-one could detect, that made energy balance out. It had to be neutral, so he called it the neutron…until two years later when James Chadwick discovered the particle we call the neutron. This was much too heavy to be Pauli’s neutron, so Edoardo Amaldi joked that Pauli’s particle was a “neutrino” instead. The name stuck, and Pauli kept insisting his neutrino would turn up somewhere. It wasn’t until 1956 that neutrinos were finally detected, so for quite a while people made fun of Pauli for his quixotic quest.

Including a Faust parody with Gretchen as the neutrino

In retrospect, people should probably have known better. Conservation of energy isn’t one of those rules that come out of nowhere. It’s deeply connected to time, and to the idea that one can perform the same experiment at any time in history and find the same result. While rules like that sometimes do turn out wrong, our first expectation should be that they won’t. Nowadays, we’re confident enough in energy conservation that we plan to use it to detect other particles: it was the main way the Large Hadron Collider planned to try to detect dark matter.

As we came to our more modern understanding, physicists started writing up the Standard Model. Neutrinos were thought of as massless, like photons, traveling at the speed of light. Now, we know that neutrinos have mass…but we don’t know how much mass they have. How do we know they have mass then? To understand that, you’ll need to understand what mass actually means in physics. We’ll talk about that next week!

Light and Lens, Collider and Detector

Why do particle physicists need those enormous colliders? Why does it take a big, expensive, atom-smashing machine to discover what happens on the smallest scales?

A machine like the Large Hadron Collider seems pretty complicated. But at its heart, it’s basically just a huge microscope.

Familiar, right?

If you’ve ever used a microscope in school, you probably had one with a light switch. Forget to turn on the light, and you spend a while confused about why you can’t see anything before you finally remember to flick the switch. Just like seeing something normally, seeing something with a microscope means that light is bouncing off that thing and hitting your eyes. Because of this, microscopes are limited by the wavelength of the light that they use. Try to look at something much smaller than that wavelength and the image will be too blurry to understand.

To see smaller details then, people use light with smaller wavelengths. Using massive X-ray producing machines called synchrotrons, scientists can study matter on the sub-nanometer scale. To go further, scientists can take advantage of wave-particle duality, and use electrons instead of light. The higher the energy of the electrons, the smaller their wavelength. The best electron microscopes can see objects measured in angstroms, not just nanometers.

Less familiar?

A particle collider pushes this even further. The Large Hadron Collider accelerates protons until they have 6.5 tera-electron-volts (TeV) of energy. That might be an unfamiliar type of unit, but if you’ve seen it before you can run the numbers, and estimate that this means the LHC can see details below the attometer scale. That’s a quintillionth of a meter, or a hundred million times smaller than an atom.
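Running those numbers is a one-liner if you know the handy conversion factor $\hbar c \approx 0.197$ GeV·fm. This is only a rough estimate of the probe’s wavelength, ignoring all the details of the actual collisions:

```python
hbar_c = 0.1973       # hbar times c, in GeV * femtometers
beam_energy = 6500.0  # one LHC proton beam, in GeV (6.5 TeV)

wavelength_fm = hbar_c / beam_energy   # rough wavelength in femtometers
wavelength_m = wavelength_fm * 1e-15   # 1 femtometer = 1e-15 meters
print(f"{wavelength_m:.1e} m")         # about 3e-20 m, below an attometer (1e-18 m)
```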

A microscope isn’t just light, though, and a collider isn’t just high-energy protons. If it were, we could just wait and look at the sky. So-called cosmic rays are protons and other particles that travel to us from outer space. These can have very high energy: protons with similar energy to those in the LHC hit our atmosphere every day, and rays have been detected that were millions of times more powerful.

People sometimes ask why we can’t just use these cosmic rays to study particle physics. While we can certainly learn some things from cosmic rays, they have a big limitation. They have the “light” part of a microscope, but not the “lens”!

A microscope lens magnifies what you see. Starting from a tiny image, the lens blows it up until it’s big enough that you can see it with your own eyes. Particle colliders have similar technology, using their particle detectors. When two protons collide inside the LHC, they emit a flurry of other particles: photons and electrons, muons and mesons. Each of these particles is too small to see, let alone distinguish with the naked eye. But close to the collision there are detector machines that absorb these particles and magnify their signal. A single electron hitting one of these machines triggers a cascade of more and more electrons, in proportion to the energy of the electron that entered the machine. In the end, you get a strong electrical signal, which you can record with a computer. There are two big machines that do this at the Large Hadron Collider, each with its own independent scientific collaboration to run it. They’re called ATLAS and CMS.

The different layers of the CMS detector, magnifying signals from different types of particles.

So studying small scales needs two things: the right kind of “probe”, like light or protons, and a way to magnify the signal, like a lens or a particle detector. That’s hard to do without a big expensive machine…unless nature is unusually convenient. One interesting possibility is to try to learn about particle physics via astronomy. In the Big Bang particles collided with very high energy, and as the universe has expanded since then those details have been magnified across the sky. That kind of “cosmological collider” has the potential to teach us about physics at much smaller scales than any normal collider could reach. A downside is that, unlike in a collider, we can’t run the experiment over and over again: our “cosmological collider” only ran once. Still, if we want to learn about the very smallest scales, some day that may be our best option.

Alice Through the Parity Glass

When you look into your mirror in the morning, the face looking back at you isn’t exactly your own. Your mirror image is flipped: left-handed if you’re right-handed, and right-handed if you’re left-handed. Your body is not symmetric in the mirror: we say it does not respect parity symmetry. Zoom in, and many of the molecules in your body also have a “handedness” to them: biology is not the same when flipped in a mirror.

What about physics? At first, you might expect the laws of physics themselves to respect parity symmetry. Newton’s laws are the same when reflected in a mirror, and so are Maxwell’s. But one part of physics breaks this rule: the weak nuclear force, the force that causes nuclear beta decay. The weak nuclear force interacts differently with “right-handed” and “left-handed” particles (shorthand for particles that spin counterclockwise or clockwise with respect to their motion). This came as a surprise to most physicists, but it was predicted by Tsung-Dao Lee and Chen-Ning Yang and demonstrated in 1956 by Chien-Shiung Wu, known in her day as the “Queen of Nuclear Research”. The world really does look different when flipped in a mirror.

I gave a lecture on the weak force for the pedagogy course I took a few weeks back. One piece of feedback I got was that the topic wasn’t very relatable. People wanted to know why they should care about the handedness of the weak force, they wanted to hear about “real-life” applications. Once scientists learned that the weak force didn’t respect parity, what did that let us do?

Thinking about this, I realized this is actually a pretty tricky story to tell. With enough time and background, I could explain that the “handedness” of the Standard Model is a major constraint on attempts to unify physics, ruling out a lot of the simpler options. That’s hard to fit in a short lecture though, and it still isn’t especially close to “real life”.

Then I realized I don’t need to talk about “real life” to give a “real-life example”. People explaining relativity get away with science fiction scenarios, spaceships on voyages to black holes. The key isn’t to be familiar, just relatable. If I can tell a story (with people in it), then maybe I can make this work.

All I need, then, is a person who cares a lot about the world behind a mirror.

Curiouser and curiouser…

When Alice goes through the looking glass in the novel of that name, she enters a world flipped left-to-right, a world with its parity inverted. Following Alice, we have a natural opportunity to explore such a world. Others have used this to explore parity symmetry in biology: for example, a side-plot in Alan Moore’s League of Extraordinary Gentlemen sees Alice come back flipped, and starve when she can’t process mirror-reversed nutrients. I haven’t seen it explored for physics, though.

In order to make this story work, we have to get Alice to care about the weak nuclear force. The most familiar thing the weak force does is cause beta decay. And the most familiar thing that undergoes beta decay is a banana. Bananas contain radioactive potassium, which can transform to calcium by emitting an electron and an anti-electron-neutrino.

The radioactive potassium from a banana doesn’t stay in the body very long, only a few hours at most. But if Alice was especially paranoid about radioactivity, maybe she would want to avoid eating bananas. (We shouldn’t tell her that other foods contain potassium too.) If so, she might view the looking glass as a golden opportunity, a chance to eat as many bananas as she likes without worrying about radiation.

Does this work?

A first problem: can Alice even eat mirror-reversed bananas? I told you many biological molecules have handedness, which led Alan Moore’s version of Alice to starve. If we assume, unlike Moore, that Alice comes back in her original configuration and survives, we should still ask if she gets any benefit out of the bananas in the looking glass.

Researching this, I found that the main thing that makes bananas taste “banana-ish”, isoamyl acetate, does not have handedness: mirror bananas will still taste like bananas. Fructose, a sugar in bananas, does have handedness however: it isn’t the same when flipped in a mirror. Chatting with a chemist, the impression I got was that this isn’t a total loss: often, flipping a sugar results in another, different sugar. A mirror banana might still taste sweet, but less so. Overall, it may still be worth eating.

The next problem is a tougher one: flipping a potassium atom doesn’t actually make it immune to the weak force. The weak force only interacts with left-handed particles and right-handed antiparticles: in beta decay, it transforms a left-handed down quark to a left-handed up quark, producing a left-handed electron and a right-handed anti-neutrino.

Alice would have been fine if all of the quarks in potassium were left-handed, but they aren’t: an equal number are right-handed, so the mirror weak force will still act on them, and they will still undergo beta decay. Actually, it’s worse than that: quarks, and massive particles in general, don’t actually have a definite handedness. If you speed up enough to catch up to a quark and pass it, then from your perspective it’s now going in the opposite direction, and its handedness is flipped. The only particles with definite handedness are massless particles: those go at the speed of light, so you can never catch up to them. Another way to think about this is that quarks get their mass from the Higgs field, and this happens because the Higgs lets left- and right-handed quarks interact. What we call the quark’s mass is in some sense just left- and right-handed quarks constantly mixing back and forth.

Alice does have the opportunity to do something interesting here, if she can somehow capture the anti-neutrinos from those bananas. Our world appears to only have left-handed neutrinos and right-handed anti-neutrinos. This seemed reasonable when we thought neutrinos were massless, but now we know neutrinos have a (very small) mass. As a result, the hunt is on for right-handed neutrinos or left-handed anti-neutrinos: if we can measure them, we could fix one of the lingering mysteries of the Standard Model. With this in mind, Alice has the potential to really confuse some particle physicists, giving them some left-handed anti-neutrinos from beyond the looking-glass.

It turns out there’s a problem with even this scheme, though. The problem is a much wider one: the whole story is physically inconsistent.

I’d been acting like Alice can pass back and forth through the mirror, carrying all her particles with her. But what are “her particles”? If she carries a banana through the mirror, you might imagine the quarks in the potassium atoms carry over. But those quarks are constantly exchanging other quarks and gluons, as part of the strong force holding them together. They’re also exchanging photons with electrons via the electromagnetic force, and they’re also exchanging W bosons via beta decay. In quantum field theory, all of this is in some sense happening at once, an infinite sum over all possible exchanges. It doesn’t make sense to just carve out one set of particles and plug them in to different fields somewhere else.

If we actually wanted to describe a mirror like Alice’s looking glass in physics, we’d want to do it consistently. This is similar to how physicists think of time travel: you can’t go back in time and murder your grandparents because your whole path in space-time has to stay consistent. You can only go back and do things you “already did”. We treat space in a similar way to time. A mirror like Alice’s imposes a condition, that fields on one side are equal to their mirror image on the other side. Conditions like these get used in string theory on occasion, and they have broad implications for physics on the whole of space-time, not just near the boundary. The upshot is that a world with a mirror like Alice’s in it would be totally different from a world without the looking glass: the weak force as we know it would not exist.

So unfortunately, I still don’t have a good “real life” story for a class about parity symmetry. It’s fun trying to follow Alice through a parity transformation, but there are a few too many problems for the tale to make any real sense. Feel free to suggest improvements!

Electromagnetism Is the Weirdest Force

For a long time, physicists only knew about two fundamental forces: electromagnetism, and gravity. Physics students follow the same path, studying Newtonian gravity, then E&M, and only later learning about the other fundamental forces. If you’ve just recently heard about the weak nuclear force and the strong nuclear force, it can be tempting to think of them as just slight tweaks on electromagnetism. But while that can be a helpful way to start, in a way it’s precisely backwards. Electromagnetism is simpler than the other forces, that’s true. But because of that simplicity, it’s actually pretty weird as a force.

The weirdness of electromagnetism boils down to one key reason: the electromagnetic field has no charge.

Maybe that sounds weird to you: if you’ve done anything with electromagnetism, you’ve certainly seen charges. But while you’ve calculated the field produced by a charge, the field itself has no charge. You can specify the positions of some electrons and not have to worry that the electric field will introduce new charges you didn’t plan. Mathematically, this means your equations are linear in the field, and thus not all that hard to solve.

The other forces are different. The strong nuclear force has three types of charge, dubbed red, green, and blue. Not just the quarks but the field itself carries charge under this system, making the equations that describe it non-linear.

A depiction of a singlet state

Those properties mean that you can’t just think of the strong force as a push or pull between charges, like you could with electromagnetism. The strong force doesn’t just move quarks around, it can change their color, exchanging charge between the quark and the field. That’s one reason why when we’re more careful we refer to it as not the strong force, but the strong interaction.
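Schematically, the difference shows up right in the equations. The photon’s field strength is built linearly out of the field $A_\mu$, while the gluons’ field strength has an extra piece where the field interacts with itself (this is the standard Yang-Mills expression, with $g$ the strong coupling and $f^{abc}$ a fixed set of numerical constants):

$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu, \qquad G^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g\, f^{abc} A^b_\mu A^c_\nu.$

That last term is the field carrying charge: it feeds back into its own equations of motion, which is exactly what makes them non-linear.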

The weak force also makes more sense when thought of as an interaction. It can change even more properties of particles, turning different flavors of quarks and leptons into each other, resulting in, among other phenomena, nuclear beta decay. It would be even more like the strong force, but the Higgs field screws that up, stirring together two more-fundamental forces and spitting out the weak force and electromagnetism as the mix. The result ties them together in weird ways: for example, it means that the weak field can actually carry an electric charge.

Interactions like the strong and weak forces are much more “normal” for particle physicists: if you ask us to picture a random fundamental force, chances are it will look like them. It won’t typically look like electromagnetism, the weird “degenerate” case with a field that doesn’t even have a charge. So despite how familiar electromagnetism may be to you, don’t take it as your model of what a fundamental force should look like: of all the forces, it’s the simplest and weirdest.

Doing Difficult Things Is Its Own Reward

Does antimatter fall up, or down?

Technically, we don’t know yet. The ALPHA-g experiment would have been the first to check this, making anti-hydrogen by trapping anti-protons and positrons in a long tube and seeing which way it falls. While they got most of their setup working, the LHC complex shut down before they could finish. It starts up again next month, so we should have our answer soon.

That said, for most theorists’ purposes, we absolutely do know: antimatter falls down. Antimatter is one of the cleanest examples of a prediction from pure theory that was confirmed by experiment. When Paul Dirac first tried to write down an equation that described electrons, he found the math forced him to add another particle with the opposite charge. With no such particle in sight, he speculated it could be the proton (this doesn’t work, they need the same mass), before Carl D. Anderson discovered the positron in 1932.

The same math that forced Dirac to add antimatter also tells us which way it falls. There’s a bit more involved, in the form of general relativity, but the recipe is pretty simple: we know how to take an equation like Dirac’s and add gravity to it, and we have enough practice doing it in different situations that we’re pretty sure it’s the right way to go. Pretty sure doesn’t mean 100% sure: talk to the right theorists, and you’ll probably find a proposal or two in which antimatter falls up instead of down. But they tend to be pretty weird proposals, from pretty weird theorists.

Ok, but if those theorists are that “weird”, that outside the mainstream, why does an experiment like ALPHA-g exist? Why does it happen at CERN, one of the flagship facilities for all of mainstream particle physics?

This gets at a misconception I occasionally hear from critics of the physics mainstream. They worry about groupthink among mainstream theorists, the physics community dismissing good ideas just because they’re not trendy (you may think I did that just now, for antigravity antimatter!) They expect this to result in a self-fulfilling prophecy where nobody tests ideas outside the mainstream, so they find no evidence for them, so they keep dismissing them.

The mistake of these critics is in assuming that what gets tested has anything to do with what theorists think is reasonable.

Theorists talk to experimentalists, sure. We motivate them, give them ideas and justification. But ultimately, people do experiments because they can do experiments. I watched a talk about the ALPHA experiment recently, and one thing that struck me was how so many different techniques play into it. They make antiprotons using a proton beam from the accelerator, slow them down with magnetic fields, and cool them with lasers. They trap their antihydrogen in an extremely precise vacuum, and confirm it’s there with particle detectors. The whole setup is a blend of cutting-edge accelerator physics and cutting-edge tricks for manipulating atoms. At its heart, ALPHA-g feels like its primary goal is to stress-test all of those tricks: to push the state of the art in a dozen experimental techniques in order to accomplish something remarkable.

And so even if the mainstream theorists don’t care, ALPHA will keep going. It will keep getting funding, it will keep getting visited by celebrities and inspiring pop fiction. Because enough people recognize that doing something difficult can be its own reward.

In my experience, this motivation applies to theorists too. Plenty of us will dismiss this or that proposal as unlikely or impossible. But give us a concrete calculation, something that lets us use one of our flashy theoretical techniques, and the tune changes. If we’re getting the chance to develop our tools, and get a paper out of it in the process, then sure, we’ll check your wacky claim. Why not?

I suspect critics of the mainstream would have a lot more success with this kind of pitch-based approach. If you can find a theorist who already has the right method, who’s developing and extending it and looking for interesting applications, then make your pitch: tell them how they can answer your question just by doing what they do best. They’ll think of it as a chance to disprove you, and you should let them, that’s the right attitude to take as a scientist anyway. It’ll work a lot better than accusing them of hogging the grant money.