
The arXiv SciComm Challenge

Fellow science communicators, think you can explain everything that goes on in your field? If so, I have a challenge for you. Pick a day, and go through all the new papers on arXiv.org in a single area. For each one, try to give a general-audience explanation of what the paper is about. To make it easier, you can ignore cross-listed papers. If your field doesn’t use arXiv, consider whether you can do the challenge with another appropriate site.

I’ll start. I’m looking at papers in the “High Energy Physics – Theory” area, announced 6 Jan, 2022. I’ll warn you in advance that I haven’t read these papers, just their abstracts, so apologies if I get your paper wrong!

arXiv:2201.01303 : Holographic State Complexity from Group Cohomology

This paper says it is a contribution to a Proceedings. That means it is based on a talk given at a conference. In my field, a talk like this usually won’t be presenting new results, but instead summarizes results in a previous paper. So keep that in mind.

There is an idea in physics called holography, where two theories are secretly the same even though they describe the world with different numbers of dimensions. Usually this involves a gravitational theory in a “box”, and a theory without gravity that describes the sides of the box. The sides turn out to fully describe the inside of the box, much like a hologram looks 3D but can be printed on a flat sheet of paper. Using this idea, physicists have connected some properties of gravity to properties of the theory on the sides of the box. One of those properties is complexity: the complexity of the theory on the sides of the box says something about gravity inside the box, in particular about the size of wormholes. The trouble is, “complexity” is a bit subjective: it’s not clear how to define it rigorously for this type of theory. In this paper, the author studies a theory with a precise mathematical definition, called a topological theory. This theory turns out to have mathematical properties that suggest a well-defined notion of complexity for it.

arXiv:2201.01393 : Nonrelativistic effective field theories with enhanced symmetries and soft behavior

We sometimes describe quantum field theory as quantum mechanics plus relativity. That’s not quite true though, because it is possible to define a quantum field theory that doesn’t obey special relativity, a non-relativistic theory. Physicists do this if they want to describe a system moving much slower than the speed of light: it gets used sometimes for nuclear physics, and sometimes for modeling colliding black holes.

In particle physics, a “soft” particle is one with almost no momentum. We can classify theories based on how they behave when a particle becomes more and more soft. In normal quantum field theories, special behavior when a particle becomes soft is often due to a symmetry of the theory, where the theory looks the same even if something changes. This paper shows that the rules are different for non-relativistic theories: symmetry alone is not enough, and they must satisfy extra requirements to have special soft behavior. They “bootstrap” a few theories, using some general restrictions to find them without first knowing how they work (“pulling them up by their own bootstraps”), and show that the theories they find are in a certain sense unique: the only theories of their kind.

arXiv:2201.01552 : Transmutation operators and expansions for 1-loop Feynman integrands

In recent years, physicists in my sub-field have found new ways to calculate the probability that particles collide. One of these methods describes ordinary particles in a way resembling string theory, and from this discovered a whole “web” of theories that were linked together by small modifications of the method. This method originally worked only for the simplest Feynman diagrams, the “tree” diagrams that correspond to classical physics, but was extended to the next-simplest diagrams, diagrams with one “loop” that start incorporating quantum effects.

This paper concerns a particular spinoff of this method, one that can find relationships between certain one-loop calculations in a particularly efficient way. It lets you express calculations of particle collisions in a variety of theories in terms of collisions in a very simple theory. Unlike the original method, it doesn’t rely on any particular picture of how these collisions work, whether Feynman diagrams or strings.

arXiv:2201.01624 : Moduli and Hidden Matter in Heterotic M-Theory with an Anomalous U(1) Hidden Sector

In string theory (and its more sophisticated cousin M theory), our four-dimensional world is described as a world with more dimensions, where the extra dimensions are twisted up so that they cannot be detected. The shape of the extra dimensions influences the kinds of particles we can observe in our world. That shape is described by variables called “moduli”. If those moduli are stable, then the properties of particles we observe would be fixed, otherwise they would not be. In general it is a challenge in string theory to stabilize these moduli and get a world like what we observe.

This paper discusses shapes that give rise to a “hidden sector”, a set of particles that are disconnected from the particles we know so that they are hard to observe. Such particles are often proposed as a possible explanation for dark matter. This paper calculates, for a particular kind of shape, what the masses of different particles are, as well as how different kinds of particles can decay into each other. For example, a particle that causes inflation (the accelerating expansion of the universe) can decay into effects on the moduli and dark matter. The paper also shows how some of the moduli are made stable in this picture.

arXiv:2201.01630 : Chaos in Celestial CFT

One variant of the holography idea I mentioned earlier is called “celestial” holography. In this picture, the sides of the box are an infinite distance away: a “celestial sphere” depicting the angles at which particles travel after they collide, in the same way a star chart depicts the angles between stars. Recent work has shown that there is something like a sensible theory that describes physics on this celestial sphere, one that contains all the information about what happens inside.

This paper shows that the celestial theory has a property called quantum chaos. In physics, a theory is said to be chaotic if it depends very precisely on its initial conditions, so that even a small change will result in a large change later (the usual metaphor is a butterfly flapping its wings and causing a hurricane). This kind of behavior appears to be present in this theory.
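The butterfly metaphor is easy to make concrete in a classical setting. Here is a minimal sketch using the logistic map, a standard textbook example of chaos; this is just a classical analogy for sensitivity to initial conditions, not the quantum-chaos diagnostics the paper itself uses.

```python
# A classical toy model of chaos: the logistic map x -> r*x*(1-x) with r = 4.
# Two trajectories that start a billionth apart end up completely different.

def separations(x0, y0, steps, r=4.0):
    """Iterate two logistic-map trajectories and track how far apart they drift."""
    seps = []
    x, y = x0, y0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        seps.append(abs(x - y))
    return seps

# Start two trajectories separated by one part in a billion.
seps = separations(0.2, 0.2 + 1e-9, 50)
print(seps[4], max(seps))  # still tiny after 5 steps, order one within 50
```

The separation grows roughly exponentially (the map’s Lyapunov exponent is positive), which is exactly the "small change now, large change later" behavior described above.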

arXiv:2201.01657 : Calculations of Delbrück scattering to all orders in αZ

Delbrück scattering is an effect where the nuclei of heavy elements like lead can deflect high-energy photons, as a consequence of quantum field theory. This effect is apparently tricky to calculate, and previous calculations have involved approximations. This paper finds a way to calculate the effect without those approximations, which should let it match better with experiments.

(As an aside, I’m a little confused by the claim that they’re going to all orders in αZ when it looks like they just consider one-loop diagrams…but this is probably just my ignorance, this is a corner of the field quite distant from my own.)

arXiv:2201.01674 : On Unfolded Approach To Off-Shell Supersymmetric Models

Supersymmetry is a relationship between two types of particles: fermions, which typically make up matter, and bosons, which are usually associated with forces. In realistic theories this relationship is “broken” and the two types of particles have different properties, but theoretical physicists often study models where supersymmetry is “unbroken” and the two types of particles have the same mass and charge. This paper finds a new way of describing some theories of this kind that reorganizes them in an interesting way, using an “unfolded” approach in which aspects of the particles that would normally be combined are given their own separate variables.

(This is another one I don’t know much about, this is the first time I’d heard of the unfolded approach.)

arXiv:2201.01679 : Geometric Flow of Bubbles

String theorists have conjectured that only some types of theories can be consistently combined with a full theory of quantum gravity, while others live in a “swampland” of non-viable theories. One set of conjectures characterizes this swampland in terms of “flows” in which theories with different geometry can flow into each other. The properties of these flows are supposed to be related to which theories are or are not in the swampland.

This paper writes down equations describing these flows, and applies them to some toy model “bubble” universes.

arXiv:2201.01697 : Graviton scattering amplitudes in first quantisation

This paper is a pedagogical one, introducing graduate students to a topic rather than presenting new research.

Usually in quantum field theory we do something called “second quantization”, thinking about the world not in terms of particles but in terms of fields that fill all of space and time. However, sometimes one can instead use “first quantization”, which is much more similar to ordinary quantum mechanics. There you think of a single particle traveling along a “world-line”, and calculate the probability it interacts with other particles in particular ways. This approach has recently been used to calculate interactions of gravitons, particles related to the gravitational field in the same way photons are related to the electromagnetic field. The approach has some advantages in terms of simplifying the results, which are described in this paper.

Classicality Has Consequences

Last week, I mentioned some interesting new results in my corner of physics. I’ve now finally read the two papers and watched the recorded talk, so I can satisfy my frustrated commenters.

Quantum mechanics is a very cool topic and I am much less qualified than you would expect to talk about it. I use quantum field theory, which is based on quantum mechanics, so in some sense I use quantum mechanics every day. However, most of the “cool” implications of quantum mechanics don’t come up in my work. All the debates about whether measurement “collapses the wavefunction” are irrelevant when the particles you measure get absorbed in a particle detector, never to be seen again. And while there are deep questions about how a classical world emerges from quantum probabilities, they don’t matter so much when all you do is calculate those probabilities.

They’ve started to matter, though. That’s because quantum field theorists like me have recently started working on a very different kind of problem: trying to predict the output of gravitational wave telescopes like LIGO. It turns out you can do almost the same kind of calculation we’re used to: pretend two black holes or neutron stars are sub-atomic particles, and see what happens when they collide. This trick has grown into a sub-field in its own right, one I’ve dabbled in a bit myself. And it’s gotten my kind of physicists to pay more attention to the boundary between classical and quantum physics.

The thing is, the waves that LIGO sees really are classical. Any quantum gravity effects there are tiny, undetectably tiny. And while this doesn’t have the implications an expert might expect (we still need loop diagrams), it does mean that we need to take our calculations to a classical limit.

Figuring out how to do this has been surprisingly delicate, and full of unexpected insight. A recent example involves two papers, one by Andrea Cristofoli, Riccardo Gonzo, Nathan Moynihan, Donal O’Connell, Alasdair Ross, Matteo Sergola, and Chris White, and one by Ruth Britto, Riccardo Gonzo, and Guy Jehu. At first I thought these were two groups happening on the same idea, but then I noticed Riccardo Gonzo on both lists, and realized the papers were covering different aspects of a shared story. There is another group who happened upon the same story: Paolo Di Vecchia, Carlo Heissenberg, Rodolfo Russo and Gabriele Veneziano. They haven’t published yet, so I’m basing this on the Gonzo et al papers.

The key question each group asked was, what does it take for gravitational waves to be classical? One way to ask the question is to pick something you can observe, like the strength of the field, and calculate its uncertainty. Classical physics is deterministic: if you know the initial conditions exactly, you know the final conditions exactly. Quantum physics is not. What should happen is that if you calculate a quantum uncertainty and then take the classical limit, that uncertainty should vanish: the observation should become certain.

Another way to ask is to think about the wave as made up of gravitons, particles of gravity. Then you can ask how many gravitons are in the wave, and how they are distributed. It turns out that you expect them to be in a coherent state, like a laser, one with a very specific distribution called a Poisson distribution: a distribution in some sense right at the border between classical and quantum physics.

The results of both types of questions were as expected: the gravitational waves are indeed classical. To make this work, though, the quantum field theory calculation needs to have some surprising properties.

If two black holes collide and emit a gravitational wave, you could depict it like this:

All pictures from arXiv:2112.07556

where the straight lines are black holes, and the squiggly line is a graviton. But since gravitational waves are made up of multiple gravitons, you might ask, why not depict it with two gravitons, like this?

It turns out that diagrams like that are a problem: they mean your two gravitons are correlated, which is not allowed in a Poisson distribution. In the uncertainty picture, they also would give you non-zero uncertainty. Somehow, in the classical limit, diagrams like that need to go away.

And at first, it didn’t look like they do. You can try to count how many powers of Planck’s constant show up in each diagram. The authors do that, and it certainly doesn’t look like it goes away:

An example from the paper with Planck’s constants sprinkled around

Luckily, these quantum field theory calculations have a knack for surprising us. Calculate each individual diagram, and things look hopeless. But add them all together, and they miraculously cancel. In the classical limit, everything combines to give a classical result.

You can do this same trick for diagrams with more graviton particles, as many as you like, and each time it ought to keep working. You get an infinite set of relationships between different diagrams, relationships that have to hold to get sensible classical physics. From thinking about how the quantum and classical are related, you’ve learned something about calculations in quantum field theory.

That’s why these papers caught my eye. A chunk of my sub-field needs to learn more and more about the relationship between quantum and classical physics, and that may have implications for the rest of us too. In the future, I might get a bit more qualified to talk about some of the very cool implications of quantum mechanics.

Discovering New Elements, Discovering New Particles

In school, you learn that the world around you is made up of chemical elements. There’s oxygen and nitrogen in the air, hydrogen and oxygen in water, sodium and chlorine in salt, and carbon in all living things. Other elements are more rare. Often, that’s because they’re unstable, due to radioactivity, like the plutonium in a bomb or americium in a smoke detector. The heaviest elements are artificial, produced in tiny amounts by massive experiments. In 2002, the heaviest element yet was found at the Joint Institute for Nuclear Research near Moscow. It was later named Oganesson, after the scientist who figured out how to make these heavy elements, Yuri Oganessian. To keep track of the different elements, we organize them in the periodic table like this:

In that same school, you probably also learn that the elements aren’t quite so elementary. Unlike the atoms imagined by the ancient Greeks, our atoms are made of smaller parts: protons and neutrons, surrounded by a cloud of electrons. They’re what give the periodic table its periodic structure, the way it repeats from row to row, with each different element having a different number of protons.

If your school is a bit more daring, you also learn that protons and neutrons themselves aren’t elementary. Each one is made of smaller particles called quarks: a proton of two “up quarks” and one “down quark”, and a neutron of two “down” and one “up”. Up quarks, down quarks, and electrons are all what physicists call fundamental particles, and they make up everything you see around you. Just like the chemical elements, some fundamental particles are more obscure than others, and the heaviest ones are all very unstable, produced temporarily by particle collider experiments. The most recent particle to be discovered was in 2012, when the Large Hadron Collider near Geneva found the Higgs boson. The Higgs boson is named after Peter Higgs, one of those who predicted it back in the 60’s. All the fundamental particles we know are part of something called the Standard Model, which we sometimes organize in a table like this:

So far, these stories probably sound similar. The experiments might not even sound that different: the Moscow experiment shoots a beam of high-energy calcium atoms at a target of heavy radioactive elements, while the Geneva one shoots a beam of high-energy protons at another beam of high-energy protons. With all those high-energy beams, what’s the difference really?

In fact there is a big difference between chemical elements and fundamental particles, and between the periodic table and the Standard Model: the latter are fundamental, the former are not.

When they made new chemical elements, scientists needed to start with a lot of protons and neutrons. That’s why they used calcium atoms in their beam, and even heavier elements as their target. We know that heavy elements are heavy because they contain more protons and neutrons, and we can use the arrangement of those protons and neutrons to try to predict their properties. That’s why, even though only five or six oganesson atoms have been detected, scientists have some idea what kind of material it would make. Oganesson sits in the noble gas column of the periodic table, alongside helium, neon, and radon. But calculations predict it is actually a solid at room temperature. What’s more, it’s expected to be able to react with other elements, something the other noble gases are very reluctant to do.

The Standard Model has patterns, just like the chemical elements. Each matter particle is one of three “generations”, each heavier and more unstable: for example, electrons have heavier relatives called muons, and still heavier ones called tauons. But unlike with the elements, we don’t know where these patterns come from. We can’t explain them with smaller particles, like we could explain the elements with protons and neutrons. We think the Standard Model particles might actually be fundamental, not made of anything smaller.

That’s why when we make them, we don’t need a lot of other particles: just two protons, each made of three quarks, is enough. With that, we can make not just new arrangements of quarks, but new particles altogether. Some are even heavier than the protons we started with: the Higgs boson is more than a hundred times as heavy as a proton! We can do this because, in particle physics, mass isn’t conserved: mass is just another type of energy, and you can turn one type of energy into another.

Discovering new elements is hard work, but discovering new particles is on another level. It’s hard to calculate which elements are stable or unstable, and what their properties might be. But we know the rules, and with enough skill and time we could figure it out. In particle physics, we don’t know the rules. We have some good guesses, simple models to solve specific problems, and sometimes, like with the Higgs, we’re right. But despite making many more than five or six Higgs bosons, we still aren’t sure it has the properties we expect. We don’t know the rules. Even with skill and time, we can’t just calculate what to expect. We have to discover it.

Searching for Stefano

On Monday, Quanta magazine released an article on a man who transformed the way we do particle physics: Stefano Laporta. I’d tipped them off that Laporta would make a good story: someone who came up with the bread-and-butter algorithm that fuels all of our computations, then vanished from the field for ten years, returning at the end with an 1,100 digit masterpiece. There’s a resemblance to Searching for Sugar Man, fans and supporters baffled that their hero is living in obscurity.

If anything, I worry I under-sold the story. When Quanta interviewed me, it was clear they were looking for ties to well-known particle physics results: was Laporta’s work necessary for the Higgs boson discovery, or linked to the controversy over the magnetic moment of the muon? I was careful, perhaps too careful, in answering. The Higgs, to my understanding, didn’t require so much precision for its discovery. As for the muon, the controversial part is a kind of calculation that wouldn’t use Laporta’s methods, while the un-controversial part was found numerically by a group that doesn’t use his algorithm either.

With more time now, I can make a stronger case. I can trace Laporta’s impact, show who uses his work and for what.

In particle physics, we have a lovely database called INSPIRE that lists all our papers. Here is Laporta’s page, his work sorted by number of citations. When I look today, I find his most cited paper, the one that first presented his algorithm, at the top, with a delightfully apt 1,001 citations. Let’s listen to a few of those 1,001 tales, and see what they tell us.

Once again, we’ll sort by citations. The top paper, “Higgs boson production at hadron colliders in NNLO QCD”, is from 2002. It computes the chance that a particle collider like the LHC could produce a Higgs boson. It in turn has over a thousand citations, headlined by two from the ATLAS and CMS collaborations: “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC” and “Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC”. Those are the papers that announced the discovery of the Higgs, each with more than twelve thousand citations. Later in the list, there are design reports: discussions of why the collider experiments are built a certain way. So while it’s true that the Higgs boson could be seen clearly from the data, Laporta’s work still had a crucial role: with his algorithm, we could reassure experimenters that they really found the Higgs (not something else), and even more importantly, help them design the experiment so that they could detect it.

The next paper tells a similar story. A different calculation, with almost as many citations, feeding again into planning and prediction for collider physics.

The next few touch on my own corner of the field. “New Relations for Gauge-Theory Amplitudes” triggered a major research topic in its own right, one with its own conference series. Meanwhile, “Iteration of planar amplitudes in maximally supersymmetric Yang-Mills theory at three loops and beyond” served as a foundation for my own career, among many others. None of this would have happened without Laporta’s algorithm.

After that, more applications: fundamental quantities for collider physics, pieces of math that are used again and again. In particular, they are referenced again and again by the Particle Data Group, who collect everything we know about particle physics.

Further down still, and we get to specific code: FIRE and Reduze, programs made by others to implement Laporta’s algorithm, each with many uses in its own right.

All that, just from one of Laporta’s papers.

His ten-year magnum opus is more recent, and has fewer citations: checking now, just 139. Still, there are stories to tell there too.

I mentioned 1,100 digits earlier, and this might confuse some of you. The most precise prediction in particle physics, the magnetic behavior of the electron, has ten digits of precision. Laporta’s calculation didn’t change that, because what he calculated isn’t the only contribution: he calculated Feynman diagrams with four “loops”, which is its own approximation, one limited in precision by what might be contributed by more loops. The current result includes Feynman diagrams with five loops as well (known to much less than 1,100 digits), but the diagrams with six or more are unknown, and can only be estimated. The result also depends on measurements, which themselves can’t reach 1,100 digits of precision.
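To give a sense of how this series of loops works: the one-loop contribution to the electron’s anomalous magnetic moment is Schwinger’s famous α/2π, and each extra loop is suppressed by another factor of α/π, roughly a thousandfold. Here is a rough sketch; the coefficients and the measured value are quoted from standard references to limited precision, so treat the exact digits as indicative rather than authoritative.

```python
from math import pi

# Hedged sketch: the electron's anomalous magnetic moment a_e = (g-2)/2
# as a series in alpha/pi, truncated at two loops.
alpha = 1 / 137.035999        # fine-structure constant (approximate)
a_measured = 0.001159652181   # measured value of a_e (approximate)

one_loop = 0.5 * (alpha / pi)              # Schwinger's 1948 result, alpha/(2*pi)
two_loop = -0.328478965 * (alpha / pi)**2  # two-loop coefficient (approximate)

print(one_loop)              # one loop already lands within ~0.2% of experiment
print(one_loop + two_loop)   # two loops agree to roughly five digits
```

Each extra loop buys a few more digits of agreement, which is why the four- and five-loop terms matter for a ten-digit comparison with experiment.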

So why would you want 1,100 digits, then? In a word, mathematics. The calculation involves exotic types of numbers called periods, more complicated cousins of numbers like pi. These numbers are related to each other, often in complicated and surprising ways, ways which are hard to verify without such extreme precision. An older result of Laporta’s inspired the physicist David Broadhurst and mathematician Anton Mellit to conjecture new relations between numbers of this type, relations that were only later proven using cutting-edge mathematics. The new result has inspired mathematicians too: Oliver Schnetz found hints of a kind of “numerical footprint”, special types of numbers tied to the physics of electrons. It’s a topic I’ve investigated myself, something I think could lead to much more efficient particle physics calculations.

In addition to being inspired by Laporta’s work, Broadhurst has advocated for it. He was the one who first brought my attention to Laporta’s story, with a moving description of welcoming him back to the community after his ten-year silence, writing a letter to help him get funding. I don’t have all the details of the situation, but the impression I get is that Laporta had virtually no academic support for those ten years: no salary, no students, having to ask friends elsewhere for access to computer clusters.

When I ask why someone with such an impact didn’t have a professorship, the answer I keep hearing is that he didn’t want to move away from his home town of Bologna. If you aren’t an academic, that won’t sound like much of an explanation: Bologna has a university after all, the oldest in the world. But that isn’t actually a guarantee of anything. Universities hire rarely, according to their own mysterious agenda. I remember another colleague whose wife worked for a big company. They offered her positions in several cities, including New York. They told her that, since New York has many universities, surely her husband could find a job at one of them? We all had a sad chuckle at that.

For almost any profession, a contribution like Laporta’s would let you live anywhere you wanted. That’s not true for academia, and it’s to our loss. By demanding that each scientist be able to pick up and move, we’re cutting talented people out of the field, filtering by traits that have nothing to do with our contributions to knowledge. I don’t know Laporta’s full story. But I do know that doing the work you love in the town you love isn’t some kind of unreasonable request. It’s a request academia should be better at fulfilling.

In Uppsala for Elliptics 2021

I’m in Uppsala in Sweden this week, at an actual in-person conference.

With actual blackboards!

Elliptics started out as a series of small meetings of physicists trying to understand how to make sense of elliptic integrals in calculations of colliding particles. It grew into a full-fledged yearly conference series. I organized last year, which naturally was an online conference. This year though, the stage was set for Uppsala University to host in person.

I should say mostly in person. It’s a hybrid conference, with some speakers and attendees joining on Zoom. Some couldn’t make it because of travel restrictions, or just wanted to be cautious about COVID. But seemingly just as many had other reasons, like teaching schedules or just long distances, that kept them from coming in person. We’re all wondering if this will become a long-term trend, where the flexibility of hybrid conferences lets people attend no matter their constraints.

The hybrid format worked better than expected, but there were still a few kinks. The audio was particularly tricky: it seemed like each day the organizers needed a new microphone setup to take questions. It’s always a little harder to understand someone on Zoom, especially when you’re sitting in an auditorium rather than focused on your own screen. Still, technological experience should make this work better in the future.

Content-wise, the conference began with a “mini-school” of pedagogical talks on particle physics, string theory, and mathematics. I found the mathematical talks by Erik Panzer particularly nice: it’s a topic I still feel quite weak on, and he laid everything out in a very clear way. It seemed like a nice touch to include a “school” element in the conference, though I worry it ate too much into the time.

The rest of the content skewed more mathematical, and more string-theoretic, than these conferences have in the past. The mathematical content ranged from intriguing (including an interesting window into what it takes to get high-quality numerics) to intimidatingly obscure (large commutative diagrams, category theory on the first slide). String theory was arguably under-covered in prior years, but it felt over-covered this year. With the particle physics talks focusing either on general properties with perhaps some connections to elliptics, or on N=4 super Yang-Mills, it felt like we were missing the more “practical” talks from past conferences, where someone was computing something concrete in QCD and told us what they needed. Next year is in Mainz, so maybe those talks will reappear.

Congratulations to Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi!

The 2021 Nobel Prize in Physics was announced this week, awarded to Syukuro Manabe and Klaus Hasselmann for climate modeling and Giorgio Parisi for understanding a variety of complex physical systems.

Before this year’s prize was announced, I remember a few “water cooler chats” about who might win. No guess came close, though. The Nobel committee seems to have settled into a strategy of prizes on a loosely linked “basket” of topics, with half the prize going to a prominent theorist and the other half going to two experimental, observational, or (in this case) computational physicists. It’s still unclear why they’re doing this, but regardless it makes it hard to predict what they’ll do next!

When I read the announcement, my first reaction was, “surely it’s not that Parisi?” Giorgio Parisi is known in my field for the Altarelli-Parisi equations (more properly known as the DGLAP equations, the longer acronym because, as is often the case in physics, the Soviets got there first). These equations are in some sense why the scattering amplitudes I study are ever useful at all. I calculate collisions of individual fundamental particles, like quarks and gluons, but a real particle collider like the LHC collides protons. Protons are messy, interacting combinations of quarks and gluons. When they collide you need not merely the equations describing colliding quarks and gluons, but those that describe their messy dynamics inside the proton, and in particular how those dynamics look different for experiments with different energies. The equation that describes that is the DGLAP equation.
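For readers who like to see the equation itself, the DGLAP equation has a fairly compact schematic form. I’m writing it here at leading order, in the standard textbook presentation, so take the notation as indicative rather than as a quote from any of the original papers:

```latex
% DGLAP evolution at leading order: f_j(x, mu^2) is the density of partons
% (quarks or gluons) of type j carrying a fraction x of the proton's momentum,
% probed at energy scale mu; the P_{ij} are the splitting functions.
\frac{\partial f_i(x,\mu^2)}{\partial \ln\mu^2}
  = \frac{\alpha_s(\mu^2)}{2\pi} \sum_j \int_x^1 \frac{dz}{z}\,
    P_{ij}(z)\, f_j\!\left(\frac{x}{z},\mu^2\right)
```

The left-hand side is exactly the question in the paragraph above: how the messy dynamics inside the proton look different for experiments at different energy scales.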

As it turns out, Parisi is known for a lot more than the DGLAP equation. He is best known for his work on “spin glasses”, models of materials where quantum spins try to line up with each other, never quite settling down. He also worked on a variety of other complex systems, including flocks of birds!

I don’t know as much about Manabe and Hasselmann’s work. I’ve only seen a few talks on the details of climate modeling. I’ve seen plenty of talks on other types of computer modeling, though, from people who model stars, galaxies, or black holes. And from those, I can appreciate what Manabe and Hasselmann did. Based on those talks, I recognize the importance of those first one-dimensional models, a single column of air, especially back in the 60’s when computer power was limited. Even more, I recognize how impressive it is for someone to stay on the forefront of that kind of field, upgrading models for forty years to stay relevant into the 2000’s, as Manabe did. Those talks also taught me about the challenge of coupling different scales: how small effects in churning fluids can add up and affect the simulation, and how hard it is to model different scales at once. To use these effects to discover which models are reliable, as Hasselmann did, is a major accomplishment.

Stop Listing the Amplituhedron as a Competitor of String Theory

The Economist recently had an article (paywalled) that meandered through various developments in high-energy physics. It started out talking about the failure of the LHC to find SUSY, argued this looked bad for string theory (which…not really?) and used it as a jumping-off point to talk about various non-string “theories of everything”. Peter Woit quoted it a few posts back as kind of a bellwether for public opinion on supersymmetry and string theory.

The article was a muddle, but a fairly conventional muddle, explaining or mis-explaining things in roughly the same way as other popular physics pieces. For the most part that didn’t bug me, but one piece of the muddle hit a bit close to home:

The names of many of these [non-string theories of everything] do, it must be conceded, torture the English language. They include “causal dynamical triangulation”, “asymptotically safe gravity”, “loop quantum gravity” and the “amplituhedron formulation of quantum theory”.

I’ve posted about the amplituhedron more than a few times here on this blog. Out of every achievement of my sub-field, it has most captured the public imagination. It’s legitimately impressive, a way to translate calculations of probabilities of collisions of fundamental particles (in a toy model, to be clear) into geometrical objects. What it isn’t, and doesn’t pretend to be, is a theory of everything.

To be fair, the Economist piece admits this:

Most attempts at a theory of everything try to fit gravity, which Einstein describes geometrically, into quantum theory, which does not rely on geometry in this way. The amplituhedron approach does the opposite, by suggesting that quantum theory is actually deeply geometric after all. Better yet, the amplituhedron is not founded on notions of spacetime, or even statistical mechanics. Instead, these ideas emerge naturally from it. So, while the amplituhedron approach does not as yet offer a full theory of quantum gravity, it has opened up an intriguing path that may lead to one.

The reasoning leading up to that concession contains a few misunderstandings, though. The amplituhedron is geometrical, but in a completely different way from how Einstein’s theory of gravity is geometrical: Einstein’s gravity is a theory of space and time, while the amplituhedron’s magic is that it hides space and time behind a seemingly more fundamental mathematics.

This is not to say that the amplituhedron won’t lead to insights about gravity. That’s a big part of what it’s for, in the long-term. Because the amplituhedron hides the role of space and time, it might show the way to theories that lack them altogether, theories where space and time are just an approximation for a more fundamental reality. That’s a real possibility, though not at this point a reality.

Even if you take this possibility completely seriously, though, there’s another problem with the Economist’s description: it’s not clear that this new theory would be a non-string theory!

The main people behind the amplituhedron are pretty positively disposed to string theory. If you asked them, I think they’d tell you that, rather than replacing string theory, they expect to learn more about string theory: to see how it could be reformulated in a way that yields insight about trickier problems. That’s not at all like the other “non-string theories of everything” in that list, which frame themselves as alternatives to, or even opponents of, string theory.

It is a lot like several other research programs, though, like ER=EPR and It from Qubit. Researchers in those programs use physical principles and toy models to say fundamental things about quantum gravity, thinking of space and time as being made up of entangled quantum objects. By that logic, they belong in that list in the article alongside the amplituhedron. The reason they aren’t is obvious if you know where they come from: ER=EPR and It from Qubit are worked on by string theorists, including some of the most prominent ones.

The thing is, any reason to put the amplituhedron on that list is also a reason to include them. The amplituhedron is not a theory of everything, and it is not at present a theory of quantum gravity. It’s a research direction that might shed new insight about quantum gravity. It doesn’t explicitly involve strings, but neither does It from Qubit most of the time. Unless you’re going to describe It from Qubit as a “non-string theory of everything”, you really shouldn’t describe the amplituhedron as one.

The amplituhedron is a really cool idea, one with great potential. It’s not something like loop quantum gravity, or causal dynamical triangulations, and it doesn’t need to be. Let it be what it is, please!

Of Cows and Razors

Last week’s post came up on Reddit, where a commenter made a good point. I said that one of the mysteries of neutrinos is that they might not get their mass from the Higgs boson. This is true, but the commenter rightly points out it’s true of other particles too: electrons might not get their mass from the Higgs. We aren’t sure. The lighter quarks might not get their mass from the Higgs either.

When talking physics with the public, we usually say that electrons and quarks all get their mass from the Higgs. That’s how it works in our Standard Model, after all. But even though we’ve found the Higgs boson, we can’t be 100% sure that it functions the way our model says. That’s because there are aspects of the Higgs we haven’t been able to measure directly. We’ve measured how it affects the heaviest quark, the top quark, but measuring its interactions with other particles will require a bigger collider. Until we have those measurements, the possibility remains open that electrons and quarks get their mass another way. It would be a more complicated way: we know the Higgs does a lot of what the model says, so if it deviates in another way we’d have to add more details, maybe even more undiscovered particles. But it’s possible.

If I wanted to defend the idea that neutrinos are special here, I would point out that neutrino masses, unlike electron masses, are not part of the Standard Model. For electrons, we have a clear “default” way for them to get mass, and that default is in a meaningful way simpler than the alternatives. For neutrinos, every alternative is complicated in some fashion: either adding undiscovered particles, or unusual properties. If we were to invoke Occam’s Razor, the principle that we should always choose the simplest explanation, then for electrons and quarks there is a clear winner. Not so for neutrinos.

I’m not actually going to make this argument. That’s because I’m a bit wary of using Occam’s Razor when it comes to questions of fundamental physics. Occam’s Razor is a good principle to use, if you have a good idea of what’s “normal”. In physics, you don’t.

To illustrate, I’ll tell an old joke about cows and trains. Here’s the version from The Curious Incident of the Dog in the Night-Time:

There are three men on a train. One of them is an economist and one of them is a logician and one of them is a mathematician. And they have just crossed the border into Scotland (I don’t know why they are going to Scotland) and they see a brown cow standing in a field from the window of the train (and the cow is standing parallel to the train). And the economist says, ‘Look, the cows in Scotland are brown.’ And the logician says, ‘No. There are cows in Scotland of which at least one is brown.’ And the mathematician says, ‘No. There is at least one cow in Scotland, of which one side appears to be brown.’

One side of this cow appears to be very fluffy.

If we want to be as careful as possible, the mathematician’s answer is best. But we expect not to have to be so careful. Maybe the economist’s answer, that Scottish cows are brown, is too broad. But we could imagine an agronomist who states “There is a breed of cows in Scotland that is brown”. And I suggest we should find that pretty reasonable. Essentially, we’re using Occam’s Razor: if we want to explain seeing a brown half-cow from a train, the simplest explanation would be that it’s a member of a breed of cows that are brown. It would be less simple if the cow were unique, a brown mutant in a breed of black and white cows. It would be even less simple if only one side of the cow were brown, and the other were another color.

When we use Occam’s Razor in this way, we’re drawing from our experience of cows. Most of the cows we meet are members of some breed or other, with similar characteristics. We don’t meet many mutant cows, or half-colored cows, so we think of those options as less simple, and less likely.

But what kind of experience tells us which option is simpler for electrons, or neutrinos?

The Standard Model is a type of theory called a Quantum Field Theory. We have experience with other Quantum Field Theories: we use them to describe materials, metals and fluids and so forth. Still, it seems a bit odd to say that if something is typical of these materials, it should also be typical of the universe. As another physicist in my sub-field, Nima Arkani-Hamed, likes to say, “the universe is not a crappy metal!”

We could also draw on our experience from other theories in physics. This is a bit more productive, but has other problems. Our other theories are invariably incomplete, that’s why we come up with new theories in the first place…and with so few theories, compared to breeds of cows, it’s unclear that we really have a good basis for experience.

Physicists like to brag that we study the most fundamental laws of nature. Ordinarily, this doesn’t matter as much as we pretend: there’s a lot to discover in the rest of science too, after all. But here, it really makes a difference. Unlike other fields, we don’t know what’s “normal”, so we can’t really tell which theories are “simpler” than others. We can make aesthetic judgements, on the simplicity of the math or the number of fields or the quality of the stories we can tell. If we want to be principled and forego all of that, then we’re left staring into an abyss, a world of bare observations and parameter soup.

If a physicist looks out a train window, will they say that all the electrons they see get their mass from the Higgs? Maybe, still. But they should be careful about it.

Lessons From Neutrinos, Part II

Last week I talked about the history of neutrinos. Neutrinos come in three types, or “flavors”. Electron neutrinos are the easiest: they’re produced alongside electrons and positrons in the different types of beta decay. Electrons have more massive cousins, called muon and tau particles. As it turns out, each of these cousins has a corresponding flavor of neutrino: muon neutrinos, and tau neutrinos.

For quite some time, physicists thought that all of these neutrinos had zero mass.

(If the idea of a particle with zero mass confuses you, think about photons. A particle with zero mass travels, like a photon, at the speed of light. This doesn’t make them immune to gravity: just as no light can escape a black hole, neither can any other massless particle. It turns out that once you take into account Einstein’s general theory of relativity, gravity cares about energy, not just mass.)

Eventually, physicists started to realize they were wrong, and neutrinos had a small non-zero mass after all. The reason they changed their minds might seem a bit strange, though. Physicists didn’t weigh the neutrinos, or measure their speed. Instead, they observed that different flavors of neutrinos transform into each other. We say that they oscillate: electron neutrinos oscillate into muon or tau neutrinos, which oscillate into the other flavors, and so on. Over time, a beam of electron neutrinos will become a beam of mostly tau and muon neutrinos, before becoming a beam of electron neutrinos again.

That might not sound like it has much to do with mass. To understand why it does, you’ll need to learn this post’s lesson:

Lesson 2: Mass is just How Particles Move

Oscillating particles seem like a weird sort of evidence for mass. What would be a more normal kind of evidence?

Those of you who’ve taken physics classes might remember the equation F=ma. Apply a known force to something, see how much it accelerates, and you can calculate its mass. If you’ve had a bit more physics, you’ll know that this isn’t quite the right equation to use for particles close to the speed of light, but that there are other equations we can use in a similar way. In particular, using relativity, we have E^2=p^2 c^2 + m^2 c^4. (At rest, p=0, and we have the famous E=mc^2). This lets us do the same kind of thing: give something a kick and see how it moves.
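To make the “kick and measure” idea concrete, here’s a small numerical sketch of the energy-momentum relation, in units where c = 1 (a convention physicists often use). The particular numbers are just illustrative:

```python
import math

def energy(p, m):
    """Relativistic energy E = sqrt(p^2 + m^2), in units where c = 1."""
    return math.sqrt(p**2 + m**2)

def mass(E, p):
    """Invert the relation: give a particle a 'kick', measure E and p,
    and the mass m = sqrt(E^2 - p^2) comes back out."""
    return math.sqrt(E**2 - p**2)

# At rest (p = 0), the relation reduces to the famous E = m (i.e. E = mc^2):
print(math.isclose(energy(0.0, 0.511), 0.511))  # electron mass in MeV

# Kick the particle, then measure energy and momentum to recover the mass:
m = 0.511   # electron mass, MeV
p = 10.0    # momentum from the "kick", MeV (illustrative)
E = energy(p, m)
print(round(mass(E, p), 3))
```

This is exactly the sense in which mass is “how particles move”: it’s the quantity relating the energy and momentum we measure after the kick.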

So let’s say we do that: we give a particle a kick, and measure it later. I’ll visualize this with a tool physicists use called a Feynman diagram. The line represents a particle traveling from one side to the other, from “kick” to “measurement”:

Because we only measure the particle at the end, we might miss if something happens in between. For example, it might interact with another particle or field, like this:

If we don’t know about this other field, then when we try to measure the particle’s mass we will include interactions like this. As it turns out, this is how the Higgs boson works: the Higgs field interacts with particles like electrons and quarks, changing how they move, so that they appear to have mass.

Quantum particles can do other things too. You might have heard people talk about one particle turning into a pair of temporary “virtual particles”. When people say that, they usually have a diagram in mind like this:

In particle physics, we need to take into account every diagram of this kind, every possible thing that could happen in between “kick” and measurement. The final result isn’t one path or another, but a sum of all the different things that could have happened in between. So when we measure the mass of a particle, we’re including every diagram that’s allowed: everything that starts with our “kick” and ends with our measurement.

Now what if our particle can transform, from one flavor to another?

Now we have a new type of thing that can happen in between “kick” and measurement. And if it can happen once, it can happen more than once:

Remember that, when we measure mass, we’re measuring a sum of all the things that can happen in between. That means our particle could oscillate back and forth between different flavors many many times, and we need to take every possibility into account. Because of that, it doesn’t actually make sense to ask what the mass is for one flavor, for just electron neutrinos or just muon neutrinos. Instead, mass is for the thing that actually moves: an average (actually, a quantum superposition) over all the different flavors, oscillating back and forth any number of times.

When a process like beta decay produces an electron neutrino, the thing that actually moves is a mix (again, a superposition) of particles with these different masses. Because each of these masses responds to the initial “kick” in a different way, you see different proportions of them over time. Try to measure different flavors at the end, and you’ll find different ones depending on when and where you measure. That’s the oscillation effect, and that’s why it means that neutrinos have mass.

It’s a bit more complicated to work out the math behind this, but not unreasonably so: it’s simpler than a lot of other physics calculations. Working through the math, we find that by measuring how long it takes neutrinos to oscillate we can calculate the differences between (squares of) neutrino masses. What we can’t calculate are the masses themselves. We know they’re small: neutrinos travel at almost the speed of light, and our cosmological models of the universe have surprisingly little room for massive neutrinos: too much mass, and our universe would look very different than it does today. But we don’t know much more than that. We don’t even know the order of the masses: you might assume electron neutrinos are on average lighter than muon neutrinos, which are lighter than tau neutrinos…but it could easily be the other way around! We also don’t know whether neutrinos get their mass from the Higgs like other particles do, or if they work in a completely different way.
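For the curious, here’s a sketch of what that math looks like in the simplest, two-flavor approximation (real analyses use three flavors, plus effects from neutrinos passing through matter; the mixing parameters below are illustrative placeholders, not a fit to data). Notice that only the mass-squared difference appears, which is exactly why oscillations tell us mass differences but not the masses themselves:

```python
import math

def survival_probability(L_km, E_GeV, sin2_2theta, dm2_eV2):
    """Two-flavor neutrino survival probability:

    P(nu_a -> nu_a) = 1 - sin^2(2*theta) * sin^2(1.267 * dm2 * L / E)

    with the mass-squared difference dm2 in eV^2, the distance L in km,
    and the energy E in GeV. The masses themselves never appear, only
    their squared difference dm2.
    """
    phase = 1.267 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Illustrative parameters (placeholders, not measured values):
sin2_2theta = 0.85
dm2 = 2.5e-3  # eV^2

# At the source (L = 0), nothing has oscillated yet:
print(survival_probability(0.0, 1.0, sin2_2theta, dm2))  # 1.0

# Farther away, part of the beam has turned into other flavors:
print(round(survival_probability(500.0, 1.0, sin2_2theta, dm2), 3))
```

Measuring how this survival probability varies with distance and energy is how experiments extract the mass-squared differences.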

Unlike other mysteries of physics, we’ll likely have the answer to some of these questions soon. People are already picking through the data from current experiments, seeing if they hint towards one order of masses or the other, or to one or the other way for neutrinos to get their mass. More experiments will start taking data this year, and others are expected to start later this decade. At some point, the textbooks may well have more “normal” mass numbers for each of the neutrinos. But until then, they serve as a nice illustration of what mass actually means in particle physics.

Alice Through the Parity Glass

When you look into your mirror in the morning, the face looking back at you isn’t exactly your own. Your mirror image is flipped: left-handed if you’re right-handed, and right-handed if you’re left-handed. Your body is not symmetric in the mirror: we say it does not respect parity symmetry. Zoom in, and many of the molecules in your body also have a “handedness” to them: biology is not the same when flipped in a mirror.

What about physics? At first, you might expect the laws of physics themselves to respect parity symmetry. Newton’s laws are the same when reflected in a mirror, and so are Maxwell’s. But one part of physics breaks this rule: the weak nuclear force, the force that causes nuclear beta decay. The weak nuclear force interacts differently with “right-handed” and “left-handed” particles (shorthand for particles that spin counterclockwise or clockwise with respect to their motion). This came as a surprise to most physicists, but it was predicted in 1956 by Tsung-Dao Lee and Chen-Ning Yang and demonstrated soon after by Chien-Shiung Wu, known in her day as the “Queen of Nuclear Research”. The world really does look different when flipped in a mirror.

I gave a lecture on the weak force for the pedagogy course I took a few weeks back. One piece of feedback I got was that the topic wasn’t very relatable. People wanted to know why they should care about the handedness of the weak force, they wanted to hear about “real-life” applications. Once scientists learned that the weak force didn’t respect parity, what did that let us do?

Thinking about this, I realized this is actually a pretty tricky story to tell. With enough time and background, I could explain that the “handedness” of the Standard Model is a major constraint on attempts to unify physics, ruling out a lot of the simpler options. That’s hard to fit in a short lecture though, and it still isn’t especially close to “real life”.

Then I realized I don’t need to talk about “real life” to give a “real-life example”. People explaining relativity get away with science fiction scenarios, spaceships on voyages to black holes. The key isn’t to be familiar, just relatable. If I can tell a story (with people in it), then maybe I can make this work.

All I need, then, is a person who cares a lot about the world behind a mirror.

Curiouser and curiouser…

When Alice goes through the looking glass in the novel of that name, she enters a world flipped left-to-right, a world with its parity inverted. Following Alice, we have a natural opportunity to explore such a world. Others have used this to explore parity symmetry in biology: for example, a side-plot in Alan Moore’s League of Extraordinary Gentlemen sees Alice come back flipped, and starve when she can’t process mirror-reversed nutrients. I haven’t seen it explored for physics, though.

In order to make this story work, we have to get Alice to care about the weak nuclear force. The most familiar thing the weak force does is cause beta decay. And the most familiar thing that undergoes beta decay is a banana. Bananas contain radioactive potassium, which can transform to calcium by emitting an electron and an anti-electron-neutrino.

The radioactive potassium from a banana doesn’t stay in the body very long, only a few hours at most. But if Alice was especially paranoid about radioactivity, maybe she would want to avoid eating bananas. (We shouldn’t tell her that other foods contain potassium too.) If so, she might view the looking glass as a golden opportunity, a chance to eat as many bananas as she likes without worrying about radiation.

Does this work?

A first problem: can Alice even eat mirror-reversed bananas? I told you many biological molecules have handedness, which led Alan Moore’s version of Alice to starve. If we assume, unlike Moore, that Alice comes back in her original configuration and survives, we should still ask if she gets any benefit out of the bananas in the looking glass.

Researching this, I found that the main thing that makes bananas taste “banana-ish”, isoamyl acetate, does not have handedness: mirror bananas will still taste like bananas. Fructose, a sugar in bananas, does have handedness, however: it isn’t the same when flipped in a mirror. Chatting with a chemist, the impression I got was that this isn’t a total loss: often, flipping a sugar results in another, different sugar. A mirror banana might still taste sweet, but less so. Overall, it may still be worth eating.

The next problem is a tougher one: flipping a potassium atom doesn’t actually make it immune to the weak force. The weak force only interacts with left-handed particles and right-handed antiparticles: in beta decay, it transforms a left-handed down quark to a left-handed up quark, producing a left-handed electron and a right-handed anti-neutrino.

Alice would have been fine if all of the quarks in potassium were left-handed, but they aren’t: an equal amount are right-handed, so the mirror weak force will still act on them, and they will still undergo beta decay. Actually, it’s worse than that: quarks, and massive particles in general, don’t actually have a definite handedness. If you speed up enough to catch up to a quark and pass it, then from your perspective it’s now going in the opposite direction, and its handedness is flipped. The only particles with definite handedness are massless particles: those go at the speed of light, so you can never catch up to them. Another way to think about this is that quarks get their mass from the Higgs field, and this happens because the Higgs lets left- and right-handed quarks interact. What we call the quark’s mass is in some sense just left- and right-handed quarks constantly mixing back and forth.

Alice does have the opportunity to do something interesting here, if she can somehow capture the anti-neutrinos from those bananas. Our world appears to only have left-handed neutrinos and right-handed anti-neutrinos. This seemed reasonable when we thought neutrinos were massless, but now we know neutrinos have a (very small) mass. As a result, the hunt is on for right-handed neutrinos or left-handed anti-neutrinos: if we can measure them, we could fix one of the lingering mysteries of the Standard Model. With this in mind, Alice has the potential to really confuse some particle physicists, giving them some left-handed anti-neutrinos from beyond the looking-glass.

It turns out there’s a problem with even this scheme, though. The problem is a much wider one: the whole story is physically inconsistent.

I’d been acting like Alice can pass back and forth through the mirror, carrying all her particles with her. But what are “her particles”? If she carries a banana through the mirror, you might imagine the quarks in the potassium atoms carry over. But those quarks are constantly exchanging other quarks and gluons, as part of the strong force holding them together. They’re also exchanging photons with electrons via the electromagnetic force, and they’re also exchanging W bosons via beta decay. In quantum field theory, all of this is in some sense happening at once, an infinite sum over all possible exchanges. It doesn’t make sense to just carve out one set of particles and plug them in to different fields somewhere else.

If we actually wanted to describe a mirror like Alice’s looking glass in physics, we’d want to do it consistently. This is similar to how physicists think of time travel: you can’t go back in time and murder your grandparents because your whole path in space-time has to stay consistent. You can only go back and do things you “already did”. We treat space in a similar way to time. A mirror like Alice’s imposes a condition, that fields on one side are equal to their mirror image on the other side. Conditions like these get used in string theory on occasion, and they have broad implications for physics on the whole of space-time, not just near the boundary. The upshot is that a world with a mirror like Alice’s in it would be totally different from a world without the looking glass: the weak force as we know it would not exist.

So unfortunately, I still don’t have a good “real life” story for a class about parity symmetry. It’s fun trying to follow Alice through a parity transformation, but there are a few too many problems for the tale to make any real sense. Feel free to suggest improvements!