Tag Archives: quantum field theory

At Mikefest

I’m at a conference this week of a very particular type: a birthday conference. When folks in my field turn 60, their students and friends organize a special conference for them, celebrating their research legacy. With COVID restrictions just loosening, my advisor Michael Douglas is getting a last-minute conference. And as one of the last couple students he graduated at Stony Brook, I naturally showed up.

The conference, Mikefest, is at the Institut des Hautes Études Scientifiques, just outside of Paris. Mike was a big supporter of the IHES, putting in a lot of fundraising work for them. Another big supporter, James Simons, was Mike’s employer for a little while after his time at Stony Brook. The conference center we’re meeting in is named for him.

You might have to zoom in to see that, though.

I wasn’t involved in organizing the conference, so it was interesting seeing differences between this and other birthday conferences. Other conferences focus on the birthday prof’s “family tree”: their advisor, their students, and some of their postdocs. We’ve had several talks from Mike’s postdocs, and one from his advisor, but only one from a student. Including him and me, three of Mike’s students are here: another two have had their work mentioned but aren’t speaking or attending.

Most of the speakers have collaborated with Mike, but only for a few papers each. All of them emphasized a broader debt though, for discussions and inspiration outside of direct collaboration. The message, again and again, is that Mike’s work has been broad enough to touch a wide range of people. He’s worked on branes and the landscape of different string theory universes, pure mathematics and computation, neuroscience and recently even machine learning. The talks generally begin with a few anecdotes about Mike, before pivoting into research talks on the speakers’ recent work. The recent-ness of the work is perhaps another difference from some birthday conferences: as one speaker said, this wasn’t just a celebration of Mike’s past, but a “welcome back” after his return from the finance world.

One thing I don’t know is how much this conference might have been limited by coming together on short notice. For other birthday conferences impacted by COVID (and I’m thinking of one in particular), it might be nice to have enough time to have most of the birthday prof’s friends and “academic family” there in person. As-is, though, Mike seems to be having fun regardless.

Happy Birthday Mike!

Geometry and Geometry

Last week, I gave the opening lectures for a course on scattering amplitudes, the things we compute to find probabilities in particle physics. After the first class, one of the students asked me if two different descriptions of these amplitudes, one called CHY and the other called the amplituhedron, were related. There does happen to be a connection, but it’s a bit subtle and indirect, not the sort of thing the student would have been thinking of. Why then, did he think they might be related? Well, he explained, both descriptions are geometric.

If you’ve been following this blog for a while, you’ve seen me talk about misunderstandings. There are a lot of subtle ways a smart student can misunderstand something, ways that can be hard for a teacher to recognize. The right question, or the right explanation, can reveal what’s going on. Here, I think the problem was that there are multiple meanings of geometry.

One of the descriptions the student asked about, CHY, is related to string theory. It describes scattering particles in terms of the path of a length of string through space and time. That path draws out a surface called a world-sheet, showing all the places the string touches on its journey. And that picture, of a wiggly surface drawn in space and time, looks like what most people think of as geometry: a “shape” in a pretty normal sense, which here describes the physics of scattering particles.

The other description, the amplituhedron, also uses geometric objects to describe scattering particles. But the “geometric objects” here are much more abstract. A few of them are familiar: straight lines, the area between them forming shapes on a plane. Most of them, though, are generalizations of this: instead of lines on a plane, they have higher-dimensional planes in higher-dimensional spaces. These too get described as geometry, even though they aren’t the “everyday” geometry you might be familiar with. Instead, they’re a “natural generalization”, something that, once you know the math, is close enough to that “everyday” geometry that it deserves the same name.

This week, two papers presented a totally different kind of geometric description of particle physics. In those papers, “geometric” has to do with differential geometry, the mathematics behind Einstein’s theory of general relativity. The descriptions are geometric because they use the same kind of building-block as that theory: a metric that bends space and time. Once again, this kind of geometry is a natural generalization of the everyday notion, but in yet another direction.

All of these notions of geometry do have some things in common, of course. Maybe you could even write down a definition of “geometry” that includes all of them. But they’re different enough that if I tell you that two descriptions are “geometric”, it doesn’t tell you all that much. It definitely doesn’t tell you the two descriptions are related.

It’s a reasonable misunderstanding, though. It comes from a place where, used to “everyday” geometry, you expect two “geometric descriptions” of something to be similar: shapes moving in everyday space, things you can directly compare. Instead, a geometric description can be many sorts of shape, in many sorts of spaces, emphasizing many sorts of properties. “Geometry” is just a really broad term.

Book Review: The Joy of Insight

There’s something endlessly fascinating about the early days of quantum physics. In a century, we went from a few odd, inexplicable experiments to a practically complete understanding of the fundamental constituents of matter. Along the way the new ideas ended a world war, almost fueled another, and touched almost every field of inquiry. The people lucky enough to be part of this went from familiarly dorky grad students to architects of a new reality. Victor Weisskopf was one of those people, and The Joy of Insight: Passions of a Physicist is his autobiography.

Less well-known today than his contemporaries, Weisskopf made up for it with a front-row seat to basically everything that happened in particle physics. In the late 20’s and early 30’s he went from studying in Göttingen (including a crush on Maria Göppert before a car-owning Joe Mayer snatched her up) to a series of postdoctoral positions that would exhaust even a modern-day physicist, working in Leipzig, Berlin, Copenhagen, Cambridge, Zurich, and Copenhagen again, before fleeing Europe for a faculty position in Rochester, New York. During that time he worked for, studied under, collaborated with, or partied with basically everyone you might have heard of from that period. As a result, this section of the autobiography was my favorite, chock-full of stories, from the well-known (Pauli’s rudeness and mythical tendency to break experimental equipment) to the less well-known (a lab in Milan planned to prank Pauli with a door that would trigger a fake explosion when opened, which worked every time they tested it…and failed when Pauli showed up), to the more personal (including an in-retrospect terrifying visit to the Soviet Union, where they asked him to critique a farming collective!). That era also saw his “almost Nobel”, in his case almost discovering the Lamb Shift.

Despite an “almost Nobel”, Weisskopf was paid pretty poorly when he arrived in Rochester. His story there puts something I’d learned before about another refugee physicist, Hertha Sponer, in a new light. Sponer’s university also didn’t treat her well, and it seemed reminiscent of modern academia. Weisskopf, though, thinks his treatment was tied to his refugee status: that, aware that they had nowhere else to go, universities gave the scientists who fled Europe worse deals than they would have in a Nazi-less world, snapping up talent for cheap. I could imagine this was true for Sponer as well.

Like almost everyone with the relevant expertise, Weisskopf was swept up in the Manhattan project at Los Alamos. There he rose in importance, both in the scientific effort (becoming deputy leader of the theoretical division) and the local community (spending some time on and chairing the project’s “town council”). Like the first sections, this surreal time leads to a wealth of anecdotes, all fascinating. In his descriptions of the life there I can see the beginnings of the kinds of “hiking retreats” physicists would build in later years, like the one at Aspen, that almost seem like attempts to recreate that kind of intense collaboration in an isolated natural place.

After the war, Weisskopf worked at MIT before a stint as director of CERN. He shepherded the facility’s early days, when they were building their first accelerators and deciding what kinds of experiments to pursue. I’d always thought that the “nuclear” in CERN’s name was an artifact of the times, when “nuclear” and “particle” physics were thought of as the same field, but according to Weisskopf the fields were separate and it was already a misnomer when the place was founded. Here the book’s supply of anecdotes becomes a bit thinner, and instead he spends pages on glowing descriptions of people he befriended. The pattern continues after the directorship as his duties get more administrative, spending time as head of the physics department at MIT and working on arms control, some of the latter while a member of the Pontifical Academy of Sciences (which apparently even a Jewish atheist can join). He does work on some science, though, collaborating on the “bag of quarks” model of protons and neutrons. He lives to see the fall of the Berlin Wall, and the end of the book has a bit of 90’s optimism to it, the feeling that finally the conflicts of his life would be resolved. Finally, the last chapter abandons chronology altogether, and is mostly a list of his opinions of famous composers, capped off with a Bohr-inspired musing on the complementary nature of science and the arts, humanities, and religion.

One of the things I found most interesting in this book was actually something that went unsaid. Weisskopf’s most famous student was Murray Gell-Mann, a key player in the development of the theory of quarks (including coining the name). Gell-Mann was famously cultured (in contrast to the boorish-almost-as-affectation Feynman) with wide interests in the humanities, and he seems like exactly the sort of person Weisskopf would have gotten along with. Surprisingly though, he gets no anecdotes in this book, and no glowing descriptions: just a few paragraphs, mostly emphasizing how smart he was. I have to wonder if there was some coldness between them. Maybe Weisskopf had difficulty with a student who became so famous in his own right, or maybe they just never connected. Maybe Weisskopf was just trying to be generous: the other anecdotes in that part of the book are of much less famous people, and maybe Weisskopf wanted to prioritize promoting them, feeling that they were underappreciated.

Weisskopf keeps the physics light to try to reach a broad audience. This means he opts for short explanations, and often these are whatever is easiest to reach for. It creates some interesting contradictions: the way he describes his “almost Nobel” work in quantum electrodynamics is very much the way someone would have described it at the time, but very much not how it would be understood later, and by the time he talks about the bag of quarks model his more modern descriptions don’t cleanly link with what he said earlier. Overall, his goal isn’t really to explain the physics, but to explain the physicists. I enjoyed the book for that: people do it far too rarely, and the result was a really fun read.

Duality and Emergence: When Is Spacetime Not Spacetime?

Spacetime is doomed! At least, so say some physicists. They don’t mean this as a warning, like some comic-book universe-destroying disaster, but rather as a research plan. These physicists believe that what we think of as space and time aren’t the full story, but that they emerge from something more fundamental, so that an ultimate theory of nature might not use space or time at all. Other, grumpier physicists are skeptical. Joined by a few philosophers, they think the “spacetime is doomed” crowd are over-excited and exaggerating the implications of their discoveries. At the heart of the argument is the distinction between two related concepts: duality and emergence.

In physics, sometimes we find that two theories are actually dual: despite seeming different, the patterns of observations they predict are the same. Some of the more popular examples are what we call holographic theories. In these situations, a theory of quantum gravity in some space-time is dual to a theory without gravity describing the edges of that space-time, sort of like how a hologram is a 2D image that looks 3D when you move it. For any question you can ask about the gravitational “bulk” space, there is a matching question on the “boundary”. No matter what you observe, neither description will fail.

If theories with gravity can be described by theories without gravity, does that mean gravity doesn’t really exist? If you’re asking that question, you’re asking whether gravity is emergent. An emergent theory is one that isn’t really fundamental, but instead a result of the interaction of more fundamental parts. For example, hydrodynamics, the theory of fluids like water, emerges from more fundamental theories that describe the motion of atoms and molecules.

(For the experts: I, like most physicists, am talking about “weak emergence” here, not “strong emergence”.)

The “spacetime is doomed” crowd think that not just gravity, but space-time itself is emergent. They expect that distances and times aren’t really fundamental, but a result of relationships that will turn out to be more fundamental, like entanglement between different parts of quantum fields. As evidence, they like to bring up dualities where the dual theories have different concepts of gravity, number of dimensions, or space-time. Using those theories, they argue that space and time might “break down”, and not be really fundamental.

(I’ve made arguments like that in the past too.)

The skeptics, though, bring up an important point. If two theories are really dual, then no observation can distinguish them: they make exactly the same predictions. In that case, say the skeptics, what right do you have to call one theory more fundamental than the other? You can say that gravity emerges from a boundary theory without gravity, but you could just as easily say that the boundary theory emerges from the gravity theory. The whole point of duality is that no theory is “more true” than the other: one might be more or less convenient, but both describe the same world. If you want to really argue for emergence, then your “more fundamental” theory needs to do something extra: to predict something that your emergent theory doesn’t predict.

Sometimes this is a fair objection. There are members of the “spacetime is doomed” crowd who are genuinely reckless about this, who’ll tell a journalist about emergence when they really mean duality. But many of these people are more careful, and have thought more deeply about the question. They tend to have some mix of these two perspectives:

First, if two descriptions give the same results, then do the descriptions matter? As physicists, we have a history of treating theories as the same if they make the same predictions. Space-time itself is a result of this policy: in the theory of relativity, two people might disagree on which one of two events happened first or second, but they will agree on the overall distance in space-time between the two. From this perspective, a duality between a bulk theory and a boundary theory isn’t evidence that the bulk theory emerges from the boundary, but it is evidence that both the bulk and boundary theories should be replaced by an “overall theory”, one that treats bulk and boundary as irrelevant descriptions of the same physical reality. This perspective is similar to an old philosophical theory called positivism: that statements are meaningless if they cannot be derived from something measurable. That theory wasn’t very useful for philosophers, which is probably part of why some philosophers are skeptics of “space-time is doomed”. The perspective has been quite useful to physicists, though, so we’re likely to stick with it.
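
To make “agreeing on the overall distance” concrete, here is the standard space-time interval between two events (a textbook formula, included only as an illustration, not taken from any of the papers discussed here):

$$\Delta s^2 = -c^2\,\Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2$$

Two observers in relative motion can disagree about the time difference and the spatial distance separately, but this particular combination always comes out the same.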

Second, some will say that it’s true that a dual theory is not an emergent theory…but it can be the first step to discover one. In this perspective, dualities are suggestive evidence that a deeper theory is waiting in the wings. The idea would be that one would first discover a duality, then discover situations that break that duality: examples on one side that don’t correspond to anything sensible on the other. Maybe some patterns of quantum entanglement are dual to a picture of space-time, but some are not. (Closer to my sub-field, maybe there’s an object like the amplituhedron that doesn’t respect locality or unitarity.) If you’re lucky, maybe there are situations, or even experiments, that go from one to the other: where the space-time description works until a certain point, then stops working, and only the dual description survives. Some of the models of emergent space-time people study are genuinely of this type, where a dimension emerges in a theory that previously didn’t have one. (For those of you having a hard time imagining this, read my old post about “bubbles of nothing”, then think of one happening in reverse.)

It’s premature to say space-time is doomed, at least as a definite statement. But it is looking like, one way or another, space-time won’t be the right picture for fundamental physics. Maybe that’s because it’s equivalent to another description, a redundant embellishment on an essential theoretical core. Maybe instead it breaks down, and a more fundamental theory could describe more situations. We don’t know yet. But physicists are trying to figure it out.

The arXiv SciComm Challenge

Fellow science communicators, think you can explain everything that goes on in your field? If so, I have a challenge for you. Pick a day, and go through all the new papers on arXiv.org in a single area. For each one, try to give a general-audience explanation of what the paper is about. To make it easier, you can ignore cross-listed papers. If your field doesn’t use arXiv, consider if you can do the challenge with another appropriate site.

I’ll start. I’m looking at papers in the “High Energy Physics – Theory” area, announced 6 Jan, 2022. I’ll warn you in advance that I haven’t read these papers, just their abstracts, so apologies if I get your paper wrong!

arXiv:2201.01303 : Holographic State Complexity from Group Cohomology

This paper says it is a contribution to a Proceedings. That means it is based on a talk given at a conference. In my field, a talk like this usually won’t be presenting new results, but instead summarizes results in a previous paper. So keep that in mind.

There is an idea in physics called holography, where two theories are secretly the same even though they describe the world with different numbers of dimensions. Usually this involves a gravitational theory in a “box”, and a theory without gravity that describes the sides of the box. The sides turn out to fully describe the inside of the box, much like a hologram looks 3D but can be printed on a flat sheet of paper. Using this idea, physicists have connected some properties of gravity to properties of the theory on the sides of the box. One of those properties is complexity: the complexity of the theory on the sides of the box says something about gravity inside the box, in particular about the size of wormholes. The trouble is, “complexity” is a bit subjective: it’s not clear how to give a good definition for it for this type of theory. In this paper, the author studies a theory with a precise mathematical definition, called a topological theory. This theory turns out to have mathematical properties that suggest a well-defined notion of complexity for it.

arXiv:2201.01393 : Nonrelativistic effective field theories with enhanced symmetries and soft behavior

We sometimes describe quantum field theory as quantum mechanics plus relativity. That’s not quite true though, because it is possible to define a quantum field theory that doesn’t obey special relativity, a non-relativistic theory. Physicists do this if they want to describe a system moving much slower than the speed of light: it gets used sometimes for nuclear physics, and sometimes for modeling colliding black holes.

In particle physics, a “soft” particle is one with almost no momentum. We can classify theories based on how they behave when a particle becomes more and more soft. In normal quantum field theories, special behavior when a particle becomes soft is usually due to a symmetry of the theory, a way the theory looks the same even when something changes. This paper shows that symmetry is not enough for non-relativistic theories: they need to satisfy additional requirements to have special soft behavior. The authors “bootstrap” a few theories, using some general restrictions to find them without first knowing how they work (“pulling them up by their own bootstraps”), and show that the theories they find are in a certain sense unique, the only theories of that kind.

arXiv:2201.01552 : Transmutation operators and expansions for 1-loop Feynman integrands

In recent years, physicists in my sub-field have found new ways to calculate the probability that particles collide. One of these methods describes ordinary particles in a way resembling string theory, and from this discovered a whole “web” of theories that were linked together by small modifications of the method. This method originally worked only for the simplest Feynman diagrams, the “tree” diagrams that correspond to classical physics, but was extended to the next-simplest diagrams, diagrams with one “loop” that start incorporating quantum effects.

This paper concerns a particular spinoff of this method, that can find relationships between certain one-loop calculations in a particularly efficient way. It lets you express calculations of particle collisions in a variety of theories in terms of collisions in a very simple theory. Unlike the original method, it doesn’t rely on any particular picture of how these collisions work, either Feynman diagrams or strings.

arXiv:2201.01624 : Moduli and Hidden Matter in Heterotic M-Theory with an Anomalous U(1) Hidden Sector

In string theory (and its more sophisticated cousin M theory), our four-dimensional world is described as a world with more dimensions, where the extra dimensions are twisted up so that they cannot be detected. The shape of the extra dimensions influences the kinds of particles we can observe in our world. That shape is described by variables called “moduli”. If those moduli are stable, then the properties of particles we observe would be fixed, otherwise they would not be. In general it is a challenge in string theory to stabilize these moduli and get a world like what we observe.

This paper discusses shapes that give rise to a “hidden sector”, a set of particles that are disconnected from the particles we know so that they are hard to observe. Such particles are often proposed as a possible explanation for dark matter. This paper calculates, for a particular kind of shape, what the masses of different particles are, as well as how different kinds of particles can decay into each other. For example, a particle that causes inflation (the accelerating expansion of the universe) can decay into effects on the moduli and dark matter. The paper also shows how some of the moduli are made stable in this picture.

arXiv:2201.01630 : Chaos in Celestial CFT

One variant of the holography idea I mentioned earlier is called “celestial” holography. In this picture, the sides of the box are an infinite distance away: a “celestial sphere” depicting the angles particles go after they collide, in the same way a star chart depicts the angles between stars. Recent work has shown that there is something like a sensible theory that describes physics on this celestial sphere, that contains all the information about what happens inside.

This paper shows that the celestial theory has a property called quantum chaos. In physics, a theory is said to be chaotic if it depends very precisely on its initial conditions, so that even a small change will result in a large change later (the usual metaphor is a butterfly flapping its wings and causing a hurricane). This kind of behavior appears to be present in this theory.
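
As a quick illustration of what “depends very precisely on its initial conditions” means, here is a classical toy example of my own (the logistic map, a standard demonstration of chaos; it has nothing to do with the paper’s actual celestial calculation):

```python
# Sensitive dependence on initial conditions in the logistic map x -> r*x*(1-x),
# a standard toy model of classical chaos.

def trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000)
b = trajectory(0.200001)  # nudge the starting point by one part in a million

for step in (0, 10, 20, 30, 40):
    print(step, abs(a[step] - b[step]))
# The difference starts out tiny and grows until the two trajectories
# bear no resemblance to each other: the "butterfly" at work.
```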

arXiv:2201.01657 : Calculations of Delbrück scattering to all orders in αZ

Delbrück scattering is an effect where the nuclei of heavy elements like lead can deflect high-energy photons, as a consequence of quantum field theory. This effect is apparently tricky to calculate, and previous calculations have involved approximations. This paper finds a way to calculate the effect without those approximations, which should let it match better with experiments.

(As an aside, I’m a little confused by the claim that they’re going to all orders in αZ when it looks like they just consider one-loop diagrams…but this is probably just my ignorance, this is a corner of the field quite distant from my own.)

arXiv:2201.01674 : On Unfolded Approach To Off-Shell Supersymmetric Models

Supersymmetry is a relationship between two types of particles: fermions, which typically make up matter, and bosons, which are usually associated with forces. In realistic theories this relationship is “broken” and the two types of particles have different properties, but theoretical physicists often study models where supersymmetry is “unbroken” and the two types of particles have the same mass and charge. This paper finds a new way of describing some theories of this kind that reorganizes them in an interesting way, using an “unfolded” approach in which aspects of the particles that would normally be combined are given their own separate variables.

(This is another one I don’t know much about; it’s the first time I’d heard of the unfolded approach.)

arXiv:2201.01679 : Geometric Flow of Bubbles

String theorists have conjectured that only some types of theories can be consistently combined with a full theory of quantum gravity, others live in a “swampland” of non-viable theories. One set of conjectures characterizes this swampland in terms of “flows” in which theories with different geometry can flow in to each other. The properties of these flows are supposed to be related to which theories are or are not in the swampland.

This paper writes down equations describing these flows, and applies them to some toy model “bubble” universes.

arXiv:2201.01697 : Graviton scattering amplitudes in first quantisation

This paper is a pedagogical one, introducing graduate students to a topic rather than presenting new research.

Usually in quantum field theory we do something called “second quantization”, thinking about the world not in terms of particles but in terms of fields that fill all of space and time. However, sometimes one can instead use “first quantization”, which is much more similar to ordinary quantum mechanics. There you think of a single particle traveling along a “world-line”, and calculate the probability it interacts with other particles in particular ways. This approach has recently been used to calculate interactions of gravitons, particles related to the gravitational field in the same way photons are related to the electromagnetic field. The approach has some advantages in terms of simplifying the results, which are described in this paper.

Classicality Has Consequences

Last week, I mentioned some interesting new results in my corner of physics. I’ve now finally read the two papers and watched the recorded talk, so I can satisfy my frustrated commenters.

Quantum mechanics is a very cool topic and I am much less qualified than you would expect to talk about it. I use quantum field theory, which is based on quantum mechanics, so in some sense I use quantum mechanics every day. However, most of the “cool” implications of quantum mechanics don’t come up in my work. All the debates about whether measurement “collapses the wavefunction” are irrelevant when the particles you measure get absorbed in a particle detector, never to be seen again. And while there are deep questions about how a classical world emerges from quantum probabilities, they don’t matter so much when all you do is calculate those probabilities.

They’ve started to matter, though. That’s because quantum field theorists like me have recently started working on a very different kind of problem: trying to predict the output of gravitational wave telescopes like LIGO. It turns out you can do almost the same kind of calculation we’re used to: pretend two black holes or neutron stars are sub-atomic particles, and see what happens when they collide. This trick has grown into a sub-field in its own right, one I’ve dabbled in a bit myself. And it’s gotten my kind of physicists to pay more attention to the boundary between classical and quantum physics.

The thing is, the waves that LIGO sees really are classical. Any quantum gravity effects there are tiny, undetectably tiny. And while this doesn’t have the implications an expert might expect (we still need loop diagrams), it does mean that we need to take our calculations to a classical limit.

Figuring out how to do this has been surprisingly delicate, and full of unexpected insight. A recent example involves two papers, one by Andrea Cristofoli, Riccardo Gonzo, Nathan Moynihan, Donal O’Connell, Alasdair Ross, Matteo Sergola, and Chris White, and one by Ruth Britto, Riccardo Gonzo, and Guy Jehu. At first I thought these were two groups happening on the same idea, but then I noticed Riccardo Gonzo on both lists, and realized the papers were covering different aspects of a shared story. There is another group who happened upon the same story: Paolo Di Vecchia, Carlo Heissenberg, Rodolfo Russo and Gabriele Veneziano. They haven’t published yet, so I’m basing this on the Gonzo et al papers.

The key question each group asked was, what does it take for gravitational waves to be classical? One way to ask the question is to pick something you can observe, like the strength of the field, and calculate its uncertainty. Classical physics is deterministic: if you know the initial conditions exactly, you know the final conditions exactly. Quantum physics is not. What should happen is that if you calculate a quantum uncertainty and then take the classical limit, that uncertainty should vanish: the observation should become certain.

Another way to ask is to think about the wave as made up of gravitons, particles of gravity. Then you can ask how many gravitons are in the wave, and how they are distributed. It turns out that you expect them to be in a coherent state, like a laser, one with a very specific distribution called a Poisson distribution: a distribution in some sense right at the border between classical and quantum physics.
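
To see why a Poisson distribution sits right at that border, here is a small numerical sketch of my own (not the calculation in the papers): in a coherent state the number of quanta follows a Poisson distribution, so the variance equals the mean, and the relative uncertainty shrinks as the wave contains more and more gravitons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Coherent states have Poisson number statistics: variance = mean.
# As the mean number of quanta grows, the *fractional* spread falls like
# 1/sqrt(mean), which is the sense in which the wave looks more and more classical.
for mean_n in (10, 1_000, 100_000):
    samples = rng.poisson(mean_n, size=200_000)
    print(mean_n, samples.std() / samples.mean(), 1 / np.sqrt(mean_n))
```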

The results of both types of questions were as expected: the gravitational waves are indeed classical. To make this work, though, the quantum field theory calculation needs to have some surprising properties.

If two black holes collide and emit a gravitational wave, you could depict it like this:

All pictures from arXiv:2112.07556

where the straight lines are black holes, and the squiggly line is a graviton. But since gravitational waves are made up of multiple gravitons, you might ask, why not depict it with two gravitons, like this?

It turns out that diagrams like that are a problem: they mean your two gravitons are correlated, which is not allowed in a Poisson distribution. In the uncertainty picture, they also would give you non-zero uncertainty. Somehow, in the classical limit, diagrams like that need to go away.

And at first, it didn’t look like they do. You can try to count how many powers of Planck’s constant show up in each diagram. The authors do that, and it certainly doesn’t look like it goes away:

An example from the paper with Planck’s constants sprinkled around

Luckily, these quantum field theory calculations have a knack for surprising us. Calculate each individual diagram, and things look hopeless. But add them all together, and they miraculously cancel. In the classical limit, everything combines to give a classical result.
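
Here is a deliberately over-simplified toy version of that cancellation (made-up terms, not the actual diagrams), just to show the pattern: each piece diverges on its own as Planck’s constant goes to zero, but the divergences cancel in the sum.

```python
import sympy as sp

hbar, a, b, c = sp.symbols('hbar a b c', positive=True)

# Two made-up "diagrams": each has a piece that blows up as hbar -> 0,
# standing in for the troublesome correlated-graviton contributions.
diagram_1 = a / hbar + b
diagram_2 = -a / hbar + c

print(diagram_1)                           # still has a 1/hbar piece: no classical limit on its own
print(sp.simplify(diagram_1 + diagram_2))  # b + c: the singular pieces cancel in the sum
```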

You can do this same trick for diagrams with more graviton particles, as many as you like, and each time it ought to keep working. You get an infinite set of relationships between different diagrams, relationships that have to hold to get sensible classical physics. From thinking about how the quantum and classical are related, you’ve learned something about calculations in quantum field theory.

That’s why these papers caught my eye. A chunk of my sub-field is needing to learn more and more about the relationship between quantum and classical physics, and it may have implications for the rest of us too. In the future, I might get a bit more qualified to talk about some of the very cool implications of quantum mechanics.

Discovering New Elements, Discovering New Particles

In school, you learn that the world around you is made up of chemical elements. There’s oxygen and nitrogen in the air, hydrogen and oxygen in water, sodium and chlorine in salt, and carbon in all living things. Other elements are more rare. Often, that’s because they’re unstable, due to radioactivity, like the plutonium in a bomb or americium in a smoke detector. The heaviest elements are artificial, produced in tiny amounts by massive experiments. In 2002, the heaviest element yet was found at the Joint Institute for Nuclear Research near Moscow. It was later named Oganesson, after the scientist who figured out how to make these heavy elements, Yuri Oganessian. To keep track of the different elements, we organize them in the periodic table like this:

In that same school, you probably also learn that the elements aren’t quite so elementary. Unlike the atoms imagined by the ancient Greeks, our atoms are made of smaller parts: protons and neutrons, surrounded by a cloud of electrons. They’re what give the periodic table its periodic structure, the way it repeats from row to row, with each different element having a different number of protons.

If your school is a bit more daring, you also learn that protons and neutrons themselves aren’t elementary. Each one is made of smaller particles called quarks: a proton of two “up quarks” and one “down quark”, and a neutron of two “down” and one “up”. Up quarks, down quarks, and electrons are all what physicists call fundamental particles, and they make up everything you see around you. Just like the chemical elements, some fundamental particles are more obscure than others, and the heaviest ones are all very unstable, produced temporarily by particle collider experiments. The most recent discovery came in 2012, when the Large Hadron Collider near Geneva found the Higgs boson. The Higgs boson is named after Peter Higgs, one of those who predicted it back in the 60’s. All the fundamental particles we know are part of something called the Standard Model, which we sometimes organize in a table like this:

So far, these stories probably sound similar. The experiments might not even sound that different: the Moscow experiment shoots a beam of high-energy calcium atoms at a target of heavy radioactive elements, while the Geneva one shoots a beam of high-energy protons at another beam of high-energy protons. With all those high-energy beams, what’s the difference really?

In fact there is a big difference between chemical elements and fundamental particles, and between the periodic table and the Standard Model. The latter are fundamental, the former are not.

When they made new chemical elements, scientists needed to start with a lot of protons and neutrons. That’s why they used calcium atoms in their beam, and even heavier elements as their target. We know that heavy elements are heavy because they contain more protons and neutrons, and we can use the arrangement of those protons and neutrons to try to predict their properties. That’s why, even though only five or six oganesson atoms have been detected, scientists have some idea what kind of material it would make. Oganesson is a noble gas, like helium, neon, and radon. But calculations predict it is actually a solid at room temperature. What’s more, it’s expected to be able to react with other elements, something the other noble gases are very reluctant to do.

The Standard Model has patterns, just like the chemical elements. Each matter particle is one of three “generations”, each heavier and more unstable: for example, electrons have heavier relatives called muons, and still heavier ones called tauons. But unlike with the elements, we don’t know where these patterns come from. We can’t explain them with smaller particles, like we could explain the elements with protons and neutrons. We think the Standard Model particles might actually be fundamental, not made of anything smaller.

That’s why when we make them, we don’t need a lot of other particles: just two protons, each made of three quarks, are enough. With that, we can make not just new arrangements of quarks, but new particles altogether. Some are even heavier than the protons we started with: the Higgs boson is more than a hundred times as heavy as a proton! We can do this because, in particle physics, mass isn’t conserved: mass is just another type of energy, and you can turn one type of energy into another.
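
To put rough numbers on that (approximate textbook values, just to illustrate the bookkeeping, not tied to any specific analysis):

```python
# Rough energy bookkeeping for making a Higgs boson out of two colliding protons.
proton_mass_gev = 0.938       # rest-mass energy of a proton, in GeV
higgs_mass_gev = 125.0        # mass of the Higgs boson, in GeV
lhc_beam_energy_gev = 6500.0  # energy per proton in the LHC's 13 TeV runs

print(higgs_mass_gev / proton_mass_gev)          # ~133: the Higgs outweighs each proton
print(2 * lhc_beam_energy_gev / higgs_mass_gev)  # ~104: the collision energy easily covers it
```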

Discovering new elements is hard work, but discovering new particles is on another level. It’s hard to calculate which elements are stable or unstable, and what their properties might be. But we know the rules, and with enough skill and time we could figure it out. In particle physics, we don’t know the rules. We have some good guesses, simple models to solve specific problems, and sometimes, like with the Higgs, we’re right. But despite making many more than five or six Higgs bosons, we still aren’t sure it has the properties we expect. We don’t know the rules. Even with skill and time, we can’t just calculate what to expect. We have to discover it.

Searching for Stefano

On Monday, Quanta magazine released an article on a man who transformed the way we do particle physics: Stefano Laporta. I’d tipped them off that Laporta would make a good story: someone who came up with the bread-and-butter algorithm that fuels all of our computations, then vanished from the field for ten years, returning at the end with a 1,100-digit masterpiece. There’s a resemblance to Searching for Sugar Man, fans and supporters baffled that their hero is living in obscurity.

If anything, I worry I under-sold the story. When Quanta interviewed me, it was clear they were looking for ties to well-known particle physics results: was Laporta’s work necessary for the Higgs boson discovery, or linked to the controversy over the magnetic moment of the muon? I was careful, perhaps too careful, in answering. The Higgs, to my understanding, didn’t require so much precision for its discovery. As for the muon, the controversial part is a kind of calculation that wouldn’t use Laporta’s methods, while the un-controversial part was found numerically by a group that doesn’t use his algorithm either.

With more time now, I can make a stronger case. I can trace Laporta’s impact, show who uses his work and for what.

In particle physics, we have a lovely database called INSPIRE that lists all our papers. Here is Laporta’s page, his work sorted by number of citations. When I look today, I find his most cited paper, the one that first presented his algorithm, at the top, with a delightfully apt 1,001 citations. Let’s listen to a few of those 1,001 tales, and see what they tell us.

Once again, we’ll sort by citations. The top paper, “Higgs boson production at hadron colliders in NNLO QCD”, is from 2002. It computes the chance that a particle collider like the LHC could produce a Higgs boson. It in turn has over a thousand citations, headlined by two from the ATLAS and CMS collaborations: “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC” and “Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC”. Those are the papers that announced the discovery of the Higgs, each with more than twelve thousand citations. Later in the list, there are design reports: discussions of why the collider experiments are built a certain way. So while it’s true that the Higgs boson could be seen clearly from the data, Laporta’s work still had a crucial role: with his algorithm, we could reassure experimenters that they really found the Higgs (not something else), and even more importantly, help them design the experiment so that they could detect it.

The next paper tells a similar story. A different calculation, with almost as many citations, feeding again into planning and prediction for collider physics.

The next few touch on my own corner of the field. “New Relations for Gauge-Theory Amplitudes” triggered a major research topic in its own right, one with its own conference series. Meanwhile, “Iteration of planar amplitudes in maximally supersymmetric Yang-Mills theory at three loops and beyond” served as a foundation for my own career, among many others. None of this would have happened without Laporta’s algorithm.

After that, more applications: fundamental quantities for collider physics, pieces of math that are used again and again. In particular, they are referenced again and again by the Particle Data Group, who collect everything we know about particle physics.

Further down still, and we get to specific code: FIRE and Reduze, programs made by others to implement Laporta’s algorithm, each with many uses in its own right.

All that, just from one of Laporta’s papers.

His ten-year magnum opus is more recent, and has fewer citations: checking now, just 139. Still, there are stories to tell there too.

I mentioned 1,100 digits earlier, and this might confuse some of you. The most precise prediction in particle physics, the magnetic behavior of the electron, has ten digits of precision. Laporta’s calculation didn’t change that, because what he calculated isn’t the only contribution: he calculated Feynman diagrams with four “loops”, which is its own approximation, one limited in precision by what might be contributed by more loops. The current result has Feynman diagrams with five loops as well (known to much less than 1,100 digits), but the diagrams with six or more are unknown, and can only be estimated. The result also depends on measurements, which themselves can’t reach 1,100 digits of precision.

So why would you want 1,100 digits, then? In a word, mathematics. The calculation involves exotic types of numbers called periods, more complicated cousins of numbers like pi. These numbers are related to each other, often in complicated and surprising ways, ways which are hard to verify without such extreme precision. An older result of Laporta’s inspired the physicist David Broadhurst and the mathematician Anton Mellit to conjecture new relations between numbers of this type, relations that were only later proven using cutting-edge mathematics. The new result has inspired mathematicians too: Oliver Schnetz found hints of a kind of “numerical footprint”, special types of numbers tied to the physics of electrons. It’s a topic I’ve investigated myself, something I think could lead to much more efficient particle physics calculations.

In addition to being inspired by Laporta’s work, Broadhurst has advocated for it. He was the one who first brought my attention to Laporta’s story, with a moving description of welcoming him back to the community after his ten-year silence, writing a letter to help him get funding. I don’t have all the details of the situation, but the impression I get is that Laporta had virtually no academic support for those ten years: no salary, no students, having to ask friends elsewhere for access to computer clusters.

When I ask why someone with such an impact didn’t have a professorship, the answer I keep hearing is that he didn’t want to move away from his home town of Bologna. If you aren’t an academic, that won’t sound like much of an explanation: Bologna has a university after all, the oldest in the world. But that isn’t actually a guarantee of anything. Universities hire rarely, according to their own mysterious agenda. I remember another colleague whose wife worked for a big company. They offered her positions in several cities, including New York. They told her that, since New York has many universities, surely her husband could find a job at one of them? We all had a sad chuckle at that.

For almost any profession, a contribution like Laporta’s would let you live anywhere you wanted. That’s not true for academia, and it’s to our loss. By demanding that each scientist be able to pick up and move, we’re cutting talented people out of the field, filtering by traits that have nothing to do with our contributions to knowledge. I don’t know Laporta’s full story. But I do know that doing the work you love in the town you love isn’t some kind of unreasonable request. It’s a request academia should be better at fulfilling.

In Uppsala for Elliptics 2021

I’m in Uppsala in Sweden this week, at an actual in-person conference.

With actual blackboards!

Elliptics started out as a series of small meetings of physicists trying to understand how to make sense of elliptic integrals in calculations of colliding particles. It grew into a full-fledged yearly conference series. I organized last year’s edition, which naturally was an online conference. This year, though, the stage was set for Uppsala University to host in person.

I should say mostly in person. It’s a hybrid conference, with some speakers and attendees joining on Zoom. Some couldn’t make it because of travel restrictions, or just wanted to be cautious about COVID. But seemingly just as many had other reasons, like teaching schedules or just long distances, that kept them from coming in person. We’re all wondering if this will become a long-term trend, where the flexibility of hybrid conferences lets people attend no matter their constraints.

The hybrid format worked better than expected, but there were still a few kinks. The audio was particularly tricky: it seemed like each day the organizers needed a new microphone setup to take questions. It’s always a little harder to understand someone on Zoom, especially when you’re sitting in an auditorium rather than focused on your own screen. Still, technological experience should make this work better in the future.

Content-wise, the conference began with a “mini-school” of pedagogical talks on particle physics, string theory, and mathematics. I found the mathematical talks by Erik Panzer particularly nice; it’s a topic I still feel quite weak on, and he laid everything out in a very clear way. It seemed like a nice touch to include a “school” element in the conference, though I worry it ate too much into the time.

The rest of the content skewed more mathematical, and more string-theoretic, than these conferences have in the past. The mathematical content ranged from intriguing (including an interesting window into what it takes to get high-quality numerics) to intimidatingly obscure (large commutative diagrams, category theory on the first slide). String theory was arguably under-covered in prior years, but it felt over-covered this year. With the particle physics talks focusing either on general properties with perhaps some connections to elliptics, or on N=4 super Yang-Mills, it felt like we were missing the more “practical” talks from past conferences, where someone was computing something concrete in QCD and told us what they needed. Next year is in Mainz, so maybe those talks will reappear.

Congratulations to Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi!

The 2021 Nobel Prize in Physics was announced this week, awarded to Syukuro Manabe and Klaus Hasselmann for climate modeling and Giorgio Parisi for understanding a variety of complex physical systems.

Before this year’s prize was announced, I remember a few “water cooler chats” about who might win. No guess came close, though. The Nobel committee seems to have settled into a strategy of prizes on a loosely linked “basket” of topics, with half the prize going to a prominent theorist and the other half going to two experimental, observational, or (in this case) computational physicists. It’s still unclear why they’re doing this, but regardless it makes it hard to predict what they’ll do next!

When I read the announcement, my first reaction was, “surely it’s not that Parisi?” Giorgio Parisi is known in my field for the Altarelli-Parisi equations (more properly known as the DGLAP equations, the longer acronym because, as is often the case in physics, the Soviets got there first). These equations are in some sense why the scattering amplitudes I study are ever useful at all. I calculate collisions of individual fundamental particles, like quarks and gluons, but a real particle collider like the LHC collides protons. Protons are messy, interacting combinations of quarks and gluons. When they collide you need not merely the equations describing colliding quarks and gluons, but those that describe their messy dynamics inside the proton, and in particular how those dynamics look different for experiments with different energies. The equation that describes that is the DGLAP equation.
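
For the curious, the equation has the following schematic form (the standard textbook presentation, simplified here just to give the flavor): the functions f describe how quarks and gluons are distributed inside the proton, the “splitting functions” P describe how one kind of particle can split off another, and the equation tracks how the distributions change with the energy scale of the experiment,

$$\mu^2 \frac{\partial f_i(x,\mu^2)}{\partial \mu^2} = \frac{\alpha_s(\mu^2)}{2\pi} \sum_j \int_x^1 \frac{dz}{z}\, P_{ij}(z)\, f_j\!\left(\frac{x}{z},\mu^2\right).$$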

As it turns out, Parisi is known for a lot more than the DGLAP equation. He is best known for his work on “spin glasses”, models of materials where quantum spins try to line up with each other, never quite settling down. He also worked on a variety of other complex systems, including flocks of birds!

I don’t know as much about Manabe and Hasselmann’s work. I’ve only seen a few talks on the details of climate modeling. I’ve seen plenty of talks on other types of computer modeling, though, from people who model stars, galaxies, or black holes. And from those, I can appreciate what Manabe and Hasselmann did. Based on those talks, I recognize the importance of those first one-dimensional models, a single column of air, especially back in the 60’s when computer power was limited. Even more, I recognize how impressive it is for someone to stay on the forefront of that kind of field, upgrading models for forty years to stay relevant into the 2000’s, as Manabe did. Those talks also taught me about the challenge of coupling different scales: how small effects in churning fluids can add up and affect the simulation, and how hard it is to model different scales at once. To use these effects to discover which models are reliable, as Hasselmann did, is a major accomplishment.