Tag Archives: theoretical physics

The arXiv SciComm Challenge

Fellow science communicators, think you can explain everything that goes on in your field? If so, I have a challenge for you. Pick a day, and go through all the new papers on arXiv.org in a single area. For each one, try to give a general-audience explanation of what the paper is about. To make it easier, you can ignore cross-listed papers. If your field doesn’t use arXiv, consider if you can do the challenge with another appropriate site.

I’ll start. I’m looking at papers in the “High Energy Physics – Theory” area, announced 6 Jan, 2022. I’ll warn you in advance that I haven’t read these papers, just their abstracts, so apologies if I get your paper wrong!

arXiv:2201.01303 : Holographic State Complexity from Group Cohomology

This paper says it is a contribution to a Proceedings. That means it is based on a talk given at a conference. In my field, a talk like this usually won’t be presenting new results, but instead summarizes results in a previous paper. So keep that in mind.

There is an idea in physics called holography, where two theories are secretly the same even though they describe the world with different numbers of dimensions. Usually this involves a gravitational theory in a “box”, and a theory without gravity that describes the sides of the box. The sides turn out to fully describe the inside of the box, much like a hologram looks 3D but can be printed on a flat sheet of paper. Using this idea, physicists have connected some properties of gravity to properties of the theory on the sides of the box. One of those properties is complexity: the complexity of the theory on the sides of the box says something about gravity inside the box, in particular about the size of wormholes. The trouble is, “complexity” is a bit subjective: it’s not clear how to give a good definition for it for this type of theory. In this paper, the author studies a theory with a precise mathematical definition, called a topological theory. This theory turns out to have mathematical properties that suggest a well-defined notion of complexity for it.

arXiv:2201.01393 : Nonrelativistic effective field theories with enhanced symmetries and soft behavior

We sometimes describe quantum field theory as quantum mechanics plus relativity. That’s not quite true though, because it is possible to define a quantum field theory that doesn’t obey special relativity, a non-relativistic theory. Physicists do this if they want to describe a system moving much slower than the speed of light: it gets used sometimes for nuclear physics, and sometimes for modeling colliding black holes.

In particle physics, a “soft” particle is one with almost no momentum. We can classify theories based on how they behave when a particle becomes more and more soft. In normal quantum field theories, if they have special behavior when a particle becomes soft it’s often due to a symmetry of the theory, where the theory looks the same even if something changes. This paper shows that this is not true for non-relativistic theories: they have more requirements to have special soft behavior, not just symmetry. They “bootstrap” a few theories, using some general restrictions to find them without first knowing how they work (“pulling them up by their own bootstraps”), and show that the theories they find are in a certain sense unique, the only theories of that kind.

arXiv:2201.01552 : Transmutation operators and expansions for 1-loop Feynman integrands

In recent years, physicists in my sub-field have found new ways to calculate the probability that particles collide. One of these methods describes ordinary particles in a way resembling string theory, and from this discovered a whole “web” of theories that were linked together by small modifications of the method. This method originally worked only for the simplest Feynman diagrams, the “tree” diagrams that correspond to classical physics, but was extended to the next-simplest diagrams, diagrams with one “loop” that start incorporating quantum effects.

This paper concerns a particular spinoff of this method, that can find relationships between certain one-loop calculations in a particularly efficient way. It lets you express calculations of particle collisions in a variety of theories in terms of collisions in a very simple theory. Unlike the original method, it doesn’t rely on any particular picture of how these collisions work, either Feynman diagrams or strings.

arXiv:2201.01624 : Moduli and Hidden Matter in Heterotic M-Theory with an Anomalous U(1) Hidden Sector

In string theory (and its more sophisticated cousin M theory), our four-dimensional world is described as a world with more dimensions, where the extra dimensions are twisted up so that they cannot be detected. The shape of the extra dimensions influences the kinds of particles we can observe in our world. That shape is described by variables called “moduli”. If those moduli are stable, then the properties of particles we observe would be fixed, otherwise they would not be. In general it is a challenge in string theory to stabilize these moduli and get a world like what we observe.

This paper discusses shapes that give rise to a “hidden sector”, a set of particles that are disconnected from the particles we know so that they are hard to observe. Such particles are often proposed as a possible explanation for dark matter. This paper calculates, for a particular kind of shape, what the masses of different particles are, as well as how different kinds of particles can decay into each other. For example, a particle that causes inflation (the accelerating expansion of the universe) can decay into effects on the moduli and dark matter. The paper also shows how some of the moduli are made stable in this picture.

arXiv:2201.01630 : Chaos in Celestial CFT

One variant of the holography idea I mentioned earlier is called “celestial” holography. In this picture, the sides of the box are an infinite distance away: a “celestial sphere” depicting the angles particles go after they collide, in the same way a star chart depicts the angles between stars. Recent work has shown that there is something like a sensible theory that describes physics on this celestial sphere, that contains all the information about what happens inside.

This paper shows that the celestial theory has a property called quantum chaos. In physics, a theory is said to be chaotic if it depends very sensitively on its initial conditions, so that even a small change now will result in a large change later (the usual metaphor is a butterfly flapping its wings and causing a hurricane). This kind of behavior appears to be present in this theory.

arXiv:2201.01657 : Calculations of Delbrück scattering to all orders in αZ

Delbrück scattering is an effect where the nuclei of heavy elements like lead can deflect high-energy photons, as a consequence of quantum field theory. This effect is apparently tricky to calculate, and previous calculations have involved approximations. This paper finds a way to calculate the effect without those approximations, which should let it match better with experiments.

(As an aside, I’m a little confused by the claim that they’re going to all orders in αZ when it looks like they just consider one-loop diagrams…but this is probably just my ignorance; this is a corner of the field quite distant from my own.)

arXiv:2201.01674 : On Unfolded Approach To Off-Shell Supersymmetric Models

Supersymmetry is a relationship between two types of particles: fermions, which typically make up matter, and bosons, which are usually associated with forces. In realistic theories this relationship is “broken” and the two types of particles have different properties, but theoretical physicists often study models where supersymmetry is “unbroken” and the two types of particles have the same mass and charge. This paper finds a new way of describing some theories of this kind that reorganizes them in an interesting way, using an “unfolded” approach in which aspects of the particles that would normally be combined are given their own separate variables.

(This is another one I don’t know much about; this is the first time I’d heard of the unfolded approach.)

arXiv:2201.01679 : Geometric Flow of Bubbles

String theorists have conjectured that only some types of theories can be consistently combined with a full theory of quantum gravity, others live in a “swampland” of non-viable theories. One set of conjectures characterizes this swampland in terms of “flows” in which theories with different geometry can flow in to each other. The properties of these flows are supposed to be related to which theories are or are not in the swampland.

This paper writes down equations describing these flows, and applies them to some toy model “bubble” universes.

arXiv:2201.01697 : Graviton scattering amplitudes in first quantisation

This paper is a pedagogical one, introducing graduate students to a topic rather than presenting new research.

Usually in quantum field theory we do something called “second quantization”, thinking about the world not in terms of particles but in terms of fields that fill all of space and time. However, sometimes one can instead use “first quantization”, which is much more similar to ordinary quantum mechanics. There you think of a single particle traveling along a “world-line”, and calculate the probability it interacts with other particles in particular ways. This approach has recently been used to calculate interactions of gravitons, particles related to the gravitational field in the same way photons are related to the electromagnetic field. The approach has some advantages in terms of simplifying the results, which are described in this paper.

Science, Gifts Enough for Lifetimes

Merry Newtonmas, Everyone!

In past years, I’ve compared science to a gift: the ideal gift for the puzzle-fan, one that keeps giving new puzzles. I think people might not appreciate the scale of that gift, though.

Bigger than all the creative commons Wikipedia images

Maybe you’ve heard the old joke that studying for a PhD means learning more and more about less and less until you know absolutely everything about nothing at all. This joke is overstating things: even when you’ve specialized down to nothing at all, you still won’t know everything.

If you read the history of science, it might feel like there are only a few important things going on at a time. You notice the simultaneous discoveries, like calculus from Newton and Leibniz and natural selection from Darwin and Wallace. You can get the impression that everyone was working on a few things, the things that would make it into the textbooks. In fact, though, there was always a lot to research, always many interesting things going on at once. As a scientist, you can’t escape this. Even if you focus on your own little area, on a few topics you care about, even in a small field, there will always be more going on than you can keep up with.

This is especially clear around the holiday season. As everyone tries to get results out before leaving on vacation, there is a tidal wave of new content. I have five papers open on my laptop right now (after closing four or so), and some recorded talks I keep meaning to watch. Two of the papers are the kind of simultaneous discovery I mentioned: two different groups noticing that what might seem like an obvious fact – that in classical physics, unlike in quantum, one can have zero uncertainty – has unexpected implications for our kind of calculations. (A third group got there too, but hasn’t published yet.) It’s a link I would never have expected, and with three groups coming at it independently you’d think it would be the only thing to pay attention to: but even in the same sub-sub-sub-field, there are other things going on that are just as cool! It’s wild, and it’s not some special quirk of my area: that’s science, for all us scientists. No matter how much you expect it to give you, you’ll get more, lifetimes and lifetimes worth. That’s a Newtonmas gift to satisfy anyone.

Calculations of the Past

Last week was a birthday conference for one of the pioneers of my sub-field, Ettore Remiddi. I wasn’t there, but someone who was pointed me to some of the slides, including a talk by Stefano Laporta. For those of you who didn’t see my post a few weeks back, Laporta was one of Remiddi’s students, who developed one of the most important methods in our field and then vanished, spending ten years on an amazingly detailed calculation. Laporta’s talk covers more of the story, about what it was like to do precision calculations in that era.

“That era”, the 90’s through 2000’s, witnessed an enormous speedup in computers, and a corresponding speedup in what was possible. Laporta worked with Remiddi on the three-loop electron anomalous magnetic moment, something Remiddi had been working on since 1969. When Laporta joined in 1989, twenty-one of the seventy-two diagrams needed had still not been computed. They would polish them off over the next seven years, before Laporta dove in to four loops. Twenty years later, he had that four-loop result to over a thousand digits.

One fascinating part of the talk is seeing how the computational techniques change over time, as language replaces language and computer clusters get involved. As a student, Laporta learns a lesson we all often need: that to avoid mistakes, we need to do as little by hand as possible, even for something as simple as copying a one-line formula. Looking at his review of others’ calculations, it’s remarkable how many theoretical results had to be dramatically corrected a few years down the line, and how much still might depend on theoretical precision.

Another theme was one of Remiddi suggesting something and Laporta doing something entirely different, and often much more productive. Whether it was using the arithmetic-geometric mean for an elliptic integral instead of Gaussian quadrature, or coming up with his namesake method, Laporta spent a lot of time going his own way, and Remiddi quickly learned to trust him.

There’s a lot more in the slides that’s worth reading, including a mention of one of this year’s Physics Nobelists. The whole thing is an interesting look at what it takes to press precision to the utmost, and dedicate years to getting something right.

Discovering New Elements, Discovering New Particles

In school, you learn that the world around you is made up of chemical elements. There’s oxygen and nitrogen in the air, hydrogen and oxygen in water, sodium and chlorine in salt, and carbon in all living things. Other elements are more rare. Often, that’s because they’re unstable, due to radioactivity, like the plutonium in a bomb or americium in a smoke detector. The heaviest elements are artificial, produced in tiny amounts by massive experiments. In 2002, the heaviest element yet was found at the Joint Institute for Nuclear Research near Moscow. It was later named Oganesson, after the scientist who figured out how to make these heavy elements, Yuri Oganessian. To keep track of the different elements, we organize them in the periodic table like this:

In that same school, you probably also learn that the elements aren’t quite so elementary. Unlike the atoms imagined by the ancient Greeks, our atoms are made of smaller parts: protons and neutrons, surrounded by a cloud of electrons. They’re what give the periodic table its periodic structure, the way it repeats from row to row, with each different element having a different number of protons.

If your school is a bit more daring, you also learn that protons and neutrons themselves aren’t elementary. Each one is made of smaller particles called quarks: a proton of two “up quarks” and one “down quark”, and a neutron of two “down” and one “up”. Up quarks, down quarks, and electrons are all what physicists call fundamental particles, and they make up everything you see around you. Just like the chemical elements, some fundamental particles are more obscure than others, and the heaviest ones are all very unstable, produced temporarily by particle collider experiments. The most recent particle to be discovered was in 2012, when the Large Hadron Collider near Geneva found the Higgs boson. The Higgs boson is named after Peter Higgs, one of those who predicted it back in the 60’s. All the fundamental particles we know are part of something called the Standard Model, which we sometimes organize in a table like this:

So far, these stories probably sound similar. The experiments might not even sound that different: the Moscow experiment shoots a beam of high-energy calcium atoms at a target of heavy radioactive elements, while the Geneva one shoots a beam of high-energy protons at another beam of high-energy protons. With all those high-energy beams, what’s the difference really?

In fact there is a big difference between chemical elements and fundamental particles, and between the periodic table and the Standard Model. The latter are fundamental, the former are not.

When they made new chemical elements, scientists needed to start with a lot of protons and neutrons. That’s why they used calcium atoms in their beam, and even heavier elements as their target. We know that heavy elements are heavy because they contain more protons and neutrons, and we can use the arrangement of those protons and neutrons to try to predict their properties. That’s why, even though only five or six oganesson atoms have been detected, scientists have some idea what kind of material it would make. Oganesson sits in the same column of the periodic table as the noble gases helium, neon, and radon. But calculations predict it is actually a solid at room temperature. What’s more, it’s expected to be able to react with other elements, something the other noble gases are very reluctant to do.

The Standard Model has patterns, just like the chemical elements. Each matter particle is one of three “generations”, each heavier and more unstable: for example, electrons have heavier relatives called muons, and still heavier ones called tauons. But unlike with the elements, we don’t know where these patterns come from. We can’t explain them with smaller particles, like we could explain the elements with protons and neutrons. We think the Standard Model particles might actually be fundamental, not made of anything smaller.

That’s why when we make them, we don’t need a lot of other particles: just two protons, each made of three quarks, is enough. With that, we can make not just new arrangements of quarks, but new particles altogether. Some are even heavier than the protons we started with: the Higgs boson is more than a hundred times as heavy as a proton! We can do this because, in particle physics, mass isn’t conserved: mass is just another type of energy, and you can turn one type of energy into another.
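As a back-of-envelope check on that “more than a hundred times” figure, here is a quick sketch using standard approximate particle masses (the numerical values are my own inputs, not quoted in the post):

```python
# Back-of-envelope check that the Higgs boson is "more than a hundred
# times as heavy as a proton". Masses in GeV/c^2, approximate standard
# values (not taken from the post).
HIGGS_MASS_GEV = 125.25
PROTON_MASS_GEV = 0.93827

def higgs_to_proton_ratio():
    """How many proton masses fit in one Higgs boson."""
    return HIGGS_MASS_GEV / PROTON_MASS_GEV

print(f"{higgs_to_proton_ratio():.0f} proton masses")  # about 133
```

So two colliding protons, weighing about two proton masses between them, produce a particle over a hundred times heavier: the rest comes from the kinetic energy of the beams.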

Discovering new elements is hard work, but discovering new particles is on another level. It’s hard to calculate which elements are stable or unstable, and what their properties might be. But we know the rules, and with enough skill and time we could figure it out. In particle physics, we don’t know the rules. We have some good guesses, simple models to solve specific problems, and sometimes, like with the Higgs, we’re right. But despite making many more than five or six Higgs bosons, we still aren’t sure it has the properties we expect. We don’t know the rules. Even with skill and time, we can’t just calculate what to expect. We have to discover it.

Searching for Stefano

On Monday, Quanta magazine released an article on a man who transformed the way we do particle physics: Stefano Laporta. I’d tipped them off that Laporta would make a good story: someone who came up with the bread-and-butter algorithm that fuels all of our computations, then vanished from the field for ten years, returning at the end with an 1,100 digit masterpiece. There’s a resemblance to Searching for Sugar Man, fans and supporters baffled that their hero is living in obscurity.

If anything, I worry I under-sold the story. When Quanta interviewed me, it was clear they were looking for ties to well-known particle physics results: was Laporta’s work necessary for the Higgs boson discovery, or linked to the controversy over the magnetic moment of the muon? I was careful, perhaps too careful, in answering. The Higgs, to my understanding, didn’t require so much precision for its discovery. As for the muon, the controversial part is a kind of calculation that wouldn’t use Laporta’s methods, while the un-controversial part was found numerically by a group that doesn’t use his algorithm either.

With more time now, I can make a stronger case. I can trace Laporta’s impact, show who uses his work and for what.

In particle physics, we have a lovely database called INSPIRE that lists all our papers. Here is Laporta’s page, his work sorted by number of citations. When I look today, I find his most cited paper, the one that first presented his algorithm, at the top, with a delightfully apt 1,001 citations. Let’s listen to a few of those 1,001 tales, and see what they tell us.

Once again, we’ll sort by citations. The top paper, “Higgs boson production at hadron colliders in NNLO QCD”, is from 2002. It computes the chance that a particle collider like the LHC could produce a Higgs boson. It in turn has over a thousand citations, headlined by two from the ATLAS and CMS collaborations: “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC” and “Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC”. Those are the papers that announced the discovery of the Higgs, each with more than twelve thousand citations. Later in the list, there are design reports: discussions of why the collider experiments are built a certain way. So while it’s true that the Higgs boson could be seen clearly from the data, Laporta’s work still had a crucial role: with his algorithm, we could reassure experimenters that they really found the Higgs (not something else), and even more importantly, help them design the experiment so that they could detect it.

The next paper tells a similar story. A different calculation, with almost as many citations, feeding again into planning and prediction for collider physics.

The next few touch on my own corner of the field. “New Relations for Gauge-Theory Amplitudes” triggered a major research topic in its own right, one with its own conference series. Meanwhile, “Iteration of planar amplitudes in maximally supersymmetric Yang-Mills theory at three loops and beyond” served as a foundation for my own career, among many others. None of this would have happened without Laporta’s algorithm.

After that, more applications: fundamental quantities for collider physics, pieces of math that are used again and again. In particular, they are referenced again and again by the Particle Data Group, who collect everything we know about particle physics.

Further down still, and we get to specific code: FIRE and Reduze, programs made by others to implement Laporta’s algorithm, each with many uses in its own right.

All that, just from one of Laporta’s papers.

His ten-year magnum opus is more recent, and has fewer citations: checking now, just 139. Still, there are stories to tell there too.

I mentioned earlier 1,100 digits, and this might confuse some of you. The most precise prediction in particle physics has ten digits of precision, the magnetic behavior of the electron. Laporta’s calculation didn’t change that, because what he calculated isn’t the only contribution: he calculated Feynman diagrams with four “loops”, which is its own approximation, one limited in precision by what might be contributed by more loops. The current result has Feynman diagrams with five loops as well (known to much less than 1,100 digits), but the diagrams with six or more are unknown, and can only be estimated. The result also depends on measurements, which themselves can’t reach 1,100 digits of precision.
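To see how the loop orders fit together, the prediction takes the standard textbook form of a power series in the fine-structure constant, one coefficient per loop order (the schematic below is a conventional way of writing it; the coefficient names are generic labels, not notation from the post):

```latex
a_e = \frac{g_e - 2}{2}
    = A_1\,\frac{\alpha}{\pi}
    + A_2\left(\frac{\alpha}{\pi}\right)^{2}
    + A_3\left(\frac{\alpha}{\pi}\right)^{3}
    + A_4\left(\frac{\alpha}{\pi}\right)^{4}
    + \cdots
```

Here $A_1 = 1/2$ is Schwinger's one-loop result, and $A_4$ is the four-loop coefficient Laporta computed to 1,100 digits. Since $\alpha/\pi \approx 0.0023$, each extra loop suppresses its contribution by roughly three orders of magnitude, which is why the unknown six-loop (and higher) terms, not the digits of $A_4$, set the ceiling on the physical prediction.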

So why would you want 1,100 digits, then? In a word, mathematics. The calculation involves exotic types of numbers called periods, more complicated cousins of numbers like pi. These numbers are related to each other, often in complicated and surprising ways, ways which are hard to verify without such extreme precision. An older result of Laporta’s inspired the physicist David Broadhurst and mathematician Anton Mellit to conjecture new relations between these types of numbers, relations that were only later proven using cutting-edge mathematics. The new result has inspired mathematicians too: Oliver Schnetz found hints of a kind of “numerical footprint”, special types of numbers tied to the physics of electrons. It’s a topic I’ve investigated myself, something I think could lead to much more efficient particle physics calculations.

In addition to being inspired by Laporta’s work, Broadhurst has advocated for it. He was the one who first brought my attention to Laporta’s story, with a moving description of welcoming him back to the community after his ten-year silence, writing a letter to help him get funding. I don’t have all the details of the situation, but the impression I get is that Laporta had virtually no academic support for those ten years: no salary, no students, having to ask friends elsewhere for access to computer clusters.

When I ask why someone with such an impact didn’t have a professorship, the answer I keep hearing is that he didn’t want to move away from his home town of Bologna. If you aren’t an academic, that won’t sound like much of an explanation: Bologna has a university after all, the oldest in the world. But that isn’t actually a guarantee of anything. Universities hire rarely, according to their own mysterious agenda. I remember another colleague whose wife worked for a big company. They offered her positions in several cities, including New York. They told her that, since New York has many universities, surely her husband could find a job at one of them? We all had a sad chuckle at that.

For almost any profession, a contribution like Laporta’s would let you live anywhere you wanted. That’s not true for academia, and it’s to our loss. By demanding that each scientist be able to pick up and move, we’re cutting talented people out of the field, filtering by traits that have nothing to do with our contributions to knowledge. I don’t know Laporta’s full story. But I do know that doing the work you love in the town you love isn’t some kind of unreasonable request. It’s a request academia should be better at fulfilling.

Don’t Trust the Experiments, Trust the Science

I was chatting with an astronomer recently, and this quote by Arthur Eddington came up:

“Never trust an experimental result until it has been confirmed by theory.”

Arthur Eddington

At first, this sounds like just typical theorist arrogance, thinking we’re better than all those experimentalists. It’s not that, though, or at least not just that. Instead, it’s commenting on a trend that shows up again and again in science, but rarely makes the history books. Again and again an experiment or observation comes through with something fantastical, something that seems like it breaks the laws of physics or throws our best models into disarray. And after a few months, when everyone has checked, it turns out there was a mistake, and the experiment agrees with existing theories after all.

You might remember a recent example, when a lab claimed to have measured neutrinos moving faster than the speed of light, only for it to turn out to be due to a loose cable. Experiments like this aren’t just a result of modern hype: as Eddington’s quote shows, they were also common in his day. In general, Eddington’s advice is good: when an experiment contradicts theory, theory tends to win in the end.

This may sound unscientific: surely we should care only about what we actually observe? If we defer to theory, aren’t we putting dogma ahead of the evidence of our senses? Isn’t that the opposite of good science?

To understand what’s going on here, we can use an old philosophical argument: David Hume’s argument against miracles. David Hume wanted to understand how we use evidence to reason about the world. He argued that, for miracles in particular, we can never have good evidence. In Hume’s definition, a miracle was something that broke the established laws of science. Hume argued that, if you believe you observed a miracle, there are two possibilities: either the laws of science really were broken, or you made a mistake. The thing is, laws of science don’t just come from a textbook: they come from observations as well, many many observations in many different conditions over a long period of time. Some of those observations establish the laws in the first place, others come from the communities that successfully apply them again and again over the years. If your miracle was real, then it would throw into doubt many, if not all, of those observations. So the question you have to ask is: is it more likely those observations were wrong? Or that you made a mistake? Put another way, your evidence is only good enough for a miracle if it would be a bigger miracle if you were wrong.

Hume’s argument always struck me as a little bit too strict: if you rule out miracles like this, you also rule out new theories of science! A more modern approach would use numbers and statistics, weighing the past evidence for a theory against the precision of the new result. Most of the time you’d reach the same conclusion, but sometimes an experiment can be good enough to overthrow a theory.
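That "numbers and statistics" version of Hume's argument can be phrased in Bayesian odds form: multiply your prior odds that the established theory is wrong by how strongly the new result favors that possibility. A toy sketch (all the numbers are invented for illustration, not taken from the post):

```python
# Toy Bayesian version of Hume's argument:
#   posterior odds (theory is wrong) = prior odds * likelihood ratio.
# All numbers below are invented for illustration.

def posterior_odds(prior_odds, likelihood_ratio):
    """Odds the theory is wrong, after seeing the new experiment."""
    return prior_odds * likelihood_ratio

# Decades of successful observations make "the theory is wrong" a long shot:
prior = 1e-6  # one-in-a-million prior odds

# Suppose the new result is 1000x more likely if the theory really is
# wrong than if the experimenters simply made a mistake:
ratio = 1e3

# Even for such a strong experiment, the smart money stays on error:
print(posterior_odds(prior, ratio))  # still about 1000-to-1 against
```

Overturning the theory then requires either a far larger likelihood ratio from a single decisive experiment, or independent replications, each of which multiplies the odds again.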

Still, theory should always sit in the background, a kind of safety net for when your experiments screw up. It does mean that when you don’t have that safety net you need to be extra-careful. Physics is an interesting case of this: while we have “the laws of physics”, we don’t have any established theory that tells us what kinds of particles should exist. That puts physics in an unusual position, and it’s probably part of why we have such strict standards of statistical proof. If you’re going to be operating without the safety net of theory, you need that kind of proof.

This post was also inspired by some biological examples. The examples are politically controversial, so since this is a no-politics blog I won’t discuss them in detail. (I’ll also moderate out any comments that do.) All I’ll say is that I wonder if in that case the right heuristic is this kind of thing: not to “trust scientists” or “trust experts” or even “trust statisticians”, but just to trust the basic, cartoon-level biological theory.

In Uppsala for Elliptics 2021

I’m in Uppsala in Sweden this week, at an actual in-person conference.

With actual blackboards!

Elliptics started out as a series of small meetings of physicists trying to understand how to make sense of elliptic integrals in calculations of colliding particles. It grew into a full-fledged yearly conference series. I organized last year’s edition, which naturally was an online conference. This year though, the stage was set for Uppsala University to host in person.

I should say mostly in person. It’s a hybrid conference, with some speakers and attendees joining on Zoom. Some couldn’t make it because of travel restrictions, or just wanted to be cautious about COVID. But seemingly just as many had other reasons, like teaching schedules or just long distances, that kept them from coming in person. We’re all wondering if this will become a long-term trend, where the flexibility of hybrid conferences lets people attend no matter their constraints.

The hybrid format worked better than expected, but there were still a few kinks. The audio was particularly tricky: it seemed like each day the organizers needed a new microphone setup to take questions. It’s always a little harder to understand someone on Zoom, especially when you’re sitting in an auditorium rather than focused on your own screen. Still, growing technological experience should make this work better in the future.

Content-wise, the conference began with a “mini-school” of pedagogical talks on particle physics, string theory, and mathematics. I found the mathematical talks by Erik Panzer particularly nice: it’s a topic I still feel quite weak on, and he laid everything out in a very clear way. It seemed like a nice touch to include a “school” element in the conference, though I worry it ate too much into the time.

The rest of the content skewed more mathematical, and more string-theoretic, than these conferences have in the past. The mathematical content ranged from intriguing (including an interesting window into what it takes to get high-quality numerics) to intimidatingly obscure (large commutative diagrams, category theory on the first slide). String theory was arguably under-covered in prior years, but it felt over-covered this year. With the particle physics talks focusing either on general properties with perhaps some connections to elliptics, or on N=4 super Yang-Mills, it felt like we were missing the more “practical” talks from past conferences, where someone computed something concrete in QCD and told us what they needed. Next year is in Mainz, so maybe those talks will reappear.

Outreach Talk on Math’s Role in Physics

Tonight is “Culture Night” in Copenhagen, the night when the city throws open its doors and lets the public in. Museums and hospitals, government buildings and even the Freemasons, all have public events. The Niels Bohr Institute does too, of course: an evening of physics exhibits and demos, capped off with a public lecture by Denmark’s favorite bow-tie wearing weirder-than-usual string theorist, Holger Bech Nielsen. In between, there are a number of short talks by various folks at the institute, including yours truly.

In my talk, I’m going to try and motivate the audience to care about math. Math is dry of course, and difficult for some, but we physicists need it to do our jobs. If you want to be precise about a claim in physics, you need math simply to say what you want clearly enough.

Since you guys likely don’t overlap with my audience tonight, it should be safe to give a little preview. I’ll be using a few examples, but this one is the most complicated:

I’ll be telling a story I stole from chapter seven of the web serial Almost Nowhere. (That link is to the first chapter, by the way, in case you want to read the series without spoilers. It’s very strange, very unique, and at least in my view quite worth reading.) You follow a warrior carrying a spear around a globe in two different paths. The warrior tries to always point in the same direction, but finds that the two different paths result in different spears when they meet. The story illustrates that such a simple concept as “what direction you are pointing” isn’t actually so simple: if you want to think about directions in curved space (like the surface of the Earth, but also, like curved space-time in general relativity) then you need more sophisticated mathematics (a notion called parallel transport) to make sense of it.
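For readers who like to see the machinery behind the story, here’s a minimal numerical sketch (my own toy code, not something from the talk) of parallel transport on a sphere. It carries a tangent vector around the triangle formed by the north pole and two points a quarter-turn apart on the equator, always trying to “point in the same direction”, and finds that the vector comes back rotated by 90 degrees:

```python
import numpy as np

def parallel_transport(legs, v0, steps_per_leg=5000):
    """Transport tangent vector v0 along a piecewise path on the unit sphere.

    Each leg is a callable t -> point on the sphere, for t in [0, 1].
    Discretizes the transport rule dv/dt = -(v . gamma') gamma, which keeps
    v tangent to the sphere while forbidding any "extra" rotation.
    """
    v = np.array(v0, dtype=float)
    for leg in legs:
        ts = np.linspace(0.0, 1.0, steps_per_leg)
        pts = np.array([leg(t) for t in ts])
        for i in range(len(ts) - 1):
            x, x_next = pts[i], pts[i + 1]
            dx = x_next - x
            v = v - np.dot(v, dx) * x           # cancel the normal component picked up
            v = v - np.dot(v, x_next) * x_next  # re-project onto the new tangent plane
    return v

# Octant triangle: north pole -> (1,0,0) -> (0,1,0) -> back to the north pole
legs = [
    lambda t: np.array([np.sin(t * np.pi / 2), 0.0, np.cos(t * np.pi / 2)]),
    lambda t: np.array([np.cos(t * np.pi / 2), np.sin(t * np.pi / 2), 0.0]),
    lambda t: np.array([0.0, np.cos(t * np.pi / 2), np.sin(t * np.pi / 2)]),
]
v_end = parallel_transport(legs, [1.0, 0.0, 0.0])
angle = np.arccos(np.clip(np.dot(v_end, [1, 0, 0]) / np.linalg.norm(v_end), -1, 1))
print(angle)  # close to pi/2: the spear comes back rotated by 90 degrees
```

The rotation angle equals the solid angle enclosed by the path (here an eighth of the sphere, π/2), which is exactly the kind of path-dependence the warriors in the story run into.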

It’s kind of an advanced concept for a public talk. But seeing it show up in Almost Nowhere inspired me to try to get it across. I’ll let you know how it goes!

By the way, if you are interested in learning the kinds of mathematics you need for theoretical physics, and you happen to be a Bachelor’s student planning to pursue a PhD, then consider the Perimeter Scholars International Master’s Program! It’s a one-year intensive at the Perimeter Institute in Waterloo, Ontario, in Canada. In a year it gives you a crash course in theoretical physics, giving you tools that will set you ahead of other beginning PhD students. I’ve witnessed it in action, and it’s really remarkable how much the students learn in a year, and what they go on to do with it. Their early registration deadline is on November 15, just a month away, so if you’re interested you may want to start thinking about it.

Congratulations to Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi!

The 2021 Nobel Prize in Physics was announced this week, awarded to Syukuro Manabe and Klaus Hasselmann for climate modeling and Giorgio Parisi for understanding a variety of complex physical systems.

Before this year’s prize was announced, I remember a few “water cooler chats” about who might win. No guess came close, though. The Nobel committee seems to have settled into a strategy of prizes on a loosely linked “basket” of topics, with half the prize going to a prominent theorist and the other half going to two experimental, observational, or (in this case) computational physicists. It’s still unclear why they’re doing this, but regardless it makes it hard to predict what they’ll do next!

When I read the announcement, my first reaction was, “surely it’s not that Parisi?” Giorgio Parisi is known in my field for the Altarelli-Parisi equations (more properly known as the DGLAP equations, the longer acronym because, as is often the case in physics, the Soviets got there first). These equations are in some sense why the scattering amplitudes I study are ever useful at all. I calculate collisions of individual fundamental particles, like quarks and gluons, but a real particle collider like the LHC collides protons. Protons are messy, interacting combinations of quarks and gluons. When they collide you need not merely the equations describing colliding quarks and gluons, but those that describe their messy dynamics inside the proton, and in particular how those dynamics look different for experiments with different energies. The equation that describes that is the DGLAP equation.
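For the curious, the standard textbook form of the DGLAP equation (schematic here, and my own summary rather than anything from the Nobel citation) looks like this:

```latex
\mu^2 \frac{\partial f_i(x,\mu^2)}{\partial \mu^2}
  = \frac{\alpha_s(\mu^2)}{2\pi} \sum_j \int_x^1 \frac{dz}{z}\,
    P_{ij}(z)\, f_j\!\left(\frac{x}{z},\mu^2\right)
```

Here the f_j are the parton distribution functions describing the proton’s messy innards, the splitting functions P_{ij} encode how one quark or gluon can emit another, and μ is the energy scale of the experiment: the equation tells you how the proton “looks different” as that scale changes.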

As it turns out, Parisi is known for a lot more than the DGLAP equation. He is best known for his work on “spin glasses”, models of materials where quantum spins try to line up with each other, never quite settling down. He also worked on a variety of other complex systems, including flocks of birds!

I don’t know as much about Manabe and Hasselmann’s work. I’ve only seen a few talks on the details of climate modeling. I’ve seen plenty of talks on other types of computer modeling, though, from people who model stars, galaxies, or black holes. And from those, I can appreciate what Manabe and Hasselmann did. Based on those talks, I recognize the importance of those first one-dimensional models, a single column of air, especially back in the 60’s when computer power was limited. Even more, I recognize how impressive it is for someone to stay on the forefront of that kind of field, upgrading models for forty years to stay relevant into the 2000’s, as Manabe did. Those talks also taught me about the challenge of coupling different scales: how small effects in churning fluids can add up and affect the simulation, and how hard it is to model different scales at once. To use these effects to discover which models are reliable, as Hasselmann did, is a major accomplishment.

Amplitudes 2021 Retrospective


The conference photo

Now that I’ve rested up after this year’s Amplitudes, I’ll give a few of my impressions.

Overall, I think the conference went pretty well. People seemed amused by the digital Niels Bohr, even if he looked a bit like a puppet (Lance compared him to Yoda in his final speech, which was…apt). We used Gather.town, originally just for the poster session and a “virtual reception”, but later we also encouraged people to meet up in it during breaks. That in particular was a big hit: I think people really liked the ability to just move around and chat in impromptu groups, and while nobody seemed to use the “virtual bar”, the “virtual beach” had a lively crowd. Time zones were inevitably rough, but I think we ended up with a good compromise where everyone could still see a meaningful chunk of the conference.

A few things didn’t work as well. For those planning conferences, I would strongly suggest not making a brand new gmail account to send out conference announcements: for a lot of people the emails went straight to spam. Zulip was a bust: I’m not sure if people found it more confusing than last year’s Slack or didn’t notice it due to the spam issue, but almost no-one posted in it. YouTube was complicated: the stream went down a few times and I could never figure out exactly why; it may have just been internet issues here at the Niels Bohr Institute (we did have a power outage one night and had to scramble to get internet access back the next morning). As far as I could tell YouTube wouldn’t let me re-open the previous stream, so each time I had to post a new link, which probably was frustrating for those following along there.

That said, this was less of a problem than it might have been, because attendance/”viewership” as a whole was lower than expected. Zoomplitudes last year had massive numbers of people join in both on Zoom and via YouTube. We had a lot fewer: out of over 500 registered participants, we had fewer than 200 on Zoom at any one time, and at most 30 or so on YouTube. Confusion around the conference email might have played a role here, but I suspect part of the difference is simple fatigue: after over a year of this pandemic, online conferences no longer feel like an exciting new experience.

The actual content of the conference ranged pretty widely. Some people reviewed earlier work, others presented recent papers or even work-in-progress. As in recent years, a meaningful chunk of the conference focused on applications of amplitudes techniques to gravitational wave physics. This included a talk by Thibault Damour, who has by now mostly made his peace with the field after his early doubts were sorted out. He still suspected that the mismatch of scales (weak coupling on the one hand, classical scattering on the other) would cause problems in future, but after his work with Laporta and Mastrolia even he had to acknowledge that amplitudes techniques were useful.

In the past I would have put the double-copy and gravitational wave researchers under the same heading, but this year they were quite distinct. While a few of the gravitational wave talks mentioned the double-copy, most of those who brought it up were doing something quite a bit more abstract than gravitational wave physics. Indeed, several people were pushing the boundaries of what it means to double-copy. There were modified KLT kernels, different versions of color-kinematics duality, and explorations of what kinds of massive particles can and (arguably more interestingly) cannot be compatible with a double-copy framework. The sheer range of different generalizations had me briefly wondering whether the double-copy could be “too flexible to be meaningful”, whether the right definitions would let you double-copy anything out of anything. I was reassured by the points where each talk argued that certain things didn’t work: it suggests that wherever this mysterious structure comes from, its powers are limited enough to make it meaningful.

A fair number of talks dealt with what has always been our main application, collider physics. There the context shifted, but the message stayed consistent: for a “clean” enough process, two- or three-loop calculations can make a big difference, taking a prediction that would be completely off from experiment and bringing it into line. These are more useful the more that can be varied about the calculation: functions are more useful than numbers, for example. I was gratified to hear confirmation that a particular kind of process, where two massless particles like quarks become three massive particles like W or Z bosons, is one of these “clean enough” examples: it means someone will need to compute my “tardigrade” diagram eventually.

If collider physics is our main application, N=4 super Yang-Mills has always been our main toy model. Jaroslav Trnka gave us the details behind Nima’s exciting talk from last year, and Nima had a whole new exciting talk this year with promised connections to category theory (connections he didn’t quite reach after speaking for two and a half hours). Anastasia Volovich presented two distinct methods for predicting square-root symbol letters, while my colleague Chi Zhang showed some exciting progress with the elliptic double-box, realizing the several-year dream of representing it in a useful basis of integrals and showcasing several interesting properties. Anne Spiering came over from the integrability side to show us just how special the “planar” version of the theory really is: by increasing the number of colors of gluons, she showed that one could smoothly go between an “integrability-esque” spectrum and a “chaotic” spectrum. Finally, Lance Dixon mentioned his progress with form-factors in his talk at the end of the conference, showing off some statistics of coefficients of different functions and speculating that machine learning might be able to predict them.

On the more mathematical side, Francis Brown showed us a new way to get numbers out of graphs, one distinct from, but related to, our usual interpretation in terms of Feynman diagrams. I’m still unsure what it will be used for, but the fact that it maps every graph to something finite probably has some interesting implications. Albrecht Klemm and Claude Duhr talked about two sides of the same story, their recent work on integrals involving Calabi-Yau manifolds. They focused on a particularly nice set of integrals, and time will tell whether the methods work more broadly, but there are some exciting suggestions that at least parts will.

There’s been a resurgence of the old dream of the S-matrix community, constraining amplitudes via “general constraints” alone, and several talks dealt with those ideas. Sebastian Mizera went the other direction, and tried to test one of those “general constraints”, seeing under which circumstances he could prove that you can swap a particle going in with an antiparticle going out. Others went out to infinity, trying to understand amplitudes from the perspective of the so-called “celestial sphere” where they appear to be governed by conformal field theories of some sort. A few talks dealt with amplitudes in string theory itself: Yvonne Geyer built them out of field-theory amplitudes, while Ashoke Sen explained how to include D-instantons in them.

We also had three “special talks” in the evenings. I’ve mentioned Nima’s already. Zvi Bern gave a retrospective talk that I somewhat cheesily describe as “good for the soul”: a look to the early days of the field that reminded us of why we are who we are. Lance Dixon closed the conference with a light-hearted summary and a look to the future. That future includes next year’s Amplitudes, which after a hasty discussion during this year’s conference has now localized to Prague. Let’s hope it’s in person!