Tag Archives: amplitudes

Amplitudes 2025 This Week

Summer is conference season for academics, and this week my old sub-field held its big yearly conference, called Amplitudes. This year, it was in Seoul at Seoul National University, the first time the conference has been in Asia.

(I wasn’t there; I don’t go to these anymore. But I’ve been skimming slides in my free time, to give you folks the updates you crave. Be forewarned that conference posts like these get technical fast; I’ll be back to my usual accessible self next week.)

There isn’t a huge amplitudes community in Korea, but it’s bigger than it was back when I got started in the field. Of the organizers, Kanghoon Lee of the Asia Pacific Center for Theoretical Physics and Sangmin Lee of Seoul National University have what I think of as “core amplitudes interests”, like recursion relations and the double-copy. The other Korean organizers are from adjacent areas, work that overlaps with amplitudes but doesn’t show up at the conference each year. There was also a sizeable group of organizers from Taiwan, where there has been a significant amplitudes presence for some time now. I do wonder if Korea was chosen as a compromise between hosting the conference in Taiwan and hosting it in mainland China, where there is also quite a substantial amplitudes community.

One thing that impresses me every year is how big, and how sophisticated, the gravitational-wave community in amplitudes has grown. Federico Buccioni’s talk began with a plot that illustrates this well (though that wasn’t his goal):

At Amplitudes, a conference dedicated to the topic of scattering amplitudes, there were almost as many talks with the phrase “black hole” in the title as there were with “scattering” or “amplitudes”! And this is a topic that did not even exist in the subfield when I got my PhD eleven years ago.

With that said, gravitational wave astronomy wasn’t quite as dominant at the conference as Buccioni’s bar chart suggests. There were a few talks each day on the topic: I counted seven in total, excluding any short talks on the subject in the gong show. Spinning black holes were a significant focus, central to Jung-Wook Kim’s, Andres Luna’s, and Mao Zeng’s talks (the latter two showing some interesting links between the amplitudes story and classic ideas in classical mechanics) and relevant in several others, with talks by Riccardo Gonzo, Miguel Correia, Ira Rothstein, and Enrico Herrmann showing not just a wide range of approaches, but an increasing depth of research in this area.

Herrmann’s talk in particular dealt with detector event shapes, a framework that lets physicists think more directly about what a specific particle detector or observer can see. He applied the idea not just to gravitational waves but to quantum gravity and collider physics as well. The latter is historically where this idea has been applied the most thoroughly, as highlighted in Hua Xing Zhu’s talk, where he used them to pick out particular phenomena of interest in QCD.

QCD is, of course, always of interest in the amplitudes field. Buccioni’s talk dealt with the theory’s behavior at high energies, with a nice example of the “maximal transcendentality principle”, where some quantities in QCD are identical to quantities in N=4 super Yang-Mills in their “most transcendental” pieces (loosely, those with the highest powers of pi). Andrea Guerrieri’s talk also dealt with high-energy behavior in QCD, trying to address an experimental puzzle where QCD results appeared to violate a fundamental bound all sensible theories were expected to obey. Using S-matrix bootstrap techniques, he and his collaborators clarified the nature of the bound, found that QCD still obeys it once it is correctly understood, and conjectured a weird theory that should sit right on the edge of the bound. The S-matrix bootstrap was also used by Alexandre Homrich, who talked about getting the framework to work for multi-particle scattering.

Heribertus Bayu Hartanto is another recent addition to Korea’s amplitudes community. He talked about a concrete calculation, two-loop five-particle scattering including top quarks, a tricky case that includes elliptic curves.

When amplitudes lead to integrals involving elliptic curves, many standard methods fail. Jake Bourjaily’s talk raised a question he has brought up again and again: what does it mean to do an integral for a new type of function? One possible answer is that it depends on what kind of numerics you can do, and since more general numerical methods can be cumbersome, one often needs to understand the new type of function in more detail. In light of that, Stephen Jones’ talk was interesting: it took a problem often cited with generic approaches (that they have trouble with the complex numbers introduced by Minkowski space) and found a more natural way to handle it within one such approach, sector decomposition. Giulio Salvatori talked about a much less conventional numerical method, linked to the latest trend in Nima-ology, surfaceology. One of the big selling points of the surface integral framework promoted by people like Salvatori and Nima Arkani-Hamed is that it’s supposed to give a clear integral to do for each scattering amplitude, one which should be amenable to a numerical treatment recently developed by Michael Borinsky. Salvatori can currently apply the method only to a toy model (up to ten loops!), but he has some ideas for how to generalize it, which will require handling divergences and numerators.

Other approaches to the “problem of integration” included Anna-Laura Sattelberger’s talk, which presented a method (and an accompanying software package) for finding differential equations for the kinds of integrals that show up in amplitudes using the mathematical software Macaulay2. Matthias Wilhelm talked about the work I did with him, using machine learning to find better methods for solving integrals with integration-by-parts, an area where two other groups have now also published. Pierpaolo Mastrolia talked about integration-by-parts’ up-and-coming contender, intersection theory, a method which appears to be delving into more mathematical tools in an effort to catch up with its competitor.

Sometimes, one is interested more specifically in the singularities of integrals than in their numerics in general. Felix Tellander talked about a geometric method to pin these down which largely went over my head, but he did have a very nice short description of the approach: “Describe the singularities of the integrand. Find a map representing integration. Map the singularities of the integrand onto the singularities of the integral.”

While QCD and gravity are the applications of choice, amplitudes methods germinate in N=4 super Yang-Mills. Ruth Britto’s talk opened the conference with an overview of progress along those lines before going into her own recent work on one-loop integrals and interesting implications of ideas from cluster algebras. Cluster algebras made appearances in several other talks, including Anastasia Volovich’s, which discussed how ideas from that corner, called flag cluster algebras, may give insights into QCD amplitudes, though some symbol letters still seem to be hard to track down. Matteo Parisi covered another idea, cluster promotion maps, which he thinks may help pin down algebraic symbol letters.

The link between cluster algebras and symbol letters is an ongoing mystery where the field is seeing progress. Another symbol letter mystery is antipodal duality, where flipping an amplitude like a palindrome somehow gives another valid amplitude. Lance Dixon has made progress in understanding where this duality comes from, finding a toy model where it can be understood and proved.

Others pushed the boundaries of methods specific to N=4 super Yang-Mills, looking for novel structures. Song He’s talk pushed an older approach by Bourjaily and collaborators up to twelve loops, finding new patterns and connections to other theories and observables. Qinglin Yang bootstrapped Wilson loops with a Lagrangian insertion, adding a side to the polygon used in previous efforts and finding that, much like when you add particles to amplitudes in a bootstrap, the method gets stricter and more powerful. Jaroslav Trnka talked about work he has been doing with “negative geometries”, an odd method descended from the amplituhedron that looks at amplitudes from a totally different perspective, probing a bit of their non-perturbative data. He’s finding more parts of that setup that can be accessed and re-summed, and, interestingly, that multiple-zeta-values show up in quantities where we know they ultimately cancel out. Livia Ferro also talked about a descendant of the amplituhedron, this time for cosmology, getting differential equations for cosmological observables in a particular theory from a combinatorial approach.

Outside of everybody’s favorite theories, some speakers talked about more general approaches to understanding the differences between theories. Andreas Helset covered work on the geometry of the space of quantum fields in a theory, applying the method to a general framework for characterizing deviations from the standard model called the SMEFT. Jasper Roosmale Nepveu also talked about a general space of theories, thinking about how positivity (a trait linked to fundamental constraints like causality and unitarity) gets tangled up with loop effects, and the implications this has for renormalization.

Soft theorems, universal behavior of amplitudes when a particle has low energy, continue to be a trendy topic, with Silvia Nagy showing how the story continues to higher orders and Sangmin Choi investigating loop effects. Callum Jones talked about one of the more powerful results from the soft limit, Weinberg’s theorem showing the uniqueness of gravity. Weinberg’s proof was set up in Minkowski space, but we may ultimately live in curved, de Sitter space. Jones showed how the ideas Weinberg explored generalize to de Sitter space, using some tools from the soft-theorem-inspired field of dS/CFT. Julio Parra-Martinez, meanwhile, tied soft theorems to another trendy topic, higher symmetries, a more general notion of the usual types of symmetries that physicists have explored in the past. Lucia Cordova reported work that was not particularly connected to soft theorems but was connected to these higher symmetries, showing how they interact with crossing symmetry and the S-matrix bootstrap.

Finally, a surprisingly large number of talks linked to Kevin Costello and Natalie Paquette’s work with self-dual gauge theories, where they found exact solutions from a fairly mathy angle. Paquette gave an update on her work on the topic, while Alfredo Guevara talked about applications to black holes, comparing the power of expanding around a self-dual gauge theory to that of working with supersymmetry. Atul Sharma looked at scattering in self-dual backgrounds in work that merges older twistor space ideas with the new approach, while Roland Bittleston talked about calculating around an instanton background.


Also, I had another piece up this week at FirstPrinciples, based on an interview with the (outgoing) president of the Sloan Foundation. I won’t have a “bonus info” post for this one, as most of what I learned went into the piece. But if you don’t know what the Sloan Foundation does, take a look! I hadn’t known they funded Jupyter notebooks and Hidden Figures, or that they introduced Kahneman and Tversky.

Integration by Parts, Evolved

I posted what may be my last academic paper today, about a project I’ve been working on with Matthias Wilhelm for most of the last year. The paper is now online here. For me, the project has been a chance to broaden my horizons, learn new skills, and start to step out of my academic comfort zone. For Matthias, I hope it was grant money well spent.

I wanted to work on something related to machine learning, for the usual trendy employability reasons. Matthias was already working with machine learning, but was interested in pursuing a different question.

When is machine learning worthwhile? Machine learning methods are heuristics, unreliable methods that sometimes work well. You don’t use a heuristic if you have a reliable method that runs fast enough. But if all you have are heuristics to begin with, then machine learning can give you a better heuristic.

Matthias noticed a heuristic embedded deep in how we do particle physics, and guessed that we could do better. In particle physics, we use pictures called Feynman diagrams to predict the probabilities for different outcomes of collisions, comparing those predictions to observation to look for evidence of new physics. Each Feynman diagram corresponds to an integral, and for each calculation there are hundreds, thousands, or even millions of those integrals to do.

Luckily, physicists don’t actually have to do all those integrals. It turns out that most of them are related, by a slightly more advanced version of that calculus class mainstay, integration by parts. Using integration by parts you can solve a list of equations, finding out how to write your integrals in terms of a much smaller list.
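To give a flavor of how this works (a standard textbook identity, not anything specific to our paper): in dimensional regularization, the integral of a total derivative vanishes, so for a one-loop integral with propagator denominators D_i raised to powers a_i,

\[
0 = \int \frac{d^D \ell}{i\pi^{D/2}} \, \frac{\partial}{\partial \ell^\mu} \left[ \frac{v^\mu}{D_1^{a_1} D_2^{a_2} \cdots D_n^{a_n}} \right],
\]

where $v^\mu$ can be the loop momentum or any external momentum. Expanding the derivative turns this into a linear relation among integrals with shifted powers $a_i$, and collecting enough such relations lets you solve for most of your integrals in terms of a small “master” set.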

How big a list of equations do you need, and which ones? Twenty-five years ago, Stefano Laporta proposed a “golden rule” to choose, based on his own experience, and people have been using it (more or less, with their own tweaks) since then.

Laporta’s rule is a heuristic, with no proof that it is the best option, or even that it will always work. So we probably shouldn’t have been surprised when someone came up with a better heuristic. Watching talks at a December 2023 conference, Matthias saw a presentation by Johann Usovitsch on a curious new rule. The rule was surprisingly simple, just one extra condition on top of Laporta’s. But it was enough to reduce the number of equations by a factor of twenty.

That’s great progress, but it’s also a bit frustrating. Over almost twenty-five years, no one had guessed this one simple change?

Maybe, thought Matthias and I, we need to get better at guessing.

We started out thinking we’d try reinforcement learning, a technique where a machine is trained by playing a game again and again, changing its strategy when that strategy brings it a reward. We thought we could have the machine learn to cut away extra equations, getting rewarded if it could cut more while still getting the right answer. We didn’t end up pursuing this very far before realizing another strategy would be a better fit.

What is a rule, but a program? Laporta’s golden rule and Johann’s new rule could both be expressed as simple programs. So we decided to use a method that could guess programs.

One method stood out for sheer trendiness and audacity: FunSearch. FunSearch is a type of algorithm called a genetic algorithm, which tries to mimic evolution. It makes a population of different programs, “breeds” them with each other to create new programs, and periodically selects out the ones that perform best. That’s not the trendy or audacious part, though; people have been doing that sort of genetic programming for a long time.

The trendy, audacious part is that FunSearch generates these programs with a Large Language Model, or LLM (the type of technology behind ChatGPT). Using an LLM trained to complete code, FunSearch presents the model with two programs labeled v0 and v1 and asks it to complete v2. In general, program v2 will have some traits from v0 and v1, but also a lot of variation due to the unpredictable output of LLMs. The inventors of FunSearch used this unpredictability to supply the variation evolution needs, evolving programs that find better solutions to math problems.

We decided to try FunSearch on our problem, modifying it a bit to fit the case. We asked it to find a shorter list of equations, giving a better score for a shorter list but a penalty if the list wasn’t able to solve the problem fully.
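For the curious, here’s a minimal sketch in Python of the kind of loop involved. To be clear, this is my illustration rather than our actual research code, and the names llm_complete, solves_system, and POPULATION_SIZE are hypothetical stand-ins for the real pieces: a code-completion model, the expensive check that a selection of equations still reduces every integral, and a tuning parameter.

import random

POPULATION_SIZE = 20

def llm_complete(prompt):
    # Hypothetical stand-in: a real run would send the prompt to a
    # code-completion LLM and return the program text it writes.
    return "lambda eq: len(eq) < 4"

def solves_system(selected):
    # Hypothetical stand-in: a real run would check that the selected
    # equations still suffice to reduce all the integrals.
    return len(selected) > 0

def score(rule_source, equations):
    # Shorter selections score higher, with a heavy penalty if the
    # selection can no longer solve the problem fully.
    rule = eval(rule_source)  # candidate programs are filter functions
    selected = [eq for eq in equations if rule(eq)]
    if not solves_system(selected):
        return -len(equations) - 1
    return -len(selected)

def evolve_step(population, equations):
    # One FunSearch-style step: show the model two parent programs
    # labeled v0 and v1, ask it to complete v2, then keep the fittest.
    v0, v1 = random.sample(population, 2)
    prompt = f"rule_v0 = {v0}\nrule_v1 = {v1}\nrule_v2 = "
    population.append(llm_complete(prompt))
    population.sort(key=lambda p: score(p, equations), reverse=True)
    return population[:POPULATION_SIZE]

Run a few thousand steps of something like this, and the best-scoring survivor is your new rule.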

Some tinkering and headaches later, it worked! After a few days and thousands of program guesses, FunSearch was able to find a program that reproduced the new rule Johann had presented. A few hours more, and it even found a rule that was slightly better!

But then we started wondering: do we actually need days of GPU time to do this?

An expert on heuristics we knew had insisted, at the beginning, that we try something simpler. The approach we tried then didn’t work. But after running into some people using genetic programming at a conference last year, we decided to try again, using a Python package they used in their work. This time, it worked like a charm, taking hours rather than days to find good rules.

This was all pretty cool, a great opportunity for me to cut my teeth on Python programming and its various attendant skills. And it’s been inspiring, with Matthias drawing together more people interested in seeing just how much these kinds of heuristic methods can do. I should be clear, though, that so far I don’t think our result is useful. We did better than the state of the art on an example, but only slightly, and in a way that I’d guess doesn’t generalize. And we needed quite a bit of overhead to do it. Ultimately, while I suspect there’s something useful to find in this direction, it’s going to require more collaboration, both with people using the existing methods who know better what the bottlenecks are, and with experts in these, and other, kinds of heuristics.

So I’m curious to see what the future holds. And for the moment, happy that I got to try this out!

How Small Scales Can Matter for Large Scales

For a certain type of physicist, nothing matters more than finding the ultimate laws of nature for its tiniest building-blocks, the rules that govern quantum gravity and tell us where the other laws of physics come from. But because they know very little about those laws at this point, they can predict almost nothing about observations on the larger distance scales we can actually measure.

“Almost nothing” isn’t nothing, though. Theoretical physicists don’t know nature’s ultimate laws. But some things about them can be reasonably guessed. The ultimate laws should include a theory of quantum gravity. They should explain at least some of what we see in particle physics now, explaining why different particles have different masses in terms of a simpler theory. And they should “make sense”, respecting cause and effect, the laws of probability, and Einstein’s overall picture of space and time.

All of these are assumptions, of course. Further assumptions are needed to derive any testable consequences from them. But a few communities in theoretical physics are willing to take the plunge, and see what consequences their assumptions have.

First, there’s the Swampland. String theorists posit that the world has extra dimensions, which can be curled up in a variety of ways to hide from view, with different observable consequences depending on how the dimensions are curled up. This list of different observable consequences is referred to as the Landscape of possibilities. Based on that, some string theorists coined the term “Swampland” to represent an area outside the Landscape, containing observations that are incompatible with quantum gravity altogether, and tried to figure out what those observations would be.

In principle, the Swampland includes the work of all the other communities on this list, since a theory of quantum gravity ought to be consistent with other principles as well. In practice, people who use the term focus on consequences of gravity in particular. The earliest such ideas argued from thought experiments with black holes, finding results that seemed to demand that gravity be the weakest force for at least one type of particle. Later researchers would more frequently use string theory as an example, looking at what kinds of constructions people had been able to make in the Landscape to guess what might lie outside of it. They’ve used this to argue that dark energy might be temporary, and to try to figure out what traits new particles might have.

Second, I should mention naturalness. When talking about naturalness, people often use the analogy of a pen balanced on its tip. While such a balance is possible in principle, the pen must have been set up almost perfectly, since any small imbalance would cause it to topple, and that perfection demands an explanation. Similarly, in particle physics, things like the mass of the Higgs boson and the strength of dark energy seem to be carefully balanced, so that a small change in how they were set up would lead to a much heavier Higgs boson or much stronger dark energy. The need for an explanation for the Higgs’ careful balance is why many physicists expected the Large Hadron Collider to discover additional new particles.
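One standard way to phrase the Higgs version of this balance (my gloss, with $\Lambda$ standing for the energy scale of unknown new physics and $g$ a typical coupling): the observed mass-squared is a sum of a “bare” contribution and quantum corrections that grow with $\Lambda$,

\[
m_H^2 = m_{H,0}^2 + \delta m_H^2, \qquad \delta m_H^2 \sim \frac{g^2}{16\pi^2}\,\Lambda^2 ,
\]

so if $\Lambda$ sits far above the weak scale, the two terms on the right have to cancel to extraordinary precision to leave the small observed mass, like the pen balanced on its tip.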

As I’ve argued before, this kind of argument rests on assumptions about the fundamental laws of physics. It assumes that the fundamental laws explain the mass of the Higgs, not merely by giving it an arbitrary number but by showing how that number comes from a non-arbitrary physical process. It also assumes that we understand well how physical processes like that work, and what kinds of numbers they can give. That’s why I think of naturalness as a type of argument, much like the Swampland, that uses the smallest scales to constrain larger ones.

Third is a host of constraints that usually go together: causality, unitarity, and positivity. Causality comes from cause and effect in a relativistic universe. Because two distant events can appear to happen in different orders depending on how fast you’re going, any way to send signals faster than light is also a way to send signals back in time, causing all of the paradoxes familiar from science fiction. Unitarity comes from quantum mechanics. If quantum calculations are supposed to give the probability of things happening, those probabilities should make sense as probabilities: for example, they should never go above one.
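In equations, this is the standard unitarity condition on the S-matrix (a textbook statement, not specific to any one theory):

\[
S^\dagger S = \mathbb{1} \quad\Longrightarrow\quad \sum_f \big| S_{fi} \big|^2 = 1 ,
\]

so the probabilities for an initial state $i$ to end up in all possible final states $f$ add up to one, and no single probability can exceed one.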

You might guess that almost any theory would satisfy these constraints. But if you extend a theory to the smallest scales, some theories that otherwise seem sensible end up failing this test. Actually linking things up takes other conjectures about the mathematical form theories can have, conjectures that seem more solid than the ones underlying Swampland and naturalness constraints but that still can’t be conclusively proven. If you trust the conjectures, you can derive restrictions, often called positivity constraints when they demand that some set of observations is positive. There has been a renaissance in this kind of research over the last few years, including arguments that certain speculative theories of gravity can’t actually work.
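Schematically (with convention-dependent factors suppressed, and hedging that the details depend on the conjectures mentioned above), a typical positivity constraint writes a low-energy coefficient as a dispersive integral over high energies:

\[
c_2 \;\sim\; \frac{\partial^2}{\partial s^2}\,\mathcal{A}(s,t=0)\bigg|_{s=0} \;\propto\; \int_{s_0}^{\infty} \frac{ds}{s^{3}}\, \operatorname{Im}\mathcal{A}(s,0) \;\geq\; 0 ,
\]

where unitarity forces the imaginary part in the integrand to be positive. The point is that consistency at the smallest scales pins down the sign of something measurable at larger ones.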

Replacing Space-Time With the Space in Your Eyes

Nima Arkani-Hamed thinks space-time is doomed.

That doesn’t mean he thinks it’s about to be destroyed by a supervillain. Rather, Nima, like many physicists, thinks that space and time are just approximations to a deeper reality. In order to make sense of gravity in a quantum world, seemingly fundamental ideas, like that particles move through particular places at particular times, will probably need to become more flexible.

But while most people who think space-time is doomed research quantum gravity, Nima’s path is different. Nima has been studying scattering amplitudes, formulas used by particle physicists to predict how likely particles are to collide in particular ways. He has been trying to find ways to calculate these scattering amplitudes without referring directly to particles traveling through space and time. In the long run, the hope is that knowing how to do these calculations will help suggest new theories beyond particle physics, theories that can’t be described with space and time at all.

Ten years ago, Nima figured out how to do this in a particular theory, one that doesn’t describe the real world. For that theory he was able to find a new picture of how to calculate scattering amplitudes based on a combinatorial, geometric space with no reference to particles traveling through space-time. He gave this space the catchy name “the amplituhedron”. In the years since, he found a few other “hedra” describing different theories.

Now, he’s got a new approach. The new approach doesn’t have the same kind of catchy name: people sometimes call it surfaceology, or the curve integral formalism. Like the amplituhedron, it involves concepts from combinatorics and geometry. It isn’t quite as “pure” as the amplituhedron: it uses a bit more from ordinary particle physics, and while it avoids specific paths in space-time it does care about the shape of those paths. Still, it has one big advantage: unlike the amplituhedron, Nima’s new approach looks like it can work for at least a few of the theories that actually describe the real world.

The amplituhedron was mysterious. Instead of space and time, it described the world in terms of a geometric space whose meaning was unclear. Nima’s new approach also describes the world in terms of a geometric space, but this space’s meaning is a lot more clear.

The space is called “kinematic space”. That probably still sounds mysterious. “Kinematic” in physics refers to motion. In the beginning of a physics class when you study velocity and acceleration before you’ve introduced a single force, you’re studying kinematics. In particle physics, kinematic refers to the motion of the particles you detect. If you see an electron going up and to the right at a tenth the speed of light, those are its kinematics.

Kinematic space, then, is the space of observations. By saying that his approach is based on ideas in kinematic space, what Nima is saying is that it describes colliding particles not based on what they might be doing before they’re detected, but on mathematics that asks questions only about facts about the particles that can be observed.

(For the experts: this isn’t quite true, because he still needs a concept of loop momenta. He’s getting the actual integrands from his approach, rather than the dual definition he got from the amplituhedron. But he does still have to integrate one way or another.)
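A concrete example of coordinates on kinematic space (a textbook definition, my illustration rather than anything from Nima’s papers): for two particles colliding and two coming out, the Mandelstam invariants

\[
s = (p_1 + p_2)^2, \qquad t = (p_1 - p_3)^2
\]

are built entirely from the measured momenta $p_i$ of the incoming and outgoing particles, with no reference to what happened in between.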

Quantum mechanics famously has many interpretations. In my experience, Nima’s favorite interpretation is the one known as “shut up and calculate”. Instead of arguing about the nature of an indeterminately philosophical “real world”, Nima thinks quantum physics is a tool to calculate things people can observe in experiments, and that’s the part we should care about.

From a practical perspective, I agree with him. And I think if you have this perspective, then ultimately, kinematic space is where your theories have to live. Kinematic space is nothing more or less than the space of observations, the space defined by where things land in your detectors, or if you’re a human and not a collider, in your eyes. If you want to strip away all the speculation about the nature of reality, this is all that is left over. Any theory, of any reality, will have to be described in this way. So if you think reality might need a totally new weird theory, it makes sense to approach things like Nima does, and start with the one thing that will always remain: observations.

Amplitudes 2024, Continued

I’ve now had time to look over the rest of the slides from the Amplitudes 2024 conference, so I can say something about Thursday and Friday’s talks.

Thursday was gravity-focused. Zvi Bern’s review talk was actually a review, a tour of the state of the art in using amplitudes techniques to make predictions for gravitational wave physics. Bern emphasized that future experiments will require much more precision: two more orders of magnitude, which in our lingo amounts to two more “loops”. The current state of the art is three loops, but they’ve been hacking away at four, doing things piece by piece in a way that cleverly also yields publications (for example, they can do just the integrals needed for supergravity, which are simpler). Four loops here is the first time that the Feynman diagrams involve Calabi-Yau manifolds, so they will likely need techniques from some of the folks I talked about last week. Once they have four loops, they’ll want to go to five, since that is the level of precision you need to learn something about the material in neutron stars. The talk covered a variety of other developments, some of which were talked about later on Thursday and some of which were only mentioned here.

Of that day’s other speakers, Stefano De Angelis, Lucile Cangemi, Mikhail Ivanov, and Alessandra Buonanno also focused on gravitational waves. De Angelis talked about the subtleties that show up when you try to calculate gravitational waveforms directly with amplitudes methods, showcasing various improvements to the pipeline there. Cangemi talked about a recurring question with its own list of subtleties, namely how the Kerr metric for spinning black holes emerges from the math of amplitudes of spinning particles. Gravitational waves were the focus of only the second half of Ivanov’s talk, where he talked about how amplitudes methods can clear up some of the subtler effects people try to take into account. The first half was about another gravitational application, that of using amplitudes methods to compute the correlations of galaxy structures in the sky, a field where it looks like a lot of progress can be made. Finally, Buonanno gave the kind of talk she’s given a few times at these conferences, a talk that puts these methods in context, explaining how amplitudes results are packaged with other types of calculations into the Effective-One-Body framework, which is then more directly used at LIGO. This year’s talk went into more detail about what the predictions are actually used for, which I appreciated. I hadn’t realized that there have been a handful of black hole collisions discovered by other groups from LIGO’s data, a win for open science! Her slides had a nice diagram explaining which data from the gravitational wave is used to infer which black hole properties, quite a bit more organized than the statistical template-matching I was imagining. She explained the logic behind Bern’s statement that gravitational wave telescopes will need two more orders of magnitude, pointing out that that kind of precision is necessary to be sure that something that might appear to be a deviation from Einstein’s theory of gravity is not actually a subtle effect of known physics. Her method is typically adjusted to fit numerical simulations, but she showed that even without that adjustment its predictions now fit the numerics quite well, thanks in part to contributions from amplitudes calculations.

Of the other talks that day, David Kosower’s was the only one that didn’t explicitly involve gravity. Instead, his talk focused on a more general question, namely how to find a well-defined basis of integrals for Feynman diagrams, which turns out to involve some rather subtle mathematics and geometry. This is a topic that my former boss Jake Bourjaily worked on in a different context for some time, and I’m curious whether there is any connection between the two approaches. Oliver Schlotterer gave the day’s second review talk, once again of the “actually a review” kind, covering a variety of recent developments in string theory amplitudes. These include some new pictures of how string theory amplitudes that correspond to Yang-Mills theories “square” to amplitudes involving gravity at higher loops, and progress towards going past two loops, the current state of the art for most string amplitude calculations. (For the experts: this does not involve taking the final integral over the moduli space, which is still a big unsolved problem.) He also talked about progress by Sebastian Mizera and collaborators in understanding how the integrals that show up in string theory make sense in the complex plane. This is a problem that people had mostly managed to avoid dealing with because of certain simplifications in the calculations people typically did (no moduli space integration, expansion in the string length), but taking things seriously means confronting it, and Mizera and collaborators found a novel solution to the problem that has already passed a lot of checks. Finally, Tobias Hansen’s talk also related to string theory, specifically in anti-de-Sitter space, where the duality between string theory and N=4 super Yang-Mills lets him and his collaborators do Yang-Mills calculations and see markedly stringy-looking behavior.

Friday began with Kevin Costello, whose not-really-a-review talk dealt with his work with Natalie Paquette showing that one can use an exactly-solvable system to learn something about QCD. This only works for certain rather specific combinations of particles: for example, in order to have three colors of quarks, they need to do the calculation for nine flavors. Still, they managed to do a calculation with this method that had not previously been done with more traditional means, and to me it’s impressive that anything like this works for a theory without supersymmetry. Mina Himwich and Diksha Jain both had talks related to a topic of current interest, “celestial” conformal field theory, a picture that tries to apply ideas from holography, in which a theory on the boundary of a space fully describes the interior, to the “boundary” of flat space, infinitely far away. Himwich talked about a symmetry observed in that research program, and how that symmetry can be seen using more normal methods, which also led to some suggestions of how the idea might be generalized. Jain likewise covered a different approach, one in which one sets artificial boundaries in flat space and sees what happens when those boundaries move.

Yifei He described progress in the modern S-matrix bootstrap approach. Previously, this approach had gotten quite general constraints on amplitudes. She tried to do something more specific, and predict the S-matrix for scattering of pions in the real world. By imposing compatibility with knowledge from low energies and high energies, she was able to find a much more restricted space of consistent S-matrices, and these turn out to actually match pretty well to experimental results. Mathieu Giroux addressed an important question for a variety of parts of amplitudes research: how to predict the singularities of Feynman diagrams. He explored a recursive approach to solving Landau’s equations for these singularities, one which seems impressively powerful, in one case being able to find a solution that in text form is approximately the length of Harry Potter. Finally, Juan Maldacena closed the conference by talking about some progress he’s made towards an old idea, that of defining M theory in terms of a theory involving actual matrices. This is a very challenging thing to do, but he is at least able to tackle the simplest possible case, involving correlations between three observations. This had a known answer, so his work serves mostly as a confirmation that the original idea makes sense at least at this level.

Beyond Elliptic Polylogarithms in Oaxaca

Arguably my biggest project over the last two years wasn’t a scientific paper, a journalistic article, or even a grant application. It was a conference.

Most of the time, when scientists organize a conference, they do it “at home”. Either they host the conference at their own university, or rent out a nearby event venue. There is an alternative, though. Scattered around the world, often in out-of-the way locations, are places dedicated to hosting scientific conferences. These places accept applications each year from scientists arguing that their conference would best serve the place’s scientific mission.

One of these places is the Banff International Research Station in Alberta, Canada. Since 2001, Banff has been hosting gatherings of mathematicians from around the world, letting them focus on their research in an idyllic Canadian ski resort.

If you don’t like skiing, though, Banff still has you covered! They have “affiliate centers”, with one elsewhere in Canada, one in China, two on the way in India and Spain…and one that particularly caught my interest, in Oaxaca, Mexico.

Back around this time of year in 2022, I started putting a proposal together for a conference at the Casa Matemática Oaxaca. The idea was a conference discussing a frontier of the field: how to express the strange mathematical functions that live in Feynman diagrams. I assembled a big team of co-organizers, five in total. At the time, I wasn’t sure whether I could find a permanent academic job, so I wanted to make sure there were enough people involved that they could run the conference without me.

Followers of the blog know I did end up finding that permanent job…only to give it up. In the end, I wasn’t able to make it to the conference. But my four co-organizers were (modulo some delays in the Houston airport). The conference was this week, with the last few talks happening over the next few hours.

I gave a short speech via Zoom at the beginning of the conference, a mix of welcome and goodbye. Since then I haven’t had the time to tune in to the talks, but they’re good folks and I suspect they’re having good discussions.

I do regret that, near the end, I wasn’t able to give the conference the focus it deserved. There were people we really hoped to have, but who couldn’t afford the travel. I’d hoped to find a source of funding that could support them, but the plan fell through. The week after Amplitudes 2024 was also a rough time to have a conference in this field, with many people who would have attended not able to go to both. (At least they weren’t the same week, thanks to some flexibility on the part of the Amplitudes organizers!)

Still, it’s nice to see something I’ve been working on for two years finally come to pass, to hopefully stir up conversations between different communities and give various researchers a taste of one of Mexico’s most beautiful places. I still haven’t been to Oaxaca yet, but I suspect I will eventually. Danish companies do give at minimum five weeks of holiday per year, so I should get a chance at some point.

(Not At) Amplitudes 2024 at the IAS

For over a decade, I studied scattering amplitudes, the formulas particle physicists use to find the probability that particles collide, or scatter, in different ways. I went to Amplitudes, the field’s big yearly conference, every year from 2015 to 2023.

This year is different. I’m on the way out of the field, looking for my next steps. Meanwhile, Amplitudes 2024 is going full speed ahead at the Institute for Advanced Study in Princeton.

With poster art that is, as the kids probably don’t say anymore, “on fleek”

The talks aren’t live-streamed this year, but they are posting slides, and they will be posting recordings. Since a few of my readers are interested in new amplitudes developments, I’ve been paging through the posted slides looking for interesting highlights. So far, I’ve only seen slides from the first few days: I will probably write about the later talks in a future post.

Each day of Amplitudes this year has two 45-minute “review talks”, one first thing in the morning and the other first thing after lunch. I put “review talks” in quotes because they vary a lot, between talks that try to introduce a topic for the rest of the conference to talks that mostly focus on the speaker’s own research. Lorenzo Tancredi’s talk was of the former type, an introduction to the many steps that go into making predictions for the LHC, with a focus on those topics where amplitudeologists have made progress. The talk opens with the type of motivation I’d been writing in grant and job applications over the last few years (we don’t know most of the properties of the Higgs yet! To measure them, we’ll need to calculate amplitudes with massive particles to high precision!), before moving into a review of the challenges and approaches in different steps of these calculations. While Tancredi apologizes in advance that the talk may be biased, I found it surprisingly complete: if you want to get an idea of the current state of the “LHC amplitudes pipeline”, his slides are a good place to start.

Tancredi’s talk serves as introduction for a variety of LHC-focused talks, some later that day and some later in the week. Federica Devoto discussed high-energy quarks while Chiara Signorile-Signorile and George Sterman showed advances in handling of low-energy particles. Xiaofeng Xu has a program that helps predict symbol letters, the building-blocks of scattering amplitudes that can be used to reconstruct or build up the whole thing, while Samuel Abreu talked about a tricky state-of-the-art case where Xu’s program misses part of the answer.

Later Monday morning veered away from the LHC to focus on more toy-model theories. Renata Kallosh’s talk in particular caught my attention. This blog is named after a long-standing question in amplitudes: will the four-graviton amplitude in N=8 supergravity diverge at seven loops in four dimensions? This seemingly arcane question is deep down a question about what is actually required for a successful theory of quantum gravity, and in particular whether some of the virtues of string theory can be captured by a simpler theory instead. Answering the question requires a prodigious calculation, and the more “loops” are involved the more difficult it is. Six years ago, the calculation got to five loops, and it hasn’t passed that mark since then. That five-loop calculation gave some reason for pessimism, a nice pattern at lower loops that stopped applying at five.

Kallosh thinks she has an idea of what to expect. She’s noticed a symmetry in supergravity, one that hadn’t previously been taken into account. She thinks that symmetry should keep N=8 supergravity from diverging on schedule…but only in exactly four dimensions. All of the lower-loop calculations in N=8 supergravity diverged in higher dimensions than four, and it seems like with this new symmetry she understands why. Her suggestion is to focus on other four-dimensional calculations. If seven loops is still too hard, then dialing back the amount of supersymmetry from N=8 to something lower should let her confirm her suspicions. Already a while back N=5 supergravity was found to diverge later than expected in four dimensions. She wants to know whether that pattern continues.

(Her backup slides also have a fun historical point: in dimensions greater than four, you can’t get elliptical planetary orbits. So four dimensions is special for our style of life.)

Other talks on Monday included a talk by Zahra Zahraee on progress towards “solving” the field’s favorite toy model, N=4 super Yang-Mills. Christian Copetti talked about the work I mentioned here, while Meta employee François Charton’s “review talk” dealt with his work applying machine learning techniques to “translate” between questions in mathematics and their answers. In particular, he reported progress with my current boss Matthias Wilhelm and frequent collaborator and mentor Lance Dixon on using transformers to guess high-loop formulas in N=4 super Yang-Mills. They have an interesting proof of principle now, but it will probably still be a while until they can use the method to predict something beyond the state of the art.

In the meantime at least they have some hilarious AI-generated images

Tuesday’s review by Ian Moult was genuinely a review, but of a topic not otherwise covered at the conference, that of “detector observables”. The idea is that rather than talking about which individual particles are detected, one can ask questions that make more sense in terms of the experimental setup, like asking about the amounts of energy deposited in different detectors. This type of story has gone from an idle observation by theorists to a full research program, with theorists and experimentalists in active dialogue.

Natalia Toro brought up that, while we say each particle has a definite spin, that may not actually be the case. Particles with so-called “continuous spins” can masquerade as particles with a definite integer spin at lower energies. Toro and Schuster promoted this view of particles ten years ago, but now can make a bit more sense of it, including understanding how continuous-spin particles can interact.

The rest of Tuesday continued to be a bit of a grab-bag. Yael Shadmi talked about applying amplitudes techniques to Effective Field Theory calculations, while Franziska Porkert talked about a Feynman diagram involving two different elliptic curves. Interestingly (well, to me at least), the curves never appear “together”: you can represent the diagram as a sum of terms involving one curve and terms involving the other, much simpler than it could have been!

Tuesday afternoon’s review talk by Iain Stewart was one of those “guest from an adjacent field” talks, in this case from an approach called SCET, and at first glance didn’t seem to do much to reach out to the non-SCET people in the audience. Frequent past collaborator of mine Andrew McLeod showed off a new set of relations between singularities of amplitudes, found by digging into the structure of the equations discovered by Landau that control this behavior. He and his collaborators are proposing a new way to keep track of these things involving “minimal cuts”, a clear pun on the “maximal cuts” that have been of great use to other parts of the community. Whether this has more or less staying power than “negative geometries” remains to be seen.

Closing Tuesday, Shruti Paranjape showed there was more to discover about the simplest amplitudes, called “tree amplitudes”. By asking why these amplitudes are sometimes equal to zero, she was able to draw a connection to the “double-copy” structure that links the theory of the strong force and the theory of gravity. Johannes Henn’s talk noticed an intriguing pattern. A while back, I had looked into the circumstances under which amplitudes are positive. Henn found that “positive” is an understatement. In a certain region, the amplitudes we were looking at turn out not just to be positive, but always decreasing, and with second derivative always positive. In fact, the derivatives appear to alternate, always with one sign or the other as one takes more derivatives. Henn is calling this unusual property “completely monotonous”, and trying to figure out how widely it holds.
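That alternating pattern is the standard mathematical definition: a function $f(s)$ is completely monotonic if

\[
(-1)^n \, \frac{d^n f}{ds^n} \;\geq\; 0 \quad \text{for all } n = 0, 1, 2, \ldots
\]

so $f$ is positive, decreasing, convex, and so on down the line of derivatives.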

Wednesday had a more mathematical theme. Bernd Sturmfels began with a “review talk” that largely focused on his own work on the space of curves with marked points, including a surprising analogy between amplitudes and the likelihood functions one needs to minimize in machine learning. Lauren Williams was the other “actual mathematician” of the day, and covered her work on various topics related to the amplituhedron.

The remaining talks on Wednesday were not literally by mathematicians, but were “mathematically informed”. Carolina Figueiredo and Hayden Lee talked about work with Nima Arkani-Hamed on different projects. Figueiredo’s talk covered recent developments in the “curve integral formalism”, a recent step in Nima’s quest to geometrize everything in sight, this time in the context of more realistic theories. The talk, which like Nima’s own talks used tablet-written slides, described new insights one can gain from this picture, including new pictures of how more complicated amplitudes can be built up from simpler ones. If you want to understand the curve integral formalism further, I’d actually suggest instead looking at Mark Spradlin’s slides from later that day. The second part of Spradlin’s talk dealt with an area Figueiredo marked for future research, including fermions in the curve integral picture. I confess I’m still not entirely sure what the curve integral formalism is good for, but Spradlin’s talk gave me a better idea of what it’s doing. (The first part of his talk was on a different topic, exploring the space of string-like amplitudes to figure out which ones are actually consistent.)

Hayden Lee’s talk mentions the emergence of time, but the actual story is a bit more technical. Lee and collaborators are looking at cosmological correlators, observables like scattering amplitudes but for cosmology. Evaluating these is challenging with standard techniques, but can be approached with some novel diagram-based rules which let the results be described in terms of the measurable quantities at the end in a kind of “amplituhedron-esque” way.

Aidan Herderschee and Mariana Carrillo González had talks on Wednesday on ways of dealing with curved space. Herderschee talked about how various amplitudes techniques need to be changed to deal with amplitudes in anti-de-Sitter space, with difference equations replacing differential equations and sum-by-parts relations replacing integration-by-parts relations. Carrillo González looked at curved space through the lens of a special kind of toy model theory called a self-dual theory, which allowed her to do cosmology-related calculations using a double-copy technique.

Finally, Stephen Sharpe had the second review talk on Wednesday. This was another “outside guest” talk, a discussion from someone who does Lattice QCD about how they have been using their methods to calculate scattering amplitudes. They seem to count the number of particles a bit differently than we do; I’m curious whether this came up in the question session.

At Quanta This Week, and Some Bonus Material

When I moved back to Denmark, I mentioned that I was planning to do more science journalism work. The first fruit of that plan is up this week: I have a piece at Quanta Magazine about a perennially trendy topic in physics, the S-matrix.

It’s been great working with Quanta again. They’ve been thorough, attentive to the science, and patient with my still-uncertain life situation. I’m quite likely to have more pieces there in future, and I’ve got ideas cooking with other outlets as well, so stay tuned!

My piece with Quanta is relatively short, the kind of thing they used to label a “blog” rather than, say, a “feature”. Since the S-matrix is a pretty broad topic, there were a few things I couldn’t cover there, so I thought it would be nice to discuss them here. You can think of this as a kind of “bonus material” section for the piece. So before reading on, read my piece at Quanta first!

Welcome back!

At Quanta I wrote a kind of cartoon of the S-matrix, asking you to think about it as a matrix of probabilities, with rows for input particles and columns for output particles. There are a couple different simplifications I snuck in there, the pop physicist’s “lies to children“. One, I already flag in the piece: the entries aren’t really probabilities, they’re complex numbers, probability amplitudes.
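The relation between the two is the usual quantum-mechanical one: if $S_{fi}$ is the complex entry connecting input $i$ to output $f$, the measurable probability is its squared magnitude,

\[
P(i \to f) = \big| S_{fi} \big|^2 .
\]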

There’s another simplification that I didn’t have space to flag. The rows and columns aren’t just lists of particles, they’re lists of particles in particular states.

What do I mean by states? A state is a complete description of a particle. A particle’s state includes its energy and momentum, including the direction it’s traveling in. It includes its spin, and the direction of its spin: for example, clockwise or counterclockwise? It also includes any charges, from the familiar electric charge to the color of a quark.

This makes the matrix even bigger than you might have thought. I was already describing an infinite matrix, one where you can have as many columns and rows as you can imagine numbers of colliding particles. But the number of rows and columns isn’t just infinite, but uncountable, as many rows and columns as there are different numbers you can use for energy and momentum.

For some of you, an uncountably infinite matrix doesn’t sound much like a matrix. But for mathematicians familiar with vector spaces, this is totally reasonable. Even if your matrix is infinite, or even uncountably infinite, it can still be useful to think about it as a matrix.

Another subtlety, which I’m sure physicists will be howling at me about: the Higgs boson is not supposed to be in the S-matrix!

In the article, I alluded to the idea that the S-matrix lets you “hide” particles that only exist momentarily inside of a particle collision. The Higgs is precisely that sort of particle, an unstable particle. And normally, the S-matrix is supposed to only describe interactions between stable particles, particles that can survive all the way to infinity.

In my defense, if you want a nice table of probabilities to put in an article, you need an unstable particle: interactions between stable particles depend on their energy and momentum, sometimes in complicated ways, while a single unstable particle will decay into a reliable set of options.

More technically, there are also contexts in which it’s totally fine to think about an S-matrix between unstable particles, even if it’s not usually how we use the idea.
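For the technically inclined: an unstable particle like the Higgs shows up in the S-matrix between stable particles as a resonance, a pole at complex energy. Schematically (the standard Breit-Wigner form, factors of convention suppressed), an amplitude near the resonance behaves like

\[
\mathcal{A}(s) \;\sim\; \frac{1}{s - M^2 + i M \Gamma} ,
\]

where $M$ is the particle’s mass and $\Gamma$ its decay width, so the particle “exists” in the analytic structure even though it never reaches the detectors.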

My piece also didn’t have a lot of room to discuss new developments. I thought at minimum I’d say a bit more about the work of the young people I mentioned. You can think of this as an appetizer: there are a lot of people working on different aspects of this subject these days.

Part of the initial inspiration for the piece was when an editor at Quanta noticed a recent paper by Christian Copetti, Lucía Cordova, and Shota Komatsu. The paper shows an interesting case, where one of the “logical” conditions imposed in the original S-matrix bootstrap doesn’t actually apply. It ended up being too technical for the Quanta piece, but I thought I could say a bit about it, and related questions, here.

Some of the conditions imposed by the original bootstrappers seem unavoidable. Quantum mechanics makes no sense if it doesn’t compute probabilities, and probabilities can’t be negative, or larger than one, so we’d better have an S-matrix that obeys those rules. Causality is another big one: we probably shouldn’t have an S-matrix that lets us send messages back in time and change the past.

Other conditions came from a mixture of intuition and observation. Crossing is a big one here. Crossing tells you that you can take an S-matrix entry with incoming particles, and relate it to a different S-matrix entry with outgoing anti-particles, using techniques from the calculus of complex numbers.
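Schematically, for a two-to-two process, crossing relates amplitudes like

\[
\mathcal{A}_{ab \to cd}(s, t) \quad\longleftrightarrow\quad \mathcal{A}_{a\bar{c} \to \bar{b} d}(t, s) ,
\]

with the particle $c$ moved to the other side as its antiparticle $\bar{c}$, and the two amplitudes related by analytically continuing in the Mandelstam invariants $s$ and $t$.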

Crossing may seem quite obscure, but after some experience with S-matrices it feels obvious and intuitive. That’s why for an expert, results like the paper by Copetti, Cordova, and Komatsu seem so surprising. What they found was that a particularly exotic type of symmetry, called a non-invertible symmetry, was incompatible with crossing symmetry. They could find consistent S-matrices for theories with these strange non-invertible symmetries, but only if they threw out one of the basic assumptions of the bootstrap.

This was weird, but upon reflection not too weird. In theories with non-invertible symmetries, the behaviors of different particles are correlated together. One can’t treat far away particles as separate, the way one usually does with the S-matrix. So trying to “cross” a particle from one side of a process to another changes more than it usually would, and you need a more sophisticated approach to keep track of it. When I talked to Cordova and Komatsu, they related this to another concept called soft theorems, aspects of which have been getting a lot of attention and funding of late.

In the meantime, others have been trying to figure out where the crossing rules come from in the first place.

There were attempts in the 1970s to understand crossing in terms of other fundamental principles. They slowed in part because, as the original S-matrix bootstrap was overtaken by QCD, there was less motivation to do this type of work anymore. But they also ran into a weird puzzle. When they tried to use the rules of crossing more broadly, only some of the things they found looked like S-matrices. Others looked like stranger, meaningless calculations.

A recent paper by Simon Caron-Huot, Mathieu Giroux, Holmfridur Hannesdottir, and Sebastian Mizera revisited these meaningless calculations, and showed that they aren’t so meaningless after all. In particular, some of them match well to the kinds of calculations people wanted to do to predict gravitational waves from colliding black holes.

Imagine a pair of black holes passing close to each other, then scattering away in different directions. Unlike particles in a collider, we have no hope of catching the black holes themselves. They’re big classical objects, and they will continue far away from us. We do catch gravitational waves, emitted from the interaction of the black holes.

This different setup turns out to give the problem a very different character. It ends up meaning that instead of the S-matrix, you want a subtly different mathematical object, one related to the original S-matrix by crossing relations. Using crossing, Caron-Huot, Giroux, Hannesdottir and Mizera found many different quantities one could observe in different situations, linked by the same rules that the original S-matrix bootstrappers used to relate S-matrix entries.
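Roughly speaking (and this is my schematic gloss, not notation from their paper): a collider calculation targets matrix elements $\langle \text{out} | S | \text{in} \rangle$, while for the black holes you want the expectation value of some observable $\mathcal{O}$, like the radiated wave signal, in the incoming state,

$$\langle \text{in} | S^\dagger \, \mathcal{O} \, S | \text{in} \rangle,$$

built from the S-matrix and its conjugate at once. Quantities of this “in-in” type are what the crossing relations end up connecting back to ordinary amplitudes.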

The work of these two groups is just some of the work done in the new S-matrix program, but it’s typical of where the focus is going. People are trying to understand the general rules found in the past. They want to know where they came from, and as a consequence, when they can go wrong. They have a lot to learn from the older papers, and a lot of new insights come from diligent reading. But they also have a lot of new insights to discover, based on the new tools and perspectives of the modern day. For the most part, they don’t expect to find a new unified theory of physics from bootstrapping alone. But by learning how S-matrices work in general, they expect to find valuable knowledge no matter how the future goes.

How Subfields Grow

A commenter recently asked me about the different “tribes” in my sub-field. I’ve been working in an area called “amplitudeology”, where we try to find more efficient ways to make predictions (calculate “scattering amplitudes”) for particle physics and gravitational waves. I plan to do a longer post on the “tribes” of amplitudeology…but not this week.

This week, I’ve got a simpler goal. I want to talk about where these kinds of “tribes” come from, in general. A sub-field is a group of researchers focused on a particular idea, or a particular goal. How do those groups change over time? How do new sub-groups form? For the amplitudes fans in the audience, I’ll use amplitudeology examples to illustrate.

The first way subfields gain new tribes is by differentiation. Do a PhD or a postdoc with someone in a subfield, and you’ll learn that subfield’s techniques. That’s valuable, but probably not enough to get you hired: if you’re just a copy of your advisor, then the field just needs your advisor; research doesn’t need to be done twice. You need to differentiate yourself, finding a variant of what your advisor does where you can excel. The most distinct such variants go on to form distinct tribes of their own. This can also happen for researchers at the same level who collaborate as postdocs: each has to show something new, beyond what they did as a team. In my sub-field, this is the source of some of the bigger tribes. Lance Dixon, Zvi Bern, and David Kosower made their names working together, but when they found long-term positions they made new tribes of their own. Zvi Bern focused on supergravity, and later on gravitational waves, while Lance Dixon was a central figure in the symbology bootstrap.

(Of course, if you differentiate too far you end up in a different sub-field, or a different field altogether. Jared Kaplan was an amplitudeologist, but I wouldn’t call Anthropic an amplitudeology project, although it would help my job prospects if it were!)

The second way subfields gain new tribes is by bridges. Sometimes, a researcher in a sub-field needs to collaborate with someone outside of it. These collaborations can be one-and-done, but sometimes they strike a spark, and people in each sub-field start realizing they have a lot more in common than they thought. They start showing up to each other’s conferences, and eventually identifying as two tribes in a single sub-field. An example from amplitudeology is the group founded by Dirk Kreimer, with a long track record of interesting work on the boundary between math and physics. They didn’t start out interacting with the “amplitudeology” community itself, but over time they collaborated with it more and more, and now I think it’s fair to say they’re a central part of the sub-field.

A third way subfields gain new tribes is through newcomers. Sometimes, someone outside of a subfield will decide they have something to contribute. They’ll read up on the latest papers, learn the subfield’s techniques, and do something new with them: applying them to a new problem of their own interest, or applying their own methods to a problem in the subfield. Because these people bring something new, either in what they work on or how they do it, they often spin off new tribes. Many new tribes in amplitudeology have come from this process, from Edward Witten’s work on the twistor string bringing in twistor approaches to Nima Arkani-Hamed’s idiosyncratic goals and methods.

There are probably other ways subfields gain new tribes, but these are the ones I came up with. If you think of more, let me know in the comments!

What’s in a Subfield?

A while back, someone asked me what my subfield, amplitudeology, is really about. I wrote an answer to that here, a short-term and a long-term perspective that line up with the stories we often tell about the field. I talked about how we try to figure out ways to calculate probabilities faster, first for understanding the output of particle colliders like the LHC, then more recently for gravitational wave telescopes. I talked about how the philosophy we use for that carries us farther, how focusing on the minimal information we need to make a prediction gives us hope that we can generalize, and even propose totally new theories.

The world doesn’t follow stories, though, not quite so neatly. Try to define something as simple as the word “game” and you run into trouble. Some games have a winner and a loser, in some games everyone is on one team, and some games don’t have winners or losers at all. Games can involve physical exercise, computers, boards and dice, or just people telling stories. They can be played for fun or for money, and can be silly or deadly serious. Most have rules, but some don’t even have that. Instead, games are linked by history: a series of resemblances, people saying that “this” is a game because it’s kind of like “that”.

A subfield isn’t just a word, it’s a group of people. So subfields aren’t defined just by resemblance. Instead, they’re defined by practicality.

To ask what amplitudeology is really about, think about why you might want to call yourself an amplitudeologist. It could be a question of goals, certainly: you might care a lot about making better predictions for the LHC, or you could have some other grand story in mind about how amplitudes will save the world. It could instead be a matter of training: you learned certain methods, certain mathematics, a certain perspective, and now you apply them to your research, even if it goes further afield than what was considered “amplitudeology” before. It could even be a matter of community, joining with others who you think do cool stuff, even if you don’t share exactly the same goals or the same methods.

Calling yourself an amplitudeologist means you go to their conferences and listen to their talks; it means you look to them to collaborate, and pay attention to their papers. Those kinds of things define a subfield: not some grand mission statement, but practical questions of interest, what people work on and know and where they’re going with that. Like every other word, amplitudeology has not one story but a practical meaning that shifts and changes with time. That’s the way subfields should be: useful to the people who practice them.