Amplitudes 2023 Retrospective

I’m back from CERN this week, with a bit more time to write, so I thought I’d share some thoughts about last week’s Amplitudes conference.

One thing I got wrong in last week’s post: I’ve now been told that only 213 people actually showed up in person, as opposed to the 250-ish estimate I gave last week. This may seem like fewer than Amplitudes in Prague had, but it seems likely that a few fewer people showed up there than appeared on the website. Overall, the field is at least holding steady from year to year, and has definitely grown since the pandemic (2019’s attendance of 175 was already considered very large at the time).

It was cool having a conference at CERN proper, surrounded by the history of European particle physics. The lecture hall had an abstract particle collision carved into the wood, and the visitor center would in principle have had Standard Model coffee mugs, had they not been sold out until next May. (There was still plenty of other particle physics swag, Swiss chocolate, and Swiss chocolate that was also particle physics swag.) I’d planned to stay on-site at the CERN hostel, but I ended up appreciating that I didn’t: the folks who did seemed a bit cooped up by the end of the conference, even with the conference dinner as a chance to get out.

Past Amplitudes conferences have had associated public lectures. This time we had a not-supposed-to-be-public lecture, a discussion between Nima Arkani-Hamed and Beate Heinemann about the future of particle physics. Nima, prominent as an amplitudeologist, also has a long track record of reasoning about what might lie beyond the Standard Model. Beate Heinemann is an experimentalist, one who has risen through the ranks of a variety of different particle physics experiments, ending up well-positioned to take a broad view of all of them.

It would have been fun if the discussion had erupted into an argument, but despite some attempts at provocative questions from the audience, that was never going to happen: Beate and Nima have been friends for a long time. Instead, they exchanged perspectives on what’s coming up experimentally and on what theories could explain it. Both argued that it is best to pursue many different directions, with a variety of experiments covering a variety of approaches. (There wasn’t any evangelism for particular experiments, besides a joking sotto voce mention of a muon collider.) Nima in particular argued that, whether theorist or experimentalist, you have to have some belief that what you’re doing could lead to a huge breakthrough. If you think of yourself as just a “foot soldier”, covering one set of checks among many, then you’ll lose motivation. I think Nima would agree that this optimism is irrational but necessary, sort of like how one hears (maybe inaccurately) that most new businesses fail, yet someone still needs to start businesses.

Michelangelo Mangano’s talk on Thursday covered similar ground, but with different emphasis. He agrees that there are still things out there worth discovering: our current model of the Higgs, for instance, is in some ways just a guess, a simplest-possible answer that doesn’t explain as much as we’d like. But he also emphasized that Standard Model physics can be “new physics” too. Just because we know the model doesn’t mean we can calculate its consequences, and there is a wealth of results from the LHC that improve our models of protons, nuclei, and the physical situations they take part in, without changing the Standard Model.

We saw an impressive example of this in Gregory Korchemsky’s talk on Wednesday. He presented an experimental mystery, an odd behavior in the correlation of energies of jets of particles at the LHC. These jets can include a very large number of particles, enough to make it very hard to understand them from first principles. Instead, Korchemsky tried out our field’s favorite toy model, where such calculations are easier. By modeling the situation in the limit of a very large number of particles, he was able to reproduce the behavior of the experiment. The result was a reminder of what particle physics was like before the Standard Model, and what it might become again: partial models to explain odd observations, a quest to use the tools of physics to understand things we can’t just a priori compute.

On the other hand, the amplitudes field does do a priori computations pretty well too. Fabrizio Caola’s talk opened the conference by reminding us just how much our precise calculations can do. He pointed out that the LHC has only gathered 5% of its planned data, and already it is able to rule out certain types of new physics up to fairly high energies (by ruling out indirect effects that would show up in high-precision calculations). One of those precise calculations featured in the next talk, by Giulio Gambuti. (A FORM user, he made the diagrams that served as the basis for the header image of my Quanta article last winter.) Tiziano Peraro followed up with a technique meant to speed up these kinds of calculations, a trick to simplify one of the more computationally intensive steps in intersection theory.

The rest of Monday was more mathematical, with talks by Zeno Capatti, Jaroslav Trnka, Chia-Kai Kuo, Anastasia Volovich, Francis Brown, Michael Borinsky, and Anna-Laura Sattelberger. Borinsky’s talk felt the most practical, a refinement of his numerical methods complete with some actual claims about computational efficiency. Francis Brown discussed an impressively powerful result, a set of formulas that manages to unite a variety of invariants of Feynman diagrams under a shared explanation.

Tuesday began with what I might call “visitors”: people from adjacent fields with an interest in amplitudes. Fernando Alday described how the duality between string theory in AdS space and super Yang-Mills on the boundary can be used to get quite concrete information about string theory, calculating how the theory’s amplitudes are corrected by the curvature of AdS space using a kind of “bootstrap” method that felt nicely familiar. Tim Cohen talked about a kind of geometric picture of theories that extend the Standard Model, including an interesting discussion of whether it’s really “geometric”. Marko Simonovic explained how the integration techniques we develop in scattering amplitudes can also be relevant in cosmology, especially for the next generation of “sky mappers” like the Euclid telescope. This talk was especially interesting to me, since this sort of cosmology has a significant presence at CEA Paris-Saclay. Along those lines, an interesting paper, “Cosmology meets cohomology”, showed up during the conference. I haven’t had a chance to read it yet!

Just before lunch, we had David Broadhurst give one of his inimitable talks, complete with number theory, extremely precise numerics, and literary and historical references (apparently, Källén died flying his own plane). He also remedied a gap in our whimsically biological diagram naming conventions, renaming the pedestrian “double-box” as a (in this context, Orwellian) lobster. Karol Kampf described unusual structures in a particular Effective Field Theory, while Henriette Elvang’s talk addressed what would become a meaningful subtheme of the conference: methods from the mathematical field of optimization helping amplitudes researchers constrain the space of possible theories. Giulia Isabella covered another topic on this theme later in the day, though one of her group’s selling points is managing to avoid such heavy-duty computations.

The other three talks on Tuesday dealt with amplitudes techniques for gravitational wave calculations, as did the first talk on Wednesday. Several of the calculations dealt only with black holes that scatter past each other, rather than ones that collide and merge. While some of the results can be used indirectly to understand the colliding case too, a method to directly calculate the behavior of colliding black holes came up again and again as an important missing piece.

The talks on Wednesday had to start late, owing to a rather bizarre power outage (the lights in the room worked fine, but not the projector). Since Wednesday was the free afternoon (home of quickly sold-out CERN tours), this meant there were only three talks: Veneziano’s talk on gravitational scattering, Korchemsky’s talk, and Nima’s talk. Nima famously never finishes on time, and this time attempted to control his timing via the surprising method of presenting, rather than one topic, five “abstracts” on recent work that he had not yet published. Even more surprisingly, this almost worked, and he didn’t run too ridiculously over time, while still managing to hint at a variety of ways that the combinatorial lessons behind the amplituhedron are gradually yielding useful perspectives on more general realistic theories.

Thursday, Andrea Puhm began with a survey of celestial amplitudes, a topic that tries to build the same sort of powerful duality used in AdS/CFT, but for flat space instead. Researchers there are gradually tackling the weird sort-of-theory they find on the boundary of flat space. The next two talks, by Lorenz Eberhardt and Hofie Hannesdottir, shared a collaborator in common, namely Sebastian Mizera. They also shared a common theme: taking a problem most people would have assumed was solved and showing that approaching it carefully reveals extensive structure and new insights.

Cristian Vergu, in turn, delved deep into the literature to build up a novel and unusual integration method. We’ve chatted quite a bit about it at the Niels Bohr Institute, so it was nice to see it get some attention on the big stage. We then had an afternoon of trips beyond polylogarithms, with talks by Anne Spiering, Christoph Nega, and Martijn Hidding, each pushing the boundaries of what we can do with our hardest-to-understand integrals. Einan Gardi and Ruth Britto finished the day, with a deeper understanding of the behavior of high-energy particles and a new more mathematically compatible way of thinking about “cut” diagrams, respectively.

On Friday, João Penedones gave us an update on a technique with some links to the effective field theory-optimization ideas that came up earlier, one that “bootstraps” whole non-perturbative amplitudes. Shota Komatsu talked about an intriguing variant of the “planar” limit, one involving large numbers of particles and a slick re-writing of infinite sums of Feynman diagrams. Grant Remmen and Cliff Cheung gave a two-parter on a bewildering variety of things that are both surprisingly like, and surprisingly unlike, string theory: important progress towards answering the question “is string theory unique?”

Friday afternoon brought the last three talks of the conference. James Drummond had more progress trying to understand the symbol letters of supersymmetric Yang-Mills, while Callum Jones showed how Feynman diagrams can apply to yet another unfamiliar field, the study of vortices and their dynamics. Lance Dixon closed the conference without any Greta Thunberg references, but with a result that explains last year’s mystery of antipodal duality. The explanation involves an even more mysterious property called antipodal self-duality, so we’re not out of work yet!

11 thoughts on “Amplitudes 2023 Retrospective”

  1. gnvdude

    Great debrief. I’m a layman and just a non-academic. But one thing I like doing is going over to youtube and looking up the people you mentioned in your write-up and seeing what videos they have available on past lectures or interviews. I am able to follow what you guys are doing but I don’t understand the mathematics or physics. Just follow what science is doing in regard to understanding consciousness and the universe and world we live in.

  2. kashyap vasavada

    Hi!
    As a complete novice, I have a general question about the development of amplitudes methods. As I understand it, at the LHC, to get an approximate idea of cross sections people still use old-fashioned Feynman tree diagrams. Do you think amplitudes methods will ever be used for such calculations?

    1. 4gravitons (Post author)

      Oh, amplitudes methods are definitely already being used! Though it depends a bit on what you mean.

      Basically, “amplitudes methods” covers a lot of different ideas related to new techniques for calculating amplitudes and cross sections. Amplitudes researchers developed differential equation techniques that are now very widely used for LHC calculations, but those still use Feynman diagrams, so they may not be what you’re thinking of. Other techniques avoid Feynman diagrams and are used sometimes, but not always: generalized unitarity, for instance, is very useful for one-loop calculations when you have an amplitude with large numbers of particles, and a version of it called numerical unitarity has been very useful at two loops. Finally, some of our techniques are much more speculative: the “bootstrap” techniques I’ve worked on, where we essentially “guess” an amplitude without any Feynman diagrams or integrals, are currently mostly applied to toy models, but people keep thinking about ways to apply them to real LHC calculations, and occasionally there are new ideas.

      1. kashyap vasavada

        Thanks! Can you give a reference to such calculations, which go beyond old-fashioned Feynman diagrams, are used by experimentalists at the LHC to understand data, and are readily available online?

  3. mitchellporter

    “a result that explains last year’s mystery of antipodal duality”

    That made me sit up!

    “[in terms of] antipodal self-duality”

    OK. 🙂

    I haven’t thought about this for a while. But isn’t it possible that this will be true for a “purely mathematical” reason? The antipodal operation seems to be something like reversing the order of a wedge product of differential forms in some big kinematic space that you’re integrating over, in order to evaluate a scattering amplitude. (I’m sure that’s oversimplifying, but hopefully you recognize some resemblance to the real thing.) Seems like the kind of thing that might be explained by symmetries of the space and/or the object and/or the algebraic formulas…

    1. 4gravitons (Post author)

      So, on one level, I absolutely expect the answer to be “purely mathematical”, but that doesn’t preclude it having physical content too!

      I absolutely think it could have something to do with some symmetry of the integrand, maybe related to the graphs in some way, some sort of self-duality where some graphs are mapped into other ones. (A colleague recently reminded me that something like this happens at one loop in this paper.)

      That doesn’t mean it’s “not physical”, though. This is only happening in a particular physical theory, in particular observables, so it does tell you something about that theory and those observables. Their mathematical structure would turn out to have this interesting property, and others wouldn’t. And that’s kind of how every symmetry in physics works. 😉
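      As an aside, there is a simple textbook fact behind the “order-reversal” picture (this is the general shuffle Hopf algebra statement, not the specific construction used in the antipodal duality papers): for words with the shuffle product and deconcatenation coproduct, the antipode of a word is, up to sign, the same word reversed.

```latex
% Antipode S of the shuffle Hopf algebra on words a_1 a_2 \cdots a_n
% (shuffle product, deconcatenation coproduct):
% up to a sign, it reverses the order of the letters.
S\left(a_1 \otimes a_2 \otimes \cdots \otimes a_n\right)
  \;=\; (-1)^n \, a_n \otimes \cdots \otimes a_2 \otimes a_1 .
```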

      1. mitchellporter

        It turns out that in the Connes-Kreimer Hopf algebra of renormalization, the antipode generates the counterterms! But I’ve no time to see if this makes any sense in terms of antipodal self-duality…

        1. 4gravitons (Post author)

          Interesting! The relationship can’t be 100% direct, in part because Connes-Kreimer is the Hopf algebra of renormalization and not of the functions (though I think they’re supposed to be compatible? Not sure to what extent that’s proved), and in part because N=4 super Yang-Mills is UV finite and thus has no counterterms. Still, it’s a context in which the antipode actually does something physically meaningful, so it should tell us something.
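          For reference, the standard Connes-Kreimer construction (a schematic of the general definition, nothing specific to this discussion): the antipode of a 1PI graph is defined by a recursion over its divergent subgraphs, which is exactly the nested-subtraction structure of counterterms in the BPHZ forest formula.

```latex
% Connes-Kreimer antipode of a 1PI graph \Gamma, recursing over
% proper divergent subgraphs \gamma; \Gamma/\gamma is the graph
% with \gamma contracted to a point.
S(\Gamma) \;=\; -\,\Gamma \;-\; \sum_{\emptyset \neq \gamma \subsetneq \Gamma}
  S(\gamma)\, \Gamma/\gamma .
```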

