
Amplitudes 2023 Retrospective

I’m back from CERN this week, with a bit more time to write, so I thought I’d share some thoughts about last week’s Amplitudes conference.

One thing I got wrong in last week’s post: I’ve now been told that only 213 people actually showed up in person, rather than the 250-ish estimate I gave last week. That may look like a drop from Amplitudes in Prague, but it seems likely that a few fewer people showed up there than appeared on the website as well. Overall, the field is at least holding steady from year to year, and has definitely grown since before the pandemic (2019’s attendance of 175 was already considered very large).

It was cool having a conference at CERN proper, surrounded by the history of European particle physics. The lecture hall had an abstract particle collision carved into the wood, and the visitor center would in principle have had Standard Model coffee mugs, had they not been sold out until next May. (There was still plenty of other particle physics swag, Swiss chocolate, and Swiss chocolate that was also particle physics swag.) I’d planned to stay on-site at the CERN hostel, but I ended up appreciating that I didn’t: the folks who did seemed to end up a bit cooped up by the end of the conference, even with the conference dinner as a chance to get out.

Past Amplitudes conferences have had associated public lectures. This time we had a not-supposed-to-be-public lecture, a discussion between Nima Arkani-Hamed and Beate Heinemann about the future of particle physics. Nima, prominent as an amplitudeologist, also has a long track record of reasoning about what might lie beyond the Standard Model. Beate Heinemann is an experimentalist, one who has risen through the ranks of a variety of different particle physics experiments, ending up well-positioned to take a broad view of all of them.

It would have been fun if the discussion erupted into an argument, but despite some attempts at provocative questions from the audience, that was not going to happen: Beate and Nima have been friends for a long time. Instead, they exchanged perspectives: on what’s coming up experimentally, and what theories could explain it. Both argued that it was best to have many different directions, a variety of experiments covering a variety of approaches. (There wasn’t any evangelism for particular experiments, besides a joking sotto voce mention of a muon collider.) Nima in particular advocated that, whether theorist or experimentalist, you have to have some belief that what you’re doing could lead to a huge breakthrough. If you think of yourself as just a “foot soldier”, covering one set of checks among many, then you’ll lose motivation. I think Nima would agree that this optimism is irrational, but necessary, sort of like how one hears (maybe inaccurately) that most new businesses fail, but someone still needs to start businesses.

Michelangelo Mangano’s talk on Thursday covered similar ground, but with different emphasis. He agrees that there are still things out there worth discovering: our current model of the Higgs, for instance, is in some ways just a guess, a simplest-possible answer that doesn’t explain as much as we’d like. But he also emphasized that Standard Model physics can be “new physics” too. Just because we know the model doesn’t mean we can calculate its consequences, and there is a wealth of results from the LHC that improve our models of protons, nuclei, and the types of physical situations they take part in, without changing the Standard Model.

We saw an impressive example of this in Gregory Korchemsky’s talk on Wednesday. He presented an experimental mystery, an odd behavior in the correlation of energies of jets of particles at the LHC. These jets can include a very large number of particles, enough to make it very hard to understand them from first principles. Instead, Korchemsky tried out our field’s favorite toy model, where such calculations are easier. By modeling the situation in the limit of a very large number of particles, he was able to reproduce the behavior of the experiment. The result was a reminder of what particle physics was like before the Standard Model, and what it might become again: partial models to explain odd observations, a quest to use the tools of physics to understand things we can’t just a priori compute.

On the other hand, amplitudes does a priori computations pretty well too. Fabrizio Caola’s talk opened the conference by reminding us just how much our precise calculations can do. He pointed out that the LHC has only gathered 5% of its planned data, and already it is able to rule out certain types of new physics up to fairly high energies (by ruling out indirect effects that would show up in high-precision calculations). One of those precise calculations featured in the next talk, by Giulio Gambuti. (A FORM user, he provided the diagrams that became the basis for the header image of my Quanta article last winter.) Tiziano Peraro followed up with a technique meant to speed up these kinds of calculations, a trick to simplify one of the more computationally intensive steps in intersection theory.

The rest of Monday was more mathematical, with talks by Zeno Capatti, Jaroslav Trnka, Chia-Kai Kuo, Anastasia Volovich, Francis Brown, Michael Borinsky, and Anna-Laura Sattelberger. Borinsky’s talk felt the most practical, a refinement of his numerical methods complete with some actual claims about computational efficiency. Francis Brown discussed an impressively powerful result, a set of formulas that manages to unite a variety of invariants of Feynman diagrams under a shared explanation.

Tuesday began with what I might call “visitors”: people from adjacent fields with an interest in amplitudes. Alday described how the duality between string theory in AdS space and super Yang-Mills on the boundary can be used to get quite concrete information about string theory, calculating how the theory’s amplitudes are corrected by the curvature of AdS space using a kind of “bootstrap” method that felt nicely familiar. Tim Cohen talked about a kind of geometric picture of theories that extend the Standard Model, including an interesting discussion of whether it’s really “geometric”. Marko Simonovic explained how the integration techniques we develop in scattering amplitudes can also be relevant in cosmology, especially for the next generation of “sky mappers” like the Euclid telescope. This talk was especially interesting to me, since this sort of cosmology has a significant presence at CEA Paris-Saclay. Along those lines, an interesting paper, “Cosmology meets cohomology”, showed up during the conference. I haven’t had a chance to read it yet!

Just before lunch, we had David Broadhurst give one of his inimitable talks, complete with number theory, extremely precise numerics, and literary and historical references (apparently, Källén died flying his own plane). He also remedied a gap in our whimsically biological diagram naming conventions, renaming the pedestrian “double-box” as a (in this context, Orwellian) lobster. Karol Kampf described unusual structures in a particular Effective Field Theory, while Henriette Elvang’s talk addressed what would become a meaningful subtheme of the conference: methods from the mathematical field of optimization helping amplitudes researchers constrain the space of possible theories. Giulia Isabella covered another topic on this theme later in the day, though one of her group’s selling points is managing to avoid such heavy-duty computations.

The other three talks on Tuesday dealt with amplitudes techniques for gravitational wave calculations, as did the first talk on Wednesday. Several of the calculations only dealt with scattering black holes, instead of colliding ones. While some of the results can be used indirectly to understand the colliding case too, a method to directly calculate behavior of colliding black holes came up again and again as an important missing piece.

The talks on Wednesday had to start late, owing to a rather bizarre power outage (the lights in the room worked fine, but not the projector). Since Wednesday was the free afternoon (home of quickly sold-out CERN tours), that meant there were only three talks: Veneziano’s talk on gravitational scattering, Korchemsky’s talk, and Nima’s talk. Nima famously never finishes on time, and this time attempted to control his timing via the surprising method of presenting, rather than one topic, five “abstracts” on recent work that he had not yet published. Even more surprisingly, this almost worked, and he didn’t run too ridiculously over time, while still managing to hint at a variety of ways that the combinatorial lessons behind the amplituhedron are gradually yielding useful perspectives on more general realistic theories.

Thursday, Andrea Puhm began with a survey of celestial amplitudes, a topic that tries to build the same sort of powerful duality used in AdS/CFT, but for flat space instead. They’re gradually tackling the weird sort-of-theory they find on the boundary of flat space. The next two talks, by Lorenz Eberhardt and Hofie Hannesdottir, shared a collaborator in common, namely Sebastian Mizera. They also shared a common theme, taking a problem most people would have assumed was solved and showing that approaching it carefully reveals extensive structure and new insights.

Cristian Vergu, in turn, delved deep into the literature to build up a novel and unusual integration method. We’ve chatted quite a bit about it at the Niels Bohr Institute, so it was nice to see it get some attention on the big stage. We then had an afternoon of trips beyond polylogarithms, with talks by Anne Spiering, Christoph Nega, and Martijn Hidding, each pushing the boundaries of what we can do with our hardest-to-understand integrals. Einan Gardi and Ruth Britto finished the day, with a deeper understanding of the behavior of high-energy particles and a new more mathematically compatible way of thinking about “cut” diagrams, respectively.

On Friday, João Penedones gave us an update on a technique with some links to the effective-field-theory optimization ideas that came up earlier, one that “bootstraps” whole non-perturbative amplitudes. Shota Komatsu talked about an intriguing variant of the “planar” limit, one involving large numbers of particles and a slick re-writing of infinite sums of Feynman diagrams. Grant Remmen and Cliff Cheung gave a two-parter on a bewildering variety of things that are both surprisingly like, and surprisingly unlike, string theory: important progress towards answering the question “is string theory unique?”

Friday afternoon brought the last three talks of the conference. James Drummond had more progress trying to understand the symbol letters of supersymmetric Yang-Mills, while Callum Jones showed how Feynman diagrams can apply to yet another unfamiliar field, the study of vortices and their dynamics. Lance Dixon closed the conference without any Greta Thunberg references, but with a result that explains last year’s mystery of antipodal duality. The explanation involves an even more mysterious property called antipodal self-duality, so we’re not out of work yet!

At Amplitudes 2023 at CERN

I’m at the big yearly conference of my sub-field this week, called Amplitudes. This year, surprisingly for the first time, it’s at the very appropriate location of CERN.

Somewhat overshadowed by the very picturesque Alps

Amplitudes keeps on growing. In 2019, we had 175 participants. We were on Zoom in 2020 and 2021, with many more participants, but that probably shouldn’t count. In Prague last year we had 222. This year, I’ve been told we have even more, something like 250 participants (the list online is bigger, but includes people joining on Zoom). We’ve grown due to new students, but also new collaborations: people from adjacent fields who find the work interesting enough to join along. This year we have mathematicians talking about D-modules, bootstrappers finding new ways to get at amplitudes in string theory, beyond-the-standard-model theorists talking about effective field theories, and cosmologists talking about the large-scale structure of the universe.

The talks have been great, from clear discussions of earlier results to fresh-off-the-presses developments, plenty of work in progress, and even one talk where the speaker’s opinion changed during the coffee break. As we’re at CERN, there’s also a through-line about the future of particle physics, with a chat between Nima Arkani-Hamed and the experimentalist Beate Heinemann on Tuesday and a talk by Michelangelo Mangano about the meaning of “new physics” on Thursday.

I haven’t had a ton of time to write: I keep getting distracted by good discussions! As such, I’ll do my usual thing, and say a bit more about specific talks in next week’s post.

Cabinet of Curiosities: The Deluxe Train Set

I’ve got a new paper out this week with Andrew McLeod. I’m thinking of it as another entry in this year’s “cabinet of curiosities”, interesting Feynman diagrams with unusual properties. Although this one might be hard to fit into a cabinet.

Over the past few years, I’ve been finding Feynman diagrams with interesting connections to Calabi-Yau manifolds, the spaces originally studied by string theorists to roll up their extra dimensions. With Andrew and other collaborators, I found an interesting family of these diagrams called traintracks, which involve higher-and-higher dimensional manifolds as they get longer and longer.

This time, we started hooking our traintracks together.

We call diagrams like these traintrack network diagrams, or traintrack networks for short. The original traintracks just went “one way”: one family, going higher in Calabi-Yau dimension the longer they got. These networks branch out, one traintrack leading to another and another.

In principle, these are much more complicated diagrams. But we find we can work with them in almost the same way. We can find the same “starting point” we had for the original traintracks, the set of integrals used to find the Calabi-Yau manifold. We’ve even got a more reliable trick, a method recently honed by some friends of ours that consistently finds a Calabi-Yau manifold inside the original traintracks.

Surprisingly, though, this isn’t enough.

It works for one type of traintrack network, a so-called “cross diagram” like this:

But for other diagrams, if the network branches any more, the trick stops working. We still get an answer, but that answer is some more general space, not just a Calabi-Yau manifold.

That doesn’t mean that these general traintrack networks don’t involve Calabi-Yaus at all, mind you: it just means this method doesn’t tell us one way or the other. It’s also possible that simpler versions of these diagrams, involving fewer particles, will once again involve Calabi-Yaus. This is the case for some similar diagrams in two dimensions. But it’s starting to raise a question: how special are the Calabi-Yau related diagrams? How general do we expect them to be?

Another fun thing we noticed has to do with differential equations. There are equations that relate one diagram to another, simpler one. We’ve used them in the past to build up “ladders” of diagrams, relating each picture to one with one of its boxes “deleted”. We noticed, playing with these traintrack networks, that these equations do a bit more than we thought. “Deleting” a box can make a traintrack shorter, but it can also chop a traintrack in half, leaving two “dangling” pieces, one on either side.

This reminded me of an important point, one we occasionally lose track of. The best-studied diagrams related to Calabi-Yaus are called “sunrise” diagrams. If you squish together a loop in one of those diagrams, the whole diagram squishes together, becoming much simpler. Because of that, we’re used to thinking of these as diagrams with a single “geometry”, one that shows up when you don’t “squish” anything.

Traintracks, and traintrack networks, are different. “Squishing” the diagram, or “deleting” a box, gives you a simpler diagram, but not much simpler. In particular, the new diagram will still contain traintracks, and traintrack networks. That means that we really should think of each traintrack network not just as one “top geometry”, but as a collection of geometries, different Calabi-Yaus that break into different combinations of Calabi-Yaus in different ways. It’s something we probably should have anticipated, but the form these networks take is a good reminder, one that points out that we still have a lot to do if we want to understand these diagrams.

At Geometries and Special Functions for Physics and Mathematics in Bonn

I’m at a workshop this week. It’s part of a series of “Bethe Forums”, cozy little conferences run by the Bethe Center for Theoretical Physics in Bonn.

You can tell it’s an institute for theoretical physics because they have one of these, but not a “doing room”

The workshop’s title, “Geometries and Special Functions for Physics and Mathematics”, covers a wide range of topics. There are talks on Calabi-Yau manifolds, elliptic (and hyper-elliptic) polylogarithms, and cluster algebras and cluster polylogarithms. Some of the talks are by mathematicians, others by physicists.

In addition to the talks, this conference added a fun innovative element, “my favorite problem sessions”. The idea is that a speaker spends fifteen minutes introducing their “favorite problem”, then the audience spends fifteen minutes discussing it. Some treated these sessions roughly like short talks describing their work, with the open directions at the end framed as their favorite problem. Others aimed more broadly, trying to describe a general problem and motivate interest among people from other sub-fields.

This was a particularly fun conference for me, because the seemingly distinct topics all connect in one way or another to my own favorite problem. In our “favorite theory” of N=4 super Yang-Mills, we can describe our calculations in terms of an “alphabet” of pieces that let us figure out predictions almost “by guesswork”. These alphabets, at least in the cases we know how to handle, turn out to correspond to mathematical structures called cluster algebras. If we look at interactions of six or seven particles, these cluster algebras are a powerful guide. For eight or nine, they still seem to matter, but are much harder to use.
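To give a flavor of what these alphabets look like: for six particles, every calculation done so far fits in just nine letters. In the conventions of the hexagon bootstrap papers (quoted here from memory, so treat the details as a sketch), the alphabet is

\[ \{\,u,\ v,\ w,\ 1-u,\ 1-v,\ 1-w,\ y_u,\ y_v,\ y_w\,\}, \]

where u, v, and w are cross-ratios built from the particles’ momenta, and the y’s are algebraic functions of them. The amplitude’s “symbol” is a sum of words in these letters, which is what makes guessing the answer feasible.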

For ten particles, though, things get stranger. That’s because ten particles is precisely where elliptic curves, and their related elliptic polylogarithms, show up. Things then get yet more strange, and with twelve particles or more we start seeing Calabi-Yau manifolds magically show up in our calculations.

We don’t know what an “alphabet” should look like for these Calabi-Yau manifolds (but I’m working on it). Because of that, we don’t know how these cluster algebras should appear.

In my view, any explanation for the role of cluster algebras in our calculations has to extend to these cases, to elliptic polylogarithms and Calabi-Yau manifolds. Without knowing how to frame an alphabet for these things, we won’t be able to solve the lingering mysteries that fill our field.

Because of that, “my favorite problem” is one of my biggest motivations, the question that drives a large chunk of what I do. It’s what’s made this conference so much fun, and so stimulating: almost every talk had something I wanted to learn.

Cabinet of Curiosities: The Train-Ladder

I’ve got a new paper out this week, with Andrew McLeod, Roger Morales, Matthias Wilhelm, and Chi Zhang. It’s yet another entry in this year’s “cabinet of curiosities”, quirky Feynman diagrams with interesting traits.

A while back, I talked about a set of Feynman diagrams I could compute with any number of “loops”, bypassing the approximations we usually need to use in particle physics. That wasn’t the first time someone did that. Back in the ’90s, some folks figured out how to do this for so-called “ladder” diagrams. These diagrams have two legs on one end for two particles coming in, two legs on the other end for two particles going out, and a ladder in between, like so:

There are infinitely many of these diagrams, but they’re all beautifully simple, variations on a theme that can be written down in a precise mathematical way.
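In fact, the whole infinite family fits in a single formula. If I’m remembering the conventions correctly (this is Usyukina and Davydychev’s classic result from the ’90s; normalizations vary from paper to paper, so treat this as a sketch), the L-loop ladder is

\[ \Phi^{(L)}(x,y) = -\frac{1}{L!\,\lambda}\sum_{j=L}^{2L}\frac{(-1)^{j}\,j!\,\ln^{2L-j}(y/x)}{(j-L)!\,(2L-j)!}\left[\mathrm{Li}_{j}\!\left(-\frac{1}{\rho x}\right)-\mathrm{Li}_{j}(-\rho y)\right], \]

with \(\lambda = \sqrt{(1-x-y)^2-4xy}\) and \(\rho = 2/(1-x-y+\lambda)\): nothing but logarithms and classical polylogarithms, no matter how many loops you add.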

Change things a little bit, though, and the situation gets wildly more intractable. Let the rungs of the ladder peek through the sides, and you get something looking more like the tracks for a train:

These traintrack integrals are much more complicated. Describing them requires the mathematics of Calabi-Yau manifolds, involving higher and higher dimensions as the tracks get longer. I don’t think there’s any hope of understanding these things for all loops, at least not any time soon.
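(To put a number on “higher and higher dimensions”: in the original traintracks paper, we found that the L-loop traintrack involves a Calabi-Yau manifold of complex dimension up to L-1, climbing one dimension with every loop. I’m quoting that count from memory, so take the precise bound as a sketch.)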

What if we aimed somewhere in between? A ladder that just started to turn traintrack?

Add just a single pair of rungs, and things remain relatively simple. It turns out we don’t need any complicated Calabi-Yau manifolds, just the simplest Calabi-Yau manifold, called an elliptic curve. It’s actually the same curve for every version of the diagram. And the situation is simple enough that, with some extra cleverness, it looks like we’ve found a trick to calculate these diagrams to any number of loops we’d like.
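(If you’re wondering what an elliptic curve looks like in this business: it typically shows up as the solutions of an equation like \( y^2 = (x-a_1)(x-a_2)(x-a_3)(x-a_4) \), with the branch points \(a_i\) built from the diagram’s momenta and masses. That’s the generic setup in this corner of the field, sketched from memory, not a formula from our paper.)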

(Another group figured out the curve, but not the calculation trick. They’ve solved different problems, though, studying all sorts of different traintrack diagrams. They sorted out some confusion I used to have about one of those diagrams, showing it actually behaves precisely the way we expected it to. All in all, it’s been a fun example of the way different scientists sometimes home in on the same discovery.)

These developments are exciting, because Feynman diagrams with elliptic curves are still tough to deal with. We still have whole conferences about them. These new elliptic diagrams can be a long list of test cases, things we can experiment with at any number of loops. With time, we might truly understand them as well as the ladder diagrams!

Visiting the IAS

I’m at the Institute for Advanced Study, or IAS, this week.

There isn’t a conference going on, but if you looked at the visitor list you’d be forgiven for thinking there was. We have talks in my subfield almost every day this week, two professors from my subfield here on sabbatical, and extra visitors on top of that.

The IAS is a bit of an odd place. Partly, that’s due to its physical isolation: tucked away in the woods behind Princeton, a half-hour’s walk from the nearest restaurant, it’s supposed to be a place for contemplation away from the hustle and bustle of the world.

Since the last time I visited they’ve added a futuristic new building, seen here out of my office window. The building is most notable for one wild promise: someday, they will serve dinner there.

Mostly, though, the weirdness of the IAS is due to the kind of institution it is.

Within a given country, most universities are pretty similar. Each may emphasize different teaching styles, and the US has a distinction between public and private, but (neglecting scammy for-profit universities) there are some commonalities of structure, both in how they’re organized and in how they’re funded. Even between countries, different university systems have quite a bit of overlap.

The IAS, though, is not a university. It’s an independent institute. Neighboring Princeton supplies it with PhD students, but otherwise the IAS runs, and funds, itself.

There are a few other places like that around the world. The Perimeter Institute in Canada is also independent, and also borrows students from a neighboring university. CERN pools resources from several countries across Europe and beyond, Nordita from just the Nordic countries. Generalizing further, many countries have some sort of national labs or other nation-wide systems, from US Department of Energy labs like SLAC to Germany’s Max Planck Institutes.

And while universities share a lot in common, non-university institutes can be very different. Some are closely tied to a university, located inside university buildings, with members holding university affiliations. Others sit at a greater remove, less linked to a university or not linked at all. Some have their own funding, investments or endowments or donations, while others are mostly funded by governments, or groups of governments. I’ve heard that the IAS gets about 10% of its budget from the government, while Perimeter gets its everyday operating expenses entirely from the Canadian government and uses donations for infrastructure and the like.

So ultimately, the IAS is weird because every organization like it is weird. There are a few templates, and systems, but by and large each independent research organization is different. Understanding one doesn’t necessarily help you understand another.

Jumpstarting Elliptic Bootstrapping

I was at a mini-conference this week, called Jumpstarting Elliptic Bootstrap Methods for Scattering Amplitudes.

I’ve done a lot of work with what we like to call “bootstrap” methods. Instead of doing a particle physics calculation in all its gory detail, we start with a plausible guess and impose requirements based on what we know. Eventually, we have the right answer pulled up “by its own bootstraps”: the only answer the calculation could have, without actually doing the calculation.
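To make that logic concrete, here’s a cartoon of a bootstrap at the level of “symbols”, with an invented two-letter alphabet and made-up constraints. Nothing here is a real amplitude (the “first-entry” condition is the only physics-inspired ingredient), just a minimal sketch of the strategy:

```python
# Toy symbol-level bootstrap. Two-letter "words" stand in for iterated
# integrals; we write the most general guess, then impose requirements
# until only one answer survives.
import sympy as sp
from itertools import product

letters = ['a', 'b']
words = [''.join(w) for w in product(letters, repeat=2)]  # aa, ab, ba, bb
coeffs = sp.symbols(f'c0:{len(words)}')
ansatz = dict(zip(words, coeffs))  # one unknown coefficient per word

eqs = []
# Requirement 1 (physics-inspired "first-entry" condition): branch cuts
# may only come from the letter 'a', so words starting with 'b' vanish.
for w in words:
    if w[0] == 'b':
        eqs.append(sp.Eq(ansatz[w], 0))
# Requirement 2 (a made-up symmetry): 'aa' and 'ab' enter with equal weight.
eqs.append(sp.Eq(ansatz['aa'], ansatz['ab']))
# Requirement 3 (normalization): fix the overall scale.
eqs.append(sp.Eq(ansatz['aa'], 1))

# With enough requirements, the guess is "pulled up by its own bootstraps".
sol = sp.solve(eqs, list(coeffs), dict=True)[0]
print({w: sol[c] for w, c in zip(words, coeffs)})
# -> {'aa': 1, 'ab': 1, 'ba': 0, 'bb': 0}
```

The real thing works the same way, just with vastly longer words, thousands of unknowns, and constraints coming from symmetries, limits, and the structure of branch cuts.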

This method works very well, but so far it’s only been applied to certain kinds of calculations, involving mathematical functions called polylogarithms. More complicated calculations involve a mathematical object called an elliptic curve, and until very recently it wasn’t clear how to bootstrap them. To get people thinking about it, my colleagues Hjalte Frellesvig and Andrew McLeod asked the Carlsberg Foundation (yes, that Carlsberg) to fund a mini-conference. The idea was to get elliptic people and bootstrap people together (along with Hjalte’s tribe, intersection theory people) to hash things out. “Jumpstart people” are not a thing in physics, so despite the title they were not invited.

Anyone remember these games? Did you know that they still exist, have an educational MMO, and bought Neopets?

Having the conference so soon after the yearly Elliptics meeting had some strange consequences. There was only one actual duplicate talk, but the first day of talks all felt like they would have been welcome additions to the earlier conference. Some might be functioning as “overflow”: Elliptics this year focused on discussion and so didn’t have many slots for talks, while this conference despite its discussion-focused goal had a more packed schedule. In other cases, people might have been persuaded by the more relaxed atmosphere and lack of recording or posted slides to give more speculative talks. Oliver Schlotterer’s talk was likely in this category, a discussion of the genus-two functions one step beyond elliptics that I think people at the previous conference would have found very exciting, but which involved work in progress that I could understand him being cautious about presenting.

The other days focused more on the bootstrap side, with progress on some surprising but not-quite-yet elliptic avenues. It was great to hear that Mark Spradlin is making new progress on his Ziggurat story, to hear James Drummond suggest a picture for cluster algebras that could generalize to other theories, and to get some idea of the mysterious ongoing story that animates my colleague Cristian Vergu.

There was one thing the organizers couldn’t have anticipated that ended up throwing the conference into a new light. The goal of the conference was to get people started bootstrapping elliptic functions, but in the meantime people have gotten started on their own. Roger Morales Espasa presented his work on this with several of my other colleagues. They can already reproduce a known result, the ten-particle elliptic double-box, and are well on track to deriving something genuinely new, the twelve-particle version. It’s exciting, but it definitely makes the rest of us look around and take stock. Hopefully for the better!

Cabinet of Curiosities: The Nested Toy

I had a paper two weeks ago with a Master’s student, Alex Chaparro Pozo. I haven’t gotten a chance to talk about it yet, so I thought I should say a few words this week. It’s another entry in what I’ve been calling my cabinet of curiosities, interesting mathematical “objects” I’m sharing with the world.

I calculate scattering amplitudes, formulas that give the probability that particles scatter off each other in particular ways. While in principle I could do this with any particle physics theory, I have a favorite: a “toy model” called N=4 super Yang-Mills. N=4 super Yang-Mills doesn’t describe reality, but it lets us figure out cool new calculation tricks, and these often end up useful in reality as well.

Many scattering amplitudes in N=4 super Yang-Mills involve a type of mathematical function called polylogarithms. These functions are especially easy to work with, but they aren’t the whole story. Once we start considering more complicated situations (what if two particles collide, and eight particles come out?) we need more complicated functions, called elliptic polylogarithms.
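(For the mathematically curious: the classical polylogarithms are

\[ \mathrm{Li}_n(z) = \sum_{k=1}^{\infty}\frac{z^k}{k^n}, \]

with \(\mathrm{Li}_1(z) = -\ln(1-z)\) the ordinary logarithm in disguise. The functions showing up in these amplitudes are iterated-integral generalizations of these.)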

A few years ago, some collaborators and I figured out how to calculate one of these elliptic scattering amplitudes. We didn’t do it as well as we’d like, though: the calculation was “half-done” in a sense. To do the other half, we needed new mathematical tools, tools that came out soon after. Once those tools were out, we started learning how to apply them, trying to “finish” the calculation we started.

The original calculation was pretty complicated. Two particles colliding, eight particles coming out, meant that in total we had to keep track of ten different particles. That gets messy fast. I’m pretty good at dealing with six particles, not ten. Luckily, it turned out there was a way to pretend there were six particles only: by “twisting” up the calculation, we found a toy model within the toy model: a six-particle version of the calculation. Much like the original was in a theory that doesn’t describe the real world, these six particles don’t describe six particles in that theory: they’re a kind of toy calculation within the toy model, doubly un-real.

Not quintuply-unreal though

With this nested toy model, I was confident we could do the calculation. I wasn’t confident I’d have time for it, though. This ended up making it perfect for a Master’s thesis, which is how Alex got into the game.

Alex worked his way through the calculation, programming and transforming, going from one type of mathematical function to another (at least once because I’d forgotten to tell him the right functions to use, oops!) There were more details and subtleties than expected, but in the end everything worked out.

Then, we were scooped.

Another group figured out how to do the full, ten-particle problem, not just the toy model. That group was just “down the hall”…or would have been “down the hall” if we had been going to the office (this was 2021, after all). I didn’t hear about what they were working on until it was too late to change plans.

Alex left the field (not, as far as I know, because of this). And for a while, because of that especially thorough scooping, I didn’t publish.

What changed my mind, in part, was seeing the field develop in the meantime. It turns out toy models, and even nested toy models, are quite useful. We still have a lot of uncertainty about what to do, how to use the new calculation methods and what they imply. And usually, the best way to get through that kind of uncertainty is with simple, well-behaved toy models.

So I thought, in the end, that this might be useful. Even if it’s a toy version of something that already exists, I expect it to be an educational toy, one we can learn a lot from. So I’ve put it out into the world, as part of this year’s cabinet of curiosities.

At Elliptic Integrals in Fundamental Physics in Mainz

I’m at a conference this week. It’s named Elliptic Integrals in Fundamental Physics, but I think of it as “Elliptics 2022”, the latest in a series of conferences on elliptic integrals in particle physics.

It’s in Mainz, which you can tell from the Gutenberg street art

Elliptics has been growing in recent years, hurtling into prominence as a subfield of amplitudes (which is already a subfield of theoretical physics). This has led to growing lists of participants and a more and more packed schedule.

This year walked all of that back a bit. There were three talks a day: two one-hour talks by senior researchers and one half-hour talk by a junior researcher. The rest, as well as the whole last day, was geared toward discussion. It’s an attempt to go back to the subfield’s roots. In the beginning, the Elliptics conferences drew together a small group to sort out a plan for the future, digging through the often-confusing mathematics to try to find a baseline for future progress. The field has advanced since then, but some of our questions are still almost as basic. What relations exist between different calculations? How much do we value fast numerics, versus analytical understanding? What methods do we want to preserve, and which aren’t serving us well? To answer these questions, it helps to get a few people together in one place, not to silently listen to lectures, but to question and discuss and hash things out. I may have heard a smaller range of topics at this year’s Elliptics, but due to the sheer depth we managed to probe on those fewer topics I feel like I’ve learned much more.

Since someone always asks, I should say that the talks were not recorded, but the slides are being posted online, so if you’re interested in the topic you can find them there. A few people discussed new developments, some just published and some yet to be published. I discussed the work I talked about last week, and got a lot of good feedback and ideas about how to move forward.

Cabinet of Curiosities: The Coaction

I had two more papers out this week, continuing my cabinet of curiosities. I’ll talk about one of them today, and the other in (probably) two weeks.

This week, I’m talking about a paper I wrote with an excellent Master’s student, Andreas Forum. Andreas came to me looking for a project on the mathematical side. I had a rather nice idea for his project at first, to explain a proof in an old math paper so it could be used by physicists.

Unfortunately, the proof I sent him off to explain didn’t actually exist. Fortunately, by the time we figured this out Andreas had learned quite a bit of math, so he was ready for his next project: a coaction for Calabi-Yau Feynman diagrams.

We chose to focus on one particular diagram, called a sunrise diagram for its resemblance to a sun rising over the sea:

This diagram

Feynman diagrams depict paths traveled by particles. The paths are a metaphor, or organizing tool, for more complicated calculations: computations of the chances that fundamental particles behave in different ways. Each diagram encodes a complicated integral. This one shows one particle splitting into many, then those many particles reuniting into one.

Do the integrals in Feynman diagrams, and you get a variety of different mathematical functions. Many of them integrate to functions called polylogarithms, and we’ve gotten really really good at working with them. We can integrate them up, simplify them, and sometimes we can guess them so well we don’t have to do the integrals at all! We can do all of that because we know how to break polylogarithm functions apart, with a mathematical operation called a coaction. The coaction chops polylogarithms up to simpler parts, parts that are easier to work with.
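To give a taste of what this looks like (in conventions quoted from memory, so treat signs and normalizations as a sketch), the coaction acts on the classical polylogarithm \(\mathrm{Li}_2(z)\) as

\[ \Delta\,\mathrm{Li}_2(z) = 1\otimes\mathrm{Li}_2(z) + \mathrm{Li}_2(z)\otimes 1 + \mathrm{Li}_1(z)\otimes\ln z, \]

breaking a weight-two function into pieces like \(\mathrm{Li}_1(z) = -\ln(1-z)\) and \(\ln z\), ordinary logarithms that are much easier to manipulate.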

More complicated Feynman diagrams give more complicated functions, though. Some of them give what are called elliptic functions. You can think of these functions as involving a geometrical shape, in this case a torus.

Other functions involve more complicated geometrical shapes, in some cases very complicated. For example, some involve the Calabi-Yau manifolds studied by string theorists. These sunrise diagrams are some of the simplest to involve such complicated geometry.

Other researchers had proposed a coaction for elliptic functions back in 2018. When they derived it, though, they left a recipe for something more general. Follow the instructions in the paper, and you could in principle find a coaction for other diagrams, even the Calabi-Yau ones, if you set it up right.

I had an idea for how to set it up right, and in the grand tradition of supervisors everywhere I got Andreas to do the dirty work of applying it. Despite the delay of our false start and despite the fact that this was probably in retrospect too big a project for a normal Master’s thesis, Andreas made it work!

Our result, though, is a bit weird. The coaction is a powerful tool for polylogarithms because it chops them up finely: keep chopping, and you get down to very simple functions. Our coaction isn’t quite so fine: we don’t chop our functions into as many parts, and the parts are more mysterious, more difficult to handle.

We think these are temporary problems though. The recipe we applied turns out to be a recipe with a lot of choices to make, less like Julia Child and more like one of those books where you mix and match recipes. We believe the community can play with the parameters of this recipe, finding new versions of the coaction for new uses.

This is one of the shiniest of the curiosities in my cabinet this year; I hope it gets put to good use.