Tag Archives: LIGO

Zoomplitudes Retrospective

During Zoomplitudes (my field’s big yearly conference, this year on Zoom) I didn’t have time to write a long blog post. I said a bit about the format, but didn’t get a chance to talk about the science. I figured this week I’d go back and give a few more of my impressions. As always, conference posts are a bit more technical than my usual posts, so regulars be warned!

The conference opened with a talk by Gavin Salam, there as an ambassador for LHC physics. Salam pointed out that, while a decent proportion of speakers at Amplitudes mention the LHC in their papers, that fraction has fallen over the years. (Another speaker jokingly wondered which of those mentions were just in the paper’s introduction.) He argued that there is still useful work for us, LHC measurements that will require serious amplitudes calculations to understand. He also brought up what seems like the most credible argument for a new, higher-energy collider: that there are important properties of the Higgs, in particular its interactions, that we still have not observed.

The next few talks hopefully warmed Salam’s heart, as they featured calculations for real-world particle physics. Nathaniel Craig and Yael Shadmi in particular covered the link between amplitudes and Standard Model Effective Field Theory (SMEFT), a method to systematically characterize corrections beyond the Standard Model. Shadmi’s talk struck me because the kind of work she described (building the SMEFT “amplitudes-style”, directly from observable information rather than more complicated proxies) is something I’d seen people speculate about for a while, but which hadn’t been done until quite recently. Now, several groups have managed it, and look like they’ve gotten essentially “all the way there”, rather than just partial results that only manage to replicate part of the SMEFT. Overall it’s much faster progress than I would have expected.

After Shadmi’s talk was a brace of talks on N=4 super Yang-Mills, featuring cosmic Galois theory and an impressively groan-worthy “origin story” joke. The final talk of the day, by Hofie Hannesdottir, covered work with some of my colleagues at the NBI. Due to coronavirus I hadn’t gotten to hear about this in person, so it was good to hear a talk on it, a blend of old methods and new priorities to better understand some old discoveries.

The next day focused on a topic that has grown in importance in our community, calculations for gravitational wave telescopes like LIGO. Several speakers focused on new methods for collisions of spinning objects, where a few different approaches are making good progress (Radu Roiban’s proposal to use higher-spin field theory was particularly interesting) but things still aren’t quite “production-ready”. The older, post-Newtonian method is still very much production-ready, as evidenced by Michele Levi’s talk that covered, among other topics, our recent collaboration. Julio Parra-Martinez discussed some interesting behavior shared by both supersymmetric and non-supersymmetric gravity theories. Thibault Damour had previously expressed doubts about the use of amplitudes methods to answer this kind of question, and part of Parra-Martinez’s aim was to confirm the calculation with methods Damour would consider more reliable. Damour (who was actually in the audience, which I suspect would not have happened at an in-person conference) had already recanted some related doubts, but it’s not clear to me whether that extended to the results Parra-Martinez discussed (or whether Damour has stated the problem with his old analysis).

There were a few talks that day that didn’t relate to gravitational waves, though this might have been an accident, since both speakers also work on that topic. Zvi Bern’s talk linked to the previous day’s SMEFT discussion, with a calculation using amplitudes methods of direct relevance to SMEFT researchers. Clifford Cheung’s talk proposed a rather strange/fun idea, conformal symmetry in negative dimensions!

Wednesday was “amplituhedron day”, with a variety of talks on positive geometries and cluster algebras. Featured in several talks was “tropicalization”, a mathematical procedure that can simplify complicated geometries while still preserving essential features. Here, it was used to trim down infinite “alphabets” conjectured for some calculations into a finite set, and in doing so understand the origin of “square root letters”. The day ended with a talk by Nima Arkani-Hamed, who, despite offering to bet that he could finish his talk within the half-hour slot, took almost twice that. The organizers seemed to have planned for this, since there was one fewer talk that day, so the day still ended at roughly the usual time.

We also took probably the most unique conference photo I will ever appear in.

For lack of a better name, I’ll call Thursday’s theme “celestial”. The day included talks by cosmologists (including approaches using amplitudes-ish methods from Daniel Baumann and Charlotte Sleight, and a curiously un-amplitudes-related talk from Daniel Green), talks on “celestial amplitudes” (amplitudes viewed from the surface of an infinitely distant sphere), and various talks with some link to string theory. I’m including in that last category intersection theory, which has really become its own thing. This included a talk by Simon Caron-Huot about using intersection theory more directly in understanding Feynman integrals, and a talk by Sebastian Mizera using intersection theory to investigate how gravity is Yang-Mills squared. Both gave me a much better idea of the speakers’ goals. In Mizera’s case he’s aiming for something very ambitious. He wants to use intersection theory to figure out when and how one can “double-copy” theories, and might figure out why the procedure “got stuck” at five loops. The day ended with a talk by Pedro Vieira, who gave an extremely lucid and well-presented “blackboard-style” talk on bootstrapping amplitudes.

Friday was a grab-bag of topics. Samuel Abreu discussed an interesting calculation using the numerical unitarity method. It was notable in part because renormalization played a bigger role than it does in most amplitudes work, and in part because they now have a cool logo for their group’s software, Caravel. Claude Duhr and Ruth Britto gave a two-part talk on their work on a Feynman integral coaction. I’d had doubts about the diagrammatic coaction they had worked on in the past because it felt a bit ad-hoc. Now, they’re using intersection theory, and have a clean story that seems to tie everything together. Andrew McLeod talked about our work on a Feynman diagram Calabi-Yau “bestiary”, while Cristian Vergu had a more rigorous understanding of our “traintrack” integrals.

There are two key elements of a conference that are tricky to do on Zoom. You can’t do a conference dinner, so you can’t do the traditional joke-filled conference dinner speech. The end of the conference is also tricky: traditionally, this is when everyone applauds the organizers and the secretaries are given flowers. As chair for the last session, Lance Dixon stepped up to fill both gaps, with a closing speech that was both a touching tribute to the hard work of organizing the conference and a hilarious pile of in-jokes, including a participation award to Arkani-Hamed for his (unprecedented, as far as I’m aware) perfect attendance.

4gravitons, Spinning Up

I had a new paper out last week, with Michèle Levi and Andrew McLeod. But to explain it, I’ll need to clarify something about our last paper.

Two weeks ago, I told you that Andrew and Michèle and I had written a paper, predicting what gravitational wave telescopes like LIGO see when black holes collide. You may remember that LIGO doesn’t just see colliding black holes: it sees colliding neutron stars too. So why didn’t we predict what happens when neutron stars collide?

Actually, we did. Our calculation doesn’t just apply to black holes. It applies to neutron stars too. And not just neutron stars: it applies to anything of roughly the right size and shape. Black holes, neutron stars, very large grapefruits…

LIGO’s next big discovery

That’s the magic of Effective Field Theory, the “zoom lens” of particle physics. Zoom out far enough, and any big, round object starts looking like a particle. Black holes, neutron stars, grapefruits, we can describe them all using the same math.

Ok, so we can describe both black holes and neutron stars. Can we tell the difference between them?

In our last calculation, no. In this one, yes!

Effective Field Theory isn’t just a zoom lens, it’s a controlled approximation. That means that when we “zoom out” we don’t just throw out anything “too small to see”. Instead, we approximate it, estimating how big of an effect it can have. Depending on how precise we want to be, we can include more and more of these approximated effects. If our estimates are good, we’ll include everything that matters, and get a good approximation for what we’re trying to observe.
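
As a toy illustration of what a controlled approximation looks like (this is just a geometric series in Python, not the actual post-Newtonian expansion from the paper), here is the sense in which "including more and more of these approximated effects" buys you precision: each extra term in the expansion shrinks the error by another power of the small ratio you zoomed out by.

```python
# Toy controlled approximation: expand 1/(r - a) in powers of the
# small ratio a/r, keeping more terms for more precision.
# (A geometric series, purely for illustration.)

def exact(r, a):
    return 1.0 / (r - a)

def approx(r, a, order):
    """Truncated expansion: (1/r) * sum of (a/r)**n for n = 0..order."""
    x = a / r
    return sum(x**n for n in range(order + 1)) / r

r, a = 100.0, 1.0  # "zoomed out": a/r = 0.01 is a small parameter
for order in range(4):
    err = abs(approx(r, a, order) - exact(r, a))
    print(f"order {order}: error ~ {err:.2e}")
```

Running this, the error drops by roughly a factor of a/r at each order, which is what lets you decide in advance how many terms you need for a given target precision.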

At the precision of our last calculation, a black hole and a neutron star still look exactly the same. Our new calculation aims for a bit higher precision, though. (For the experts: we’re at a higher order in spin.) The higher precision means that we can actually see the difference: our result changes for two colliding black holes versus two colliding grapefruits.

So does that mean I can tell you what happens when two neutron stars collide, according to our calculation? Actually, no. That’s not because we screwed up the calculation: it’s because some of the properties of neutron stars are unknown.

The Effective Field Theory of neutron stars has what we call “free parameters”, unknown variables. People have tried to estimate some of these (called “Love numbers” after the mathematician A. E. H. Love), but they depend on the details of how neutron stars work: what stuff they contain, how that stuff is shaped, and how it can move. To find them out, we probably can’t just calculate: we’ll have to measure, observe an actual neutron star collision and see what the numbers actually are.

That’s one of the purposes of gravitational wave telescopes. It’s not (as far as I know) something LIGO can measure. But future telescopes, with more precision, should be able to. By watching two colliding neutron stars and comparing to a high-precision calculation, physicists will better understand what those neutron stars are made of. In order to do that, they will need someone to do that high-precision calculation. And that’s why people like me are involved.

4gravitons Exchanges a Graviton

I had a new paper up last Friday with Michèle Levi and Andrew McLeod, on a topic I hadn’t worked on before: colliding black holes.

I am an “amplitudeologist”. I work on particle physics calculations, computing “scattering amplitudes” to find the probability that fundamental particles bounce off each other. This sounds like the farthest thing possible from black holes. Nevertheless, the two are tightly linked, through the magic of something called Effective Field Theory.

Effective Field Theory is a kind of “zoom knob” for particle physics. You “zoom out” to some chosen scale, and write down a theory that describes physics at that scale. Your theory won’t be a complete description: you’re ignoring everything that’s “too small to see”. It will, however, be an effective description: one that, at the scale you’re interested in, is effectively true.

Particle physicists usually use Effective Field Theory to go between different theories of particle physics, to zoom out from strings to quarks to protons and neutrons. But you can zoom out even further, all the way out to astronomical distances. Zoom out far enough, and even something as massive as a black hole looks like just another particle.

Just click the “zoom X10” button fifteen times, and you’re there!

In this picture, the force of gravity between black holes looks like particles (specifically, gravitons) going back and forth. With this picture, physicists can calculate what happens when two black holes collide with each other, making predictions that can be checked with new gravitational wave telescopes like LIGO.

Researchers have pushed this technique quite far. As the calculations get more and more precise (more and more “loops”), they have gotten more and more challenging. This is particularly true when the black holes are spinning, an extra wrinkle in the calculation that adds a surprising amount of complexity.

That’s where I came in. I can’t compete with the experts on black holes, but I certainly know a thing or two about complicated particle physics calculations. Amplitudeologists, like Andrew McLeod and me, have a grab-bag of tricks that make these kinds of calculations a lot easier. With Michèle Levi’s expertise working with spinning black holes in Effective Field Theory, we were able to combine our knowledge to push beyond the state of the art, to a new level of precision.

This project has been quite exciting for me, for a number of reasons. For one, it’s my first time working with gravitons: despite this blog’s name, I’d never published a paper on gravity before. For another, as my brother quipped when he heard about it, this is by far the most “applied” paper I’ve ever written. I mostly work with a theory called N=4 super Yang-Mills, a toy model we use to develop new techniques. This paper isn’t a toy model: the calculation we did should describe black holes out there in the sky, in the real world. There’s a decent chance someone will use this calculation to compare with actual data, from LIGO or a future telescope. That, in particular, is an absurdly exciting prospect.

Because this was such an applied calculation, it was an opportunity to explore the more applied part of my own field. We ended up using well-known techniques from that corner, but I look forward to doing something more inventive in future.

QCD Meets Gravity 2019

I’m at UCLA this week for QCD Meets Gravity, a conference about the surprising ways that gravity is “QCD squared”.

When I attended this conference two years ago, the community was branching out into a new direction: using tools from particle physics to understand the gravitational waves observed at LIGO.

At this year’s conference, gravitational waves have grown from a promising new direction to a large fraction of the talks. While there were still the usual talks about quantum field theory and string theory (everything from bootstrap methods to a surprising application of double field theory), gravitational waves have clearly become a major focus of this community.

This was highlighted before the first talk, when Zvi Bern brought up a recent paper by Thibault Damour. Bern and collaborators had recently used particle physics methods to push beyond the state of the art in gravitational wave calculations. Damour, an expert in the older methods, claimed that Bern et al’s result is wrong, and in doing so also questioned an earlier result by Amati, Ciafaloni, and Veneziano. More than that, Damour argued that the whole approach of using these kinds of particle physics tools for gravitational waves is misguided.

There was a lot of good-natured ribbing of Damour in the rest of the conference, as well as some serious attempts to confront his points. Damour’s argument so far is somewhat indirect, so there is hope that a more direct calculation (which Damour is currently pursuing) will resolve the matter. In the meantime, Julio Parra-Martinez described a reproduction of the older Amati/Ciafaloni/Veneziano result with more Damour-approved techniques, as well as additional indirect arguments that Bern et al got things right.

Before the QCD Meets Gravity community worked on gravitational waves, other groups had already built a strong track record in the area. One encouraging thing about this conference was how much the two communities are talking to each other. Several speakers came from the older community, and there were a lot of references in both groups’ talks to the other group’s work. This, more than even the content of the talks, felt like the strongest sign that something productive is happening here.

Many talks began by trying to motivate these gravitational calculations, usually to address the mysteries of astrophysics. Two talks were more direct, with Ramy Brustein and Pierre Vanhove speculating about new fundamental physics that could be uncovered by these calculations. I’m not the kind of physicist who does this kind of speculation, and I confess both talks struck me as rather strange. Vanhove in particular explicitly rejects the popular criterion of “naturalness”, making me wonder if his work is the kind of thing critics of naturalness have in mind.

Knowing When to Hold/Fold ‘Em in Science

The things one learns from Wikipedia. For example, today I learned that the country song “The Gambler” was selected for preservation by the US Library of Congress as being “culturally, historically, or artistically significant.”

You’ve got to know when to hold ’em, know when to fold ’em,

Know when to walk away, know when to run.

Knowing when to “hold ’em” or “fold ’em” is important in life in general, but it’s particularly important in science.

And not just on poker night

As scientists, we’re often trying to do something no-one else has done before. That’s exciting, but it’s risky too: sometimes whatever we’re trying simply doesn’t work. In those situations, it’s important to recognize when we aren’t making progress, and change tactics. The trick is, we can’t give up too early either: science is genuinely hard, and sometimes when we feel stuck we’re actually close to the finish line. Knowing which is which, when to “hold” and when to “fold”, is an essential skill, and a hard one to learn.

Sometimes, we can figure this out mathematically. Computational complexity theory classifies calculations by how difficult they are, including how long they take. If you can estimate how much time a calculation should take, you can decide whether you’ll finish it in a reasonable amount of time. If you just want a rough guess, you can do a simpler version of the calculation, see how long that takes, and then estimate how much longer the full one will take. If you figure out you’re doomed, then it’s time to switch to a more efficient algorithm, or to a different question entirely.
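
As a sketch of that "rough guess" strategy (the function names and the growth law here are hypothetical, chosen just to show the idea): time a small version of the calculation, then scale up by whatever growth law you expect the cost to follow.

```python
import time

def estimate_runtime(task, small_n, full_n, growth):
    """Time `task` at a small size, then extrapolate to the full size
    assuming the cost follows the given growth law.
    A rough planning tool, not a rigorous complexity bound."""
    start = time.perf_counter()
    task(small_n)
    elapsed = time.perf_counter() - start
    return elapsed * growth(full_n) / growth(small_n)

# Hypothetical stand-in for a calculation whose cost doubles
# at each step (say, each extra loop order).
def toy_calculation(n):
    return sum(range(2**n))

# If the toy takes t seconds at n=15, expect roughly 2**10 * t at n=25.
est = estimate_runtime(toy_calculation, small_n=15, full_n=25,
                       growth=lambda n: 2**n)
print(f"estimated full runtime: {est:.3f} seconds")
```

If the estimate comes out in years rather than days, that is your cue to fold and look for a better algorithm.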

Sometimes, we don’t just have to consider time, but money as well. If you’re doing an experiment, you have to estimate how much the equipment will cost, and how much it will cost to run it. Experimenters get pretty good at estimating these things, but they still screw up sometimes and run over budget. Occasionally this is fine: LIGO didn’t detect anything in its first eight-year run, but they upgraded the machines and tried again, and won a Nobel prize. Other times it’s a disaster, and money keeps being funneled into a project that never works. Telling the difference is crucial, and it’s something we as a community are still not so good at.

Sometimes we just have to follow our instincts. This is dangerous, because we have a bias (the “sunk cost fallacy”) to stick with something if we’ve already spent a lot of time or money on it. To counteract that, it’s good to cultivate a bias in the opposite direction, which you might call “scientific impatience”. Getting frustrated with slow progress may not seem productive, but it keeps you motivated to search for a better way. Experienced scientists get used to how long certain types of project take. Too little progress, and they look for another option. This can fail, killing a project that was going to succeed, but it can also prevent over-investment in a failing idea. Only a mix of instincts keeps the field moving.

In the end, science is a gamble. Like the song, we have to know when to hold ’em and fold ’em, when to walk away, and when to run an idea as far as it will go. Sometimes it works, and sometimes it doesn’t. That’s science.

Breakthrough Prize for Supergravity

This week, $3 million was awarded by the Breakthrough Prize to Sergio Ferrara, Daniel Z. Freedman and Peter van Nieuwenhuizen, the discoverers of the theory of supergravity, part of a special award separate from their yearly Fundamental Physics Prize. There’s a nice interview with Peter van Nieuwenhuizen on the Stony Brook University website about his reaction to the award.

The Breakthrough Prize was designed to complement the Nobel Prize, rewarding deserving researchers who wouldn’t otherwise get the Nobel. The Nobel Prize is only awarded to theoretical physicists when they predict something that is later observed in an experiment. Many theorists are instead renowned for their mathematical inventions, concepts that other theorists build on and use but that do not by themselves make testable predictions. The Breakthrough Prize celebrates these theorists, and while it has also been awarded to others who the Nobel committee could not or did not recognize (various large experimental collaborations, Jocelyn Bell Burnell), this has always been the physics prize’s primary focus.

The Breakthrough Prize website describes supergravity as a theory that combines gravity with particle physics. That’s a bit misleading: while the theory does treat gravity in a “particle physics” way, unlike string theory it doesn’t solve the famous problems with combining quantum mechanics and gravity. (At least, as far as we know.)

It’s better to say that supergravity is a theory that links gravity to other parts of particle physics, via supersymmetry. Supersymmetry is a relationship between two types of particles: bosons, like photons, gravitons, or the Higgs, and fermions, like electrons or quarks. In supersymmetry, each type of boson has a fermion “partner”, and vice versa. In supergravity, gravity itself gets a partner, called the gravitino. Supersymmetry links the properties of particles and their partners together: both must have the same mass and the same charge. In a sense, it can unify different types of particles, explaining both under the same set of rules.

In the real world, we don’t see bosons and fermions with the same mass and charge. If gravitinos exist, then supersymmetry would have to be “broken”, giving them a high mass that makes them hard to find. Some hoped that the Large Hadron Collider could find these particles, but now it looks like it won’t, so there is no evidence for supergravity at the moment.

Instead, supergravity’s success has been as a tool to understand other theories of gravity. When the theory was proposed in the 1970s, it was thought of as a rival to string theory. Instead, over the years it consistently managed to point out aspects of string theory that the string theorists themselves had missed, for example noticing that the theory needed not just strings but higher-dimensional objects called “branes”. Now, supergravity is understood as one part of a broader string theory picture.

In my corner of physics, we try to find shortcuts for complicated calculations. We benefit a lot from toy models: simpler, unrealistic theories that let us test our ideas before applying them to the real world. Supergravity is one of the best toy models we’ve got, a theory that makes gravity simple enough that we can start to make progress. Right now, colleagues of mine are developing new techniques for calculations at LIGO, the gravitational wave telescope. If they hadn’t worked with supergravity first, they would never have discovered these techniques.

The discovery of supergravity by Ferrara, Freedman, and van Nieuwenhuizen is exactly the kind of work the Breakthrough Prize was created to reward. Supergravity is a theory with deep mathematics, rich structure, and wide applicability. There is of course no guarantee that such a theory describes the real world. What is guaranteed, though, is that someone will find it useful.

Amplitudes 2019

It’s that time of year again, and I’m at Amplitudes, my field’s big yearly conference. This year we’re in Dublin, hosted by Trinity.

Which also hosts the Book of Kells, and the occasional conference reception just down the hall from the Book of Kells

Increasingly, the organizers of Amplitudes have been setting aside a few slots for talks from people in other fields. This year the “closest” such speaker was Kirill Melnikov, who pointed out some of the hurdles that make it difficult to produce useful calculations to compare with LHC data. Many of these hurdles aren’t things that amplitudes-people have traditionally worked on, but are still things that might benefit from our particular expertise. Another such speaker, Maxwell Hansen, is from a field called Lattice QCD. While amplitudeologists typically compute with approximations, order by order in more and more complicated diagrams, Lattice QCD instead simulates particle physics on supercomputers, chopping up their calculations on a grid. This allows them to study much stronger forces, including the messy interactions of quarks inside protons, but they have a harder time with the situations we’re best at, where two particles collide from far away. Apparently, though, they are making progress on that kind of calculation, with some clever tricks to connect it to calculations they know how to do. While I was a bit worried that this would let them fire all the amplitudeologists and replace us with supercomputers, they’re not quite there yet; nonetheless, they’re doing better than I would have expected. Other speakers from other fields included Leron Borsten, who has been applying the amplitudes concept of the “double copy” to M theory, and Andrew Tolley, who uses the kind of “positivity” properties that amplitudeologists find interesting to restrict the kinds of theories used in cosmology.

The biggest set of “non-traditional-amplitudes” talks focused on using amplitudes techniques to calculate the behavior not of particles but of black holes, to predict the gravitational wave patterns detected by LIGO. This year featured a record six talks on the topic, a sixth of the conference. Last year I commented that the research ideas from amplitudeologists on gravitational waves had gotten more robust, with clearer proposals for how to move forward. This year things have developed even further, with several initial results. Even more encouragingly, while there are several groups doing different things they appear to be genuinely listening to each other: there were plenty of references in the talks both to other amplitudes groups and to work by more traditional gravitational physicists. There’s definitely still plenty of lingering confusion that needs to be cleared up, but it looks like the community is robust enough to work through it.

I’m still busy with the conference, but I’ll say more when I’m back next week. Stay tuned for square roots, clusters, and Nima’s travel schedule. And if you’re a regular reader, please fill out last week’s poll if you haven’t already!

Amplitudes in String and Field Theory at NBI

There’s a conference at the Niels Bohr Institute this week, on Amplitudes in String and Field Theory. Like the conference a few weeks back, this one was funded by the Simons Foundation, as part of Michael Green’s visit here.

The first day featured a two-part talk by Michael Green and Congkao Wen. They are looking at the corrections that string theory adds on top of theories of supergravity. These corrections are difficult to calculate directly from string theory, but one can figure out a lot about them from the kinds of symmetry and duality properties they need to have, using the mathematics of modular forms. While Michael’s talk introduced the topic with a discussion of older work, Congkao talked about their recent progress looking at this from an amplitudes perspective.

Francesca Ferrari’s talk on Tuesday also related to modular forms, while Oliver Schlotterer and Pierre Vanhove talked about a different corner of mathematics, single-valued polylogarithms. These single-valued polylogarithms are of interest to string theorists because they seem to connect two parts of string theory: the open strings that describe Yang-Mills forces and the closed strings that describe gravity. In particular, it looks like you can take a calculation in open string theory and just replace numbers and polylogarithms with their “single-valued counterparts” to get the same calculation in closed string theory. Interestingly, there is more than one way that mathematicians can define “single-valued counterparts”, but only one such definition, the one due to Francis Brown, seems to make this trick work. When I asked Pierre about this he quipped it was because “Francis Brown has good taste…either that, or String Theory has good taste.”

Wednesday saw several talks exploring interesting features of string theory. Nathan Berkovits discussed his new paper, which makes a certain context of AdS/CFT (a duality between string theory in certain curved spaces and field theory on the boundary of those spaces) manifest particularly nicely. By writing string theory in five-dimensional AdS space in the right way, he can show that if the AdS space is small it will generate the same Feynman diagrams that one would use to do calculations in N=4 super Yang-Mills. In the afternoon, Sameer Murthy showed how localization techniques can be used in gravity theories, including to calculate the entropy of black holes in string theory, while Yvonne Geyer talked about how to combine the string theory-like CHY method for calculating amplitudes with supersymmetry, especially in higher dimensions where the relevant mathematics gets tricky.

Thursday ended up focused on field theory. Carlos Mafra was originally going to speak but he wasn’t feeling well, so instead I gave a talk about the “tardigrade” integrals I’ve been looking at. Zvi Bern talked about his work applying amplitudes techniques to make predictions for LIGO. This subject has advanced a lot in the last few years, and now Zvi and collaborators have finally done a calculation beyond what others had been able to do with older methods. They still have a way to go before they beat the traditional methods overall, but they’re off to a great start. Lance Dixon talked about two-loop five-particle non-planar amplitudes in N=4 super Yang-Mills and N=8 supergravity. These are quite a bit trickier than the planar amplitudes I’ve worked on with him in the past, in particular it’s not yet possible to do this just by guessing the answer without considering Feynman diagrams.

Today was the last day of the conference, and the emphasis was on number theory. David Broadhurst described some interesting contributions from physics to mathematics, in particular emphasizing information that the Weierstrass formulation of elliptic curves omits. Eric D’Hoker discussed how the concept of transcendentality, previously used in field theory, could be applied to string theory. A few of his speculations seemed a bit farfetched (in particular, his setup needs to treat certain rational numbers as if they were transcendental), but after his talk I’m a bit more optimistic that there could be something useful there.

Amplitudes 2018

This week, I’m at Amplitudes, my field’s big yearly conference. The conference is at SLAC National Accelerator Laboratory this year, a familiar and lovely place.

Welcome to the Guest House California

It’s been a packed conference, with a lot of interesting talks. Recording and slides of most of them should be up at this point, for those following at home. I’ll comment on a few that caught my attention; I might do a more in-depth post later.

The first morning was dedicated to gravitational waves. At the QCD Meets Gravity conference last December I noted that amplitudes folks were very eager to do something relevant to LIGO, but that it was still a bit unclear how we could contribute (aside from Pierpaolo Mastrolia, who had already figured it out). The following six months appear to have cleared things up considerably, and Clifford Cheung and Donal O’Connell’s talks laid out quite concrete directions for this kind of research.

I’d seen Erik Panzer talk about the Hepp bound two weeks ago at Les Houches, but that was for a much more mathematically-inclined audience. It’s been interesting seeing people here start to see the implications: a simple method to classify and estimate (within 1%!) Feynman integrals could be a real game-changer.

Brenda Penante’s talk made me rethink a slogan I like to quote, that N=4 super Yang-Mills is the “most transcendental” part of QCD. While this is true in some cases, in many ways it’s actually least true for amplitudes, with quite a few counterexamples. For other quantities (like the form factors that were the subject of her talk) it’s true more often, and it’s still unclear when we should expect it to hold, or why.
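To unpack that slogan a little (this is my own schematic gloss, not anything from Penante’s talk): each of the constants showing up in these calculations is assigned a “transcendental weight”, roughly

\[
w(\pi) = 1, \qquad w(\zeta_n) = n, \qquad w(x\,y) = w(x) + w(y),
\]

and an L-loop result is expected to contain terms of weight at most 2L. In N=4 super Yang-Mills the known results saturate that bound uniformly (every term has the maximal weight), while QCD results also contain lower-weight terms. The “most transcendental” slogan is the conjecture that the maximal-weight piece of a QCD quantity matches the corresponding N=4 quantity, which is what holds for some quantities and fails for others.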

Nima Arkani-Hamed has a reputation for talks that run much longer than scheduled. Lately, it seems to be due to the sheer number of projects he’s working on. He had to rush at the end of his talk, and the part he skipped would have been about cosmological polytopes. I’ll have to ask his collaborator Paolo Benincasa for an update when I get back to Copenhagen.

Tuesday afternoon was a series of talks on the “NNLO frontier”, two-loop calculations that form the state of the art for realistic collider physics predictions. These talks brought home to me that the LHC really does need two-loop precision, and that the methods to get it are still pretty cumbersome. For those of us off in the airy land of six-loop N=4 super Yang-Mills, this is the challenge: can we make what these people do simpler?
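For readers outside the field, a rough sketch of what the acronym means (my gloss, not anything specific from the talks): a collider observable gets expanded in powers of the strong coupling \(\alpha_s\),

\[
\sigma \;\approx\; \sigma_{\text{LO}} \left( 1 + c_1\, \alpha_s + c_2\, \alpha_s^2 + \cdots \right),
\]

and “NNLO” means keeping through the \(\alpha_s^2\) term, whose computation requires two-loop amplitudes. Since \(\alpha_s\) isn’t all that small at LHC energies, that second correction can matter at the precision the experiments now reach.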

Wednesday cleared up a few things for me, from what kinds of things you can write down in “fishnet theory” to how broad Ashoke Sen’s soft theorem is, to how fast John Joseph Carrasco could show his villanelle slide. It also gave me a clearer idea of just what simplifications are available for pushing to higher loops in supergravity.

Wednesday was also the poster session. It keeps being amazing how fast the field is growing; the sheer number of new faces was quite inspiring. One of those new faces pointed me to a paper I had missed, suggesting that elliptic integrals could end up trickier than most of us had thought.

Thursday featured two talks by people who work on the Conformal Bootstrap, one of our subfield’s closest relatives. (We’re both “bootstrappers” in some sense.) The talks were interesting, but there wasn’t a lot of engagement from the audience, so if the intent was to make a bridge between the subfields I’m not sure it panned out. Overall, I think we’re mostly just united by how we feel about Simon Caron-Huot, who David Simmons-Duffin described as “awesome and mysterious”. We also had an update on attempts to extend the Pentagon OPE to ABJM, a three-dimensional analogue of N=4 super Yang-Mills.

I’m looking forward to Friday’s talks, promising elliptic functions among other interesting problems.

At the GGI Lectures on the Theory of Fundamental Interactions

I’m at the Galileo Galilei Institute for Theoretical Physics in Florence at their winter school, the GGI Lectures on the Theory of Fundamental Interactions. Next week I’ll be helping Lance Dixon teach Amplitudeology; this week, I’m catching the tail end of Ira Rothstein’s lectures.


The Galileo Galilei Institute, at the end of a long, winding road filled with small, speedy cars and motorcycles, in classic Italian fashion

Rothstein has been heavily involved in doing gravitational wave calculations using tools from quantum field theory, something that has recently captured a lot of interest from amplitudes people. Specifically, he uses Effective Field Theory: theories that are “effectively” true at some scale but hide away higher-energy physics. In the case of gravitational waves, these theories are a powerful way to calculate the waves that LIGO and Virgo can observe without using the full machinery of general relativity.
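Schematically (my own summary, not Rothstein’s notation), an effective field theory trades the full high-energy physics for a tower of operators built from the light degrees of freedom, suppressed by powers of a cutoff scale \(\Lambda\):

\[
\mathcal{L}_{\text{eff}} \;=\; \mathcal{L}_{\text{light}} \;+\; \sum_i \frac{c_i(\Lambda)}{\Lambda^{\,d_i - 4}}\, \mathcal{O}_i,
\]

where each \(\mathcal{O}_i\) has mass dimension \(d_i\) and the coefficients \(c_i\) are fixed by “matching” to the full theory. In the gravitational-wave application, the “heavy” physics being hidden away is the short-distance structure of the compact objects themselves, and the expansion is organized in the small orbital velocity rather than a high energy scale.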

After seeing Rothstein’s lectures, I’m reminded of something he pointed out at the QCD Meets Gravity conference in December. He emphasized then that even if amplitudes people get very good at drawing diagrams for classical general relativity, that won’t be the whole story: there’s a series of corrections needed to “match” between the quantities LIGO is able to see and the ones we’re able to calculate. Different methods incorporate these corrections in different ways, and the most intuitive approach for us amplitudes folks may still end up cumbersome once all the corrections are included. In typical amplitudes fashion, this just makes me wonder if there’s a shortcut: some way to compute, not just a piece that gets plugged in to an Effective Field Theory story, but the waves LIGO sees in one fell swoop (or at least, the part where gravity is weak enough that our methods are still useful). That’s probably a bit naive of me, though.