Tag Archives: amplitudes

Calculating the Hard Way, for Science!

I had a new paper out last week, with Jacob Bourjaily and Matthias Volk. We’re calculating the probability that particles bounce off each other in our favorite toy model, N=4 super Yang-Mills. And this time, we’re doing it the hard way.

The “easy way” we didn’t take is one I have a lot of experience with. Almost as long as I’ve been writing this blog, I’ve been calculating these particle probabilities by “guesswork”: starting with a plausible answer, then honing it down until I can be confident it’s right. This might sound reckless, but it works remarkably well, letting us calculate things we could never have hoped for with other methods. The catch is that “guessing” is much easier when we know what we’re looking for: in particular, it works much better in toy models than in the real world.

Over the last few years, though, I’ve been using a much more “normal” method, one that so far has a better track record in the real world. This method, too, works better than you would expect, and we’ve managed some quite complicated calculations.

So we have an “easy way”, and a “hard way”. Which one is better? Is the hard way actually harder?

To test that, you need to do the same calculation both ways, and see which is easier. You want it to be a fair test: if “guessing” only works in the toy model, then you should do the “hard” version in the toy model as well. And you don’t want to give “guessing” any unfair advantages. In particular, the “guess” method works best when we know a lot about the result we’re looking for: what it’s made of, what symmetries it has. In order to do a fair test, we must use that knowledge to its fullest to improve the “hard way” as well.

We picked an example in the middle: not too easy, and not too hard, a calculation that was done a few years back “the easy way” but not yet done “the hard way”. We plugged in all the modern tricks we could, trying to use as much of what we knew as possible. We trained a grad student: Matthias Volk, who did the lion’s share of the calculation and learned a lot in the process. We worked through the calculation, and did it properly the hard way.

Which method won?

In the end, the hard way was indeed harder…but not by that much! Most of the calculation went quite smoothly, with only a few difficulties at the end. Just five years ago, when the calculation was done “the easy way”, I doubt anyone would have expected the hard way to be viable. But with modern tricks it wasn’t actually that hard.

This is encouraging. It tells us that the “hard way” has potential, that it’s almost good enough to compete at this kind of calculation. It tells us that the “easy way” is still quite powerful. And it reminds us that the more we know, and the more we apply our knowledge, the more we can do.

QCD Meets Gravity 2019

I’m at UCLA this week for QCD Meets Gravity, a conference about the surprising ways that gravity is “QCD squared”.

When I attended this conference two years ago, the community was branching out into a new direction: using tools from particle physics to understand the gravitational waves observed at LIGO.

At this year’s conference, gravitational waves have grown from a promising new direction to a large fraction of the talks. While there were still the usual talks about quantum field theory and string theory (everything from bootstrap methods to a surprising application of double field theory), gravitational waves have clearly become a major focus of this community.

This was highlighted before the first talk, when Zvi Bern brought up a recent paper by Thibault Damour. Bern and collaborators had recently used particle physics methods to push beyond the state of the art in gravitational wave calculations. Damour, an expert in the older methods, claims that Bern et al’s result is wrong, and in doing so also questions an earlier result by Amati, Ciafaloni, and Veneziano. More than that, Damour argued that the whole approach of using these kinds of particle physics tools for gravitational waves is misguided.

There was a lot of good-natured ribbing of Damour in the rest of the conference, as well as some serious attempts to confront his points. Damour’s argument so far is somewhat indirect, so there is hope that a more direct calculation (which Damour is currently pursuing) will resolve the matter. In the meantime, Julio Parra-Martinez described a reproduction of the older Amati/Ciafaloni/Veneziano result with more Damour-approved techniques, as well as additional indirect arguments that Bern et al got things right.

Before the QCD Meets Gravity community worked on gravitational waves, other groups had already built a strong track record in the area. One encouraging thing about this conference was how much the two communities are talking to each other. Several speakers came from the older community, and there were a lot of references in both groups’ talks to the other group’s work. This, more than even the content of the talks, felt like the strongest sign that something productive is happening here.

Many talks began by trying to motivate these gravitational calculations, usually to address the mysteries of astrophysics. Two talks were more direct, with Ramy Brustein and Pierre Vanhove speculating about new fundamental physics that could be uncovered by these calculations. I’m not the kind of physicist who does this kind of speculation, and I confess both talks struck me as rather strange. Vanhove in particular explicitly rejects the popular criterion of “naturalness”, making me wonder if his work is the kind of thing critics of naturalness have in mind.

Rooting out the Answer

I have a new paper out today, with Jacob Bourjaily, Andrew McLeod, Matthias Wilhelm, Cristian Vergu and Matthias Volk.

There’s a story I’ve told before on this blog, about a kind of “alphabet” for particle physics predictions. When we try to make a prediction in particle physics, we need to do complicated integrals. Sometimes, these integrals simplify dramatically, in unexpected ways. It turns out we can understand these simplifications by writing the integrals in a sort of “alphabet”, breaking complicated mathematical “periods” into familiar logarithms. If we want to simplify an integral, we can use relations between logarithms like these:

\log(a b)=\log(a)+\log(b),\quad \log(a^n)=n\log(a)

to factor our “alphabet” into pieces as simple as possible.
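
To see what I mean by “factoring”, here’s a quick illustration in SymPy. This is my own toy example, not code or variables from the paper:

```python
# A toy illustration of "factoring the alphabet" with the log identities
# above. The letters a and b are hypothetical stand-ins, not the actual
# variables from the paper.
from sympy import symbols, log, expand_log

a, b = symbols('a b', positive=True)  # positivity makes the identities safe to apply

print(expand_log(log(a * b)))     # log(a) + log(b)
print(expand_log(log(a**3)))      # 3*log(a)
print(expand_log(log(a**2 * b)))  # 2*log(a) + log(b)
```

The real calculations involve vastly more complicated letters, but the principle is the same: break every logarithm’s argument into the simplest multiplicative pieces you can.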

The simpler the alphabet, the more progress you can make. And in the nice toy model theory we’re working with, the alphabets so far have been simple in one key way: expressed in the right variables, they’re rational. In particular, they contain no square roots.

Would that keep going? Would we keep finding rational alphabets? Or might the alphabets, instead, have square roots?

After some searching, we found a clean test case. There was a calculation we could do with just two Feynman diagrams. All we had to do was subtract one from the other. If the difference still had square roots in its alphabet, we’d have proven that the nice, rational alphabets eventually had to stop.

So we calculated these diagrams, doing the complicated integrals. And we found they did indeed have square roots in their alphabet, in fact many more than expected. They even had square roots of square roots!

You’d think that would be the end of the story. But square roots are trickier than you’d expect.

Remember that to simplify these integrals, we break them up into an alphabet, and factor the alphabet. What happens when we try to do that with an alphabet that has square roots?

Suppose we have letters in our alphabet with \sqrt{-5}. Suppose another letter is the number 9. You might want to factor it like this:

9=3\times 3

Simple, right? But what if instead you did this:

9=(2+\sqrt{-5})\times(2-\sqrt{-5})

Once you allow \sqrt{-5} in the game, you can factor 9 in two different ways. The central assumption, that you can always just factor your alphabet, breaks down. In mathematical terms, you no longer have a unique factorization domain.
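
If you want to check this yourself, it only takes a few lines of code. This is a sketch of the standard textbook \mathbb{Z}[\sqrt{-5}] example above, not code from our paper:

```python
# The standard Z[sqrt(-5)] example sketched above (textbook material,
# not code from the paper). Numbers here are a + b*sqrt(-5) with
# integers a, b; the norm N(a + b*sqrt(-5)) = a^2 + 5*b^2 is
# multiplicative, so any factor's norm must divide the product's norm.
from sympy import sqrt, expand

r = sqrt(-5)  # SymPy stores this as sqrt(5)*I

# Both factorizations really do give 9:
print(expand(3 * 3))              # 9
print(expand((2 + r) * (2 - r)))  # 9

# 3, 2 + sqrt(-5), and 2 - sqrt(-5) all have norm 9, so a proper
# factor of any of them would need norm 3: a^2 + 5*b^2 == 3.
solutions = [(a, b) for a in range(-3, 4) for b in range(-1, 2)
             if a**2 + 5 * b**2 == 3]
print(solutions)  # []  -- no such element, so all three factors are
                  # irreducible, and 9 factors in two genuinely different ways
```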

Instead, we had to get a lot more mathematically sophisticated, factoring into something called prime ideals. We got that working and started crunching through the square roots in our alphabet. Things simplified beautifully: we started with a result that was ten million terms long, and reduced it to just five thousand. And at the end of the day, after subtracting one integral from the other…

We found no square roots!

After all of our simplifications, all the letters we found were rational. Our nice test case turned out much, much simpler than we expected.

It’s been a long road on this calculation, with a lot of false starts. We were kind of hoping to be the first to find square root letters in these alphabets; instead it looks like another group will beat us to the punch. But we developed a lot of interesting tricks along the way, and we thought it would be good to publish our “null result”. As always in our field, sometimes surprising simplifications are just around the corner.

Calabi-Yaus in Feynman Diagrams: Harder and Easier Than Expected

I’ve got a new paper up, about the weird geometrical spaces we keep finding in Feynman diagrams.

With Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, and most recently Cristian Vergu and Matthias Volk, I’ve been digging up odd mathematics in particle physics calculations. In several calculations, we’ve found that we need a type of space called a Calabi-Yau manifold. These spaces are often studied by string theorists, who hope they represent how “extra” dimensions of space are curled up. String theorists have found an absurdly large number of Calabi-Yau manifolds, so many that some are trying to sift through them with machine learning. We wanted to know if our situation was quite that ridiculous: how many Calabi-Yaus do we really need?

So we started asking around, trying to figure out how to classify our catch of Calabi-Yaus. And mostly, we just got confused.

It turns out there are a lot of different tools out there for understanding Calabi-Yaus, and most of them aren’t all that useful for what we’re doing. We went in circles for a while trying to understand how to desingularize toric varieties, and other things that will sound like gibberish to most of you. In the end, though, we noticed one small thing that made our lives a whole lot simpler.

It turns out that all of the Calabi-Yaus we’ve found are, in some sense, the same. While the details of the physics vary, the overall “space” is the same in each case. It’s the space we kept finding in our “Calabi-Yau bestiary”, but it turns out one of the “traintrack” diagrams we found earlier can be written in the same way. We found another example too, a “wheel” that seems to be the same type of Calabi-Yau.

We also found many examples that we don’t understand. Add another rung to our “traintrack” and we suddenly can’t write it in the same space. (Personally, I’m quite confused about this one.) Add another spoke to our wheel and we confuse ourselves in a different way.

So while our calculation turned out simpler than expected, we don’t think this is the full story. Our Calabi-Yaus might live in “the same space”, but there are also physics-related differences between them, and these we still don’t understand.

At some point, our abstract included the phrase “this paper raises more questions than it answers”. It doesn’t say that now, but it’s still true. We wrote this paper because, after getting very confused, we ended up able to say a few new things that hadn’t been said before. But the questions we raise are if anything more important. We want to inspire new interest in this field, toss out new examples, and get people thinking harder about the geometry of Feynman integrals.

Congratulations to Simon Caron-Huot and Pedro Vieira for the New Horizons Prize!

The 2020 Breakthrough Prizes were announced last week, awards in physics, mathematics, and life sciences. The physics prize was awarded to the Event Horizon Telescope, with the $3 million award to be split among the 347 members of the collaboration. The Breakthrough Prize Foundation also announced this year’s New Horizons prizes, six smaller awards of $100,000 each to younger researchers in physics and math. One of those awards went to two people I know, Simon Caron-Huot and Pedro Vieira. Extremely specialized as I am, I hope no-one minds if I ignore all the other awards and talk about them.

The award for Caron-Huot and Vieira is “For profound contributions to the understanding of quantum field theory.” Indeed, both Simon and Pedro have built their reputations as explorers of quantum field theories, the kind of theories we use in particle physics. Both have found surprising behavior in these theories, where a theory people thought they understood did something quite unexpected. Both also developed new calculation methods, using these theories to compute things that were thought to be out of reach. But this is all rather vague, so let me be a bit more specific about each of them:

Simon Caron-Huot is known for his penetrating and mysterious insight. He has the ability to take a problem and think about it in a totally original way, coming up with a solution that no-one else could have thought of. When I first worked with him, he took a calculation that the rest of us would have taken a month to do and did it by himself in a week. His insight seems to come in part from familiarity with the physics literature, forgotten papers from the 60’s and 70’s that turn out surprisingly useful today. Largely, though, his insight is his own, an inimitable style that few can anticipate. His interests are broad, from exotic toy models to well-tested theories that describe the real world, covering a wide range of methods and approaches. Physicists tend to describe each other in terms of standard “virtues”: depth and breadth, knowledge and originality. Simon somehow seems to embody all of them.

Pedro Vieira is mostly known for his work with integrable theories. These are theories where if one knows the right trick one can “solve” the theory exactly, rather than using the approximations that physicists often rely on. Pedro was a mentor to me when I was a postdoc at the Perimeter Institute, and one thing he taught me was to always expect more. When calculating with computer code I would wait hours for a result, while Pedro would ask “why should it take hours?”, and if we couldn’t propose a reason would insist we find a quicker way. This attitude paid off in his research, where he has used integrable theories to calculate things others would have thought out of reach. His Pentagon Operator Product Expansion, or “POPE”, uses these tricks to calculate probabilities that particles collide, and more recently he pushed further to other calculations with a hexagon-based approach (which one might call the “HOPE”). Now he’s working on “bootstrapping” up complicated theories from simple physical principles, once again asking “why should this be hard?”

At Aspen

I’m at the Aspen Center for Physics this week, for a workshop on Scattering Amplitudes and the Conformal Bootstrap.

Aspen is part of a long and illustrious tradition of physics conference sites located next to ski resorts. It’s ten years younger than its closest European counterpart, the Les Houches School of Physics, but if anything its traditions are stricter: all blackboard talks, and a minimum two-week visit. Where Les Houches runs summer schools, Aspen aims to inspire collaboration: to get physicists working and hiking around each other until inspiration strikes.

This workshop is a meeting between two communities: people who study the Conformal Bootstrap (nice popular description here) and my own field of Scattering Amplitudes. The Conformal Bootstrap is one of our closest sister-fields, so there may be a lot of potential for collaboration. This week’s talks have been amplitudes-focused; I’m looking forward to next week’s talks, which will highlight connections between the two fields.

Breakthrough Prize for Supergravity

This week, $3 Million was awarded by the Breakthrough Prize to Sergio Ferrara, Daniel Z. Freedman and Peter van Nieuwenhuizen, the discoverers of the theory of supergravity, part of a special award separate from their yearly Fundamental Physics Prize. There’s a nice interview with Peter van Nieuwenhuizen on the Stony Brook University website, about his reaction to the award.

The Breakthrough Prize was designed to complement the Nobel Prize, rewarding deserving researchers who wouldn’t otherwise get the Nobel. The Nobel Prize is only awarded to theoretical physicists when they predict something that is later observed in an experiment. Many theorists are instead renowned for their mathematical inventions, concepts that other theorists build on and use but that do not by themselves make testable predictions. The Breakthrough Prize celebrates these theorists, and while it has also been awarded to others who the Nobel committee could not or did not recognize (various large experimental collaborations, Jocelyn Bell Burnell), this has always been the physics prize’s primary focus.

The Breakthrough Prize website describes supergravity as a theory that combines gravity with particle physics. That’s a bit misleading: while the theory does treat gravity in a “particle physics” way, unlike string theory it doesn’t solve the famous problems with combining quantum mechanics and gravity. (At least, as far as we know.)

It’s better to say that supergravity is a theory that links gravity to other parts of particle physics, via supersymmetry. Supersymmetry is a relationship between two types of particles: bosons, like photons, gravitons, or the Higgs, and fermions, like electrons or quarks. In supersymmetry, each type of boson has a fermion “partner”, and vice versa. In supergravity, gravity itself gets a partner, called the gravitino. Supersymmetry links the properties of particles and their partners together: both must have the same mass and the same charge. In a sense, it can unify different types of particles, explaining both under the same set of rules.

In the real world, we don’t see bosons and fermions with the same mass and charge. If gravitinos exist, then supersymmetry would have to be “broken”, giving them a high mass that makes them hard to find. Some hoped that the Large Hadron Collider could find these particles, but now it looks like it won’t, so there is no evidence for supergravity at the moment.

Instead, supergravity’s success has been as a tool to understand other theories of gravity. When the theory was proposed in the 1970’s, it was thought of as a rival to string theory. Instead, over the years it consistently managed to point out aspects of string theory that the string theorists themselves had missed, for example noticing that the theory needed not just strings but higher-dimensional objects called “branes”. Now, supergravity is understood as one part of a broader string theory picture.

In my corner of physics, we try to find shortcuts for complicated calculations. We benefit a lot from toy models: simpler, unrealistic theories that let us test our ideas before applying them to the real world. Supergravity is one of the best toy models we’ve got, a theory that makes gravity simple enough that we can start to make progress. Right now, colleagues of mine are developing new techniques for calculations at LIGO, the gravitational wave telescope. If they hadn’t worked with supergravity first, they would never have discovered these techniques.

The discovery of supergravity by Ferrara, Freedman, and van Nieuwenhuizen is exactly the kind of work the Breakthrough Prize was created to reward. Supergravity is a theory with deep mathematics, rich structure, and wide applicability. There is of course no guarantee that such a theory describes the real world. What is guaranteed, though, is that someone will find it useful.

Reader Background Poll Reflections

A few weeks back I posted a poll, asking you guys what sort of physics background you have. The idea was to follow up on a poll I did back in 2015, to see how this blog’s audience has changed.

One thing that immediately leaped out of the data was how many of you are physicists. As of writing this, 66% of readers say they either have a PhD in physics or a related field, or are currently in grad school. This includes 7% specifically from my sub-field, “amplitudeology” (though this number may be higher than usual since we just had our yearly conference, and more amplitudeologists were reminded my blog exists).

I didn’t use the same categories in 2015, so the numbers can’t be easily compared. In 2015 only 2.5% of readers described themselves as amplitudeologists. Adding that group to 2015’s physics PhDs and grad students gives 59%, which rises to 64.5% if I include the mathematicians (who this year might have put either “PhD in a related field” or “Other Academic”). So overall the percentages are pretty similar, though now it looks like more of my readers are grad students.

Despite the small difference, I am a bit worried: it looks like I’m losing non-physicist readers. I could flatter myself and think that I inspired those non-physicists to go to grad school, but more realistically I should admit that fewer of my posts have been interesting to a non-physics audience. In 2015 I worked at the Perimeter Institute, and helped out with their public lectures. Now I’m at the Niels Bohr Institute, and I get fewer opportunities to hear questions from non-physicists. I get fewer ideas for interesting questions to answer.

I want to keep this blog’s language accessible and its audience general. I appreciate that physicists like this blog and view it as a resource, but I don’t want it to turn into a blog for physicists only. I’d like to encourage the non-physicists in the audience: ask questions! Don’t worry if it sounds naive, or if the question seems easy: if you’re confused, likely others are too.

Amplitudes 2019 Retrospective

I’m back from Amplitudes 2019, and since I have more time I figured I’d write down a few more impressions.

Amplitudes runs all the way from practical LHC calculations to almost pure mathematics, and this conference had plenty of both as well as everything in between. On the more practical side a standard “pipeline” has developed: get a large number of integrals from generalized unitarity, reduce them to a more manageable number with integration-by-parts, and then compute them with differential equations. Vladimir Smirnov and Johannes Henn presented the state of the art in this pipeline: challenging QCD calculations that required powerful methods. Others aimed to replace various parts of the pipeline. Integration-by-parts could be avoided in the numerical unitarity approach discussed by Ben Page, or alternatively with the intersection theory techniques showcased by Pierpaolo Mastrolia. More radical departures included Stefan Weinzierl’s refinement of loop-tree duality, and Jacob Bourjaily’s advocacy of prescriptive unitarity. Robert Schabinger even brought up direct integration, though I mostly viewed his talk as an independent confirmation of the usefulness of Erik Panzer’s thesis. It also showcased an interesting integral that Lorenzo Tancredi and collaborators had previously represented as elliptic, but that turned out to be writable in terms of more familiar functions. It’s going to be interesting to see whether other such integrals arise, and whether they can be spotted in advance.
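
For the non-experts, here’s a cartoon of the last step of that pipeline, using a made-up single-variable equation rather than anything from the talks. In “canonical form” the differential equation is proportional to the dimensional regulator \epsilon, so each order in \epsilon is just one more integration:

```python
# A cartoon of the "differential equations" step of the pipeline, using
# an invented toy equation (not an integral from any of the talks):
# dF/dx = epsilon * (1/x + 1/(1-x)) * F in canonical form, solved order
# by order in epsilon by iterated integration.
import sympy as sp

x = sp.symbols('x', positive=True)
kernel = 1/x + 1/(1 - x)  # a toy "alphabet" with letters x and 1-x

F = [sp.Integer(1)]        # epsilon^0 term, fixed by a boundary condition
for k in range(1, 3):      # each order integrates the previous one
    F.append(sp.integrate(kernel * F[-1], x))

for k, term in enumerate(F):
    print(f"epsilon^{k}:", term)

# The results are iterated integrals: logarithms at first order,
# dilogarithms at second. SymPy may leave some higher-order pieces
# unevaluated; dedicated tools like Erik Panzer's HyperInt handle
# these systematically.
```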

On the other end of the scale, Francis Brown was the only speaker deep enough in the culture of mathematics to insist on doing a blackboard talk. Since the conference hall didn’t actually have a blackboard, this was accomplished by projecting video of a piece of paper that he wrote on as the talk progressed. Despite the awkward setup, the talk was impressively clear, though there were enough questions that he ran out of time at the end and had to “cheat” by just projecting his notes instead. He presented a few theorems about the sort of integrals that show up in string theory. Federico Zerbini and Eduardo Casali’s talks covered similar topics, with the latter also involving intersection theory. Intersection theory also appeared in a poster from grad student Andrzej Pokraka, which overall is a pretty impressively broad showing for a part of mathematics that Sebastian Mizera first introduced to the amplitudes community less than two years ago.

Nima Arkani-Hamed’s talk on Wednesday fell somewhere in between. A series of airline mishaps brought him there only a few hours before his talk, and his own busy schedule sent him back to the airport right after the last question. The talk itself covered several topics, tied together a bit better than usual by a nice account in the beginning of what might motivate a “polytope picture” of quantum field theory. One particularly interesting aspect was a suggestion of a space, smaller than the amplituhedron, that might more accurately describe the “alphabet” that appears in N=4 super Yang-Mills amplitudes. If his proposal works, it may be that the infinite alphabet we were worried about for eight-particle amplitudes is actually finite. Ömer Gürdoğan’s talk mentioned this, and drew out some implications. Overall, I’m still unclear as to what this story says about whether the alphabet contains square roots, but that’s a topic for another day. My talk was right after Nima’s, and while he went over-time as always, I compensated by accidentally going under-time. I think folks had fun regardless.

Amplitudes 2019

It’s that time of year again, and I’m at Amplitudes, my field’s big yearly conference. This year we’re in Dublin, hosted by Trinity.

Which also hosts the Book of Kells, and the occasional conference reception just down the hall from the Book of Kells

Increasingly, the organizers of Amplitudes have been setting aside a few slots for talks from people in other fields. This year the “closest” such speaker was Kirill Melnikov, who pointed out some of the hurdles that make it difficult to produce useful calculations to compare to the LHC. Many of these hurdles aren’t things that amplitudes-people have traditionally worked on, but they are still things that might benefit from our particular expertise. Another such speaker, Maxwell Hansen, is from a field called Lattice QCD. While amplitudeologists typically compute with approximations, order by order in more and more complicated diagrams, Lattice QCD instead simulates particle physics on supercomputers, chopping up their calculations on a grid. This allows them to study much stronger forces, including the messy interactions of quarks inside protons, but they have a harder time with the situations we’re best at, where two particles collide from far away. Apparently, though, they are making progress on that kind of calculation, with some clever tricks to connect it to calculations they know how to do. While I was a bit worried that this would let them fire all the amplitudeologists and replace us with supercomputers, they’re not quite there yet; nonetheless, they’re doing better than I would have expected. Other speakers from other fields included Leron Borsten, who has been applying the amplitudes concept of the “double copy” to M theory, and Andrew Tolley, who uses the kind of “positivity” properties that amplitudeologists find interesting to restrict the kinds of theories used in cosmology.

The biggest set of “non-traditional-amplitudes” talks focused on using amplitudes techniques to calculate the behavior not of particles but of black holes, to predict the gravitational wave patterns detected by LIGO. This year featured a record six talks on the topic, a sixth of the conference. Last year I commented that the research ideas from amplitudeologists on gravitational waves had gotten more robust, with clearer proposals for how to move forward. This year things have developed even further, with several initial results. Even more encouragingly, while there are several groups doing different things they appear to be genuinely listening to each other: there were plenty of references in the talks both to other amplitudes groups and to work by more traditional gravitational physicists. There’s definitely still plenty of lingering confusion that needs to be cleared up, but it looks like the community is robust enough to work through it.

I’m still busy with the conference, but I’ll say more when I’m back next week. Stay tuned for square roots, clusters, and Nima’s travel schedule. And if you’re a regular reader, please fill out last week’s poll if you haven’t already!