Tag Archives: amplitudes

In Uppsala for Elliptics 2021

I’m in Uppsala in Sweden this week, at an actual in-person conference.

With actual blackboards!

Elliptics started out as a series of small meetings of physicists trying to understand how to make sense of elliptic integrals in calculations of colliding particles. It grew into a full-fledged yearly conference series. I organized last year’s edition, which was naturally an online conference. This year, though, the stage was set for Uppsala University to host in person.

I should say mostly in person. It’s a hybrid conference, with some speakers and attendees joining on Zoom. Some couldn’t make it because of travel restrictions, or just wanted to be cautious about COVID. But seemingly just as many had other reasons, like teaching schedules or just long distances, that kept them from coming in person. We’re all wondering if this will become a long-term trend, where the flexibility of hybrid conferences lets people attend no matter their constraints.

The hybrid format worked better than expected, but there were still a few kinks. The audio was particularly tricky: it seemed like each day the organizers needed a new microphone setup to take questions. It’s always a little harder to understand someone on Zoom, especially when you’re sitting in an auditorium rather than focused on your own screen. Still, technological experience should make this work better in the future.

Content-wise, the conference began with a “mini-school” of pedagogical talks on particle physics, string theory, and mathematics. I found the mathematical talks by Erik Panzer particularly nice: it’s a topic I still feel quite weak on, and he laid everything out in a very clear way. It seemed like a nice touch to include a “school” element in the conference, though I worry it ate too much into the time.

The rest of the content skewed more mathematical, and more string-theoretic, than these conferences have in the past. The mathematical content ranged from intriguing (including an interesting window into what it takes to get high-quality numerics) to intimidatingly obscure (large commutative diagrams, category theory on the first slide). String theory was arguably under-covered in prior years, but it felt over-covered this year. With the particle physics talks focusing either on general properties with perhaps some connections to elliptics, or on N=4 super Yang-Mills, it felt like we were missing the more “practical” talks from past conferences, where someone was computing something concrete in QCD and told us what they needed. Next year is in Mainz, so maybe those talks will reappear.

Stop Listing the Amplituhedron as a Competitor of String Theory

The Economist recently had an article (paywalled) that meandered through various developments in high-energy physics. It started out talking about the failure of the LHC to find SUSY, argued this looked bad for string theory (which…not really?) and used it as a jumping-off point to talk about various non-string “theories of everything”. Peter Woit quoted it a few posts back as kind of a bellwether for public opinion on supersymmetry and string theory.

The article was a muddle, but a fairly conventional muddle, explaining or mis-explaining things in roughly the same way as other popular physics pieces. For the most part that didn’t bug me, but one piece of the muddle hit a bit close to home:

The names of many of these [non-string theories of everything] do, it must be conceded, torture the English language. They include “causal dynamical triangulation”, “asymptotically safe gravity”, “loop quantum gravity” and the “amplituhedron formulation of quantum theory”.

I’ve posted about the amplituhedron more than a few times here on this blog. Out of every achievement of my sub-field, it has most captured the public imagination. It’s legitimately impressive, a way to translate calculations of probabilities of collisions of fundamental particles (in a toy model, to be clear) into geometrical objects. What it isn’t, and doesn’t pretend to be, is a theory of everything.

To be fair, the Economist piece admits this:

Most attempts at a theory of everything try to fit gravity, which Einstein describes geometrically, into quantum theory, which does not rely on geometry in this way. The amplituhedron approach does the opposite, by suggesting that quantum theory is actually deeply geometric after all. Better yet, the amplituhedron is not founded on notions of spacetime, or even statistical mechanics. Instead, these ideas emerge naturally from it. So, while the amplituhedron approach does not as yet offer a full theory of quantum gravity, it has opened up an intriguing path that may lead to one.

Still, the reasoning they use to get to that point contains a few misunderstandings. The amplituhedron is geometrical, but in a completely different way from how Einstein’s theory of gravity is geometrical: Einstein’s gravity is a theory of space and time, while the amplituhedron’s magic is that it hides space and time behind a seemingly more fundamental mathematics.

This is not to say that the amplituhedron won’t lead to insights about gravity. That’s a big part of what it’s for, in the long-term. Because the amplituhedron hides the role of space and time, it might show the way to theories that lack them altogether, theories where space and time are just an approximation for a more fundamental reality. That’s a real possibility, though not at this point a reality.

Even if you take this possibility completely seriously, though, there’s another problem with the Economist’s description: it’s not clear that this new theory would be a non-string theory!

The main people behind the amplituhedron are pretty positively disposed to string theory. If you asked them, I think they’d tell you that, rather than replacing string theory, they expect to learn more about string theory: to see how it could be reformulated in a way that yields insight about trickier problems. That’s not at all like the other “non-string theories of everything” in that list, which frame themselves as alternatives to, or even opponents of, string theory.

It is a lot like several other research programs, though, like ER=EPR and It from Qubit. Researchers in those programs try to use physical principles and toy models to say fundamental things about quantum gravity, trying to think about space and time as being made up of entangled quantum objects. By that logic, they belong in that list in the article alongside the amplituhedron. The reason they aren’t is obvious if you know where they come from: ER=EPR and It from Qubit are worked on by string theorists, including some of the most prominent ones.

The thing is, any reason to put the amplituhedron on that list is also a reason to put them there. The amplituhedron is not a theory of everything, and it is not at present a theory of quantum gravity. It’s a research direction that might shed new light on quantum gravity. It doesn’t explicitly involve strings, but neither does It from Qubit most of the time. Unless you’re going to describe It from Qubit as a “non-string theory of everything”, you really shouldn’t describe the amplituhedron as one.

The amplituhedron is a really cool idea, one with great potential. It’s not something like loop quantum gravity, or causal dynamical triangulations, and it doesn’t need to be. Let it be what it is, please!

Amplitudes 2021 Retrospective

Phew!

The conference photo

Now that I’ve rested up after this year’s Amplitudes, I’ll give a few of my impressions.

Overall, I think the conference went pretty well. People seemed amused by the digital Niels Bohr, even if he looked a bit like a puppet (Lance compared him to Yoda in his final speech, which was…apt). We used Gather.town, originally just for the poster session and a “virtual reception”, but later we also encouraged people to meet up in it during breaks. That in particular was a big hit: I think people really liked the ability to just move around and chat in impromptu groups, and while nobody seemed to use the “virtual bar”, the “virtual beach” had a lively crowd. Time zones were inevitably rough, but I think we ended up with a good compromise where everyone could still see a meaningful chunk of the conference.

A few things didn’t work as well. For those planning conferences, I would strongly suggest not making a brand new gmail account to send out conference announcements: for a lot of people the emails went straight to spam. Zulip was a bust: I’m not sure if people found it more confusing than last year’s Slack or didn’t notice it due to the spam issue, but almost no-one posted in it. YouTube was complicated: the stream went down a few times and I could never figure out exactly why; it may have just been internet issues here at the Niels Bohr Institute (we did have a power outage one night and had to scramble to get internet access back the next morning). As far as I could tell YouTube wouldn’t let me re-open the previous stream, so each time I had to post a new link, which was probably frustrating for those following along there.

That said, this was less of a problem than it might have been, because attendance/”viewership” as a whole was lower than expected. Zoomplitudes last year had massive numbers of people join in both on Zoom and via YouTube. We had a lot fewer: out of over 500 registered participants, we had fewer than 200 on Zoom at any one time, and at most 30 or so on YouTube. Confusion around the conference email might have played a role here, but I suspect part of the difference is simple fatigue: after over a year of this pandemic, online conferences no longer feel like an exciting new experience.

The actual content of the conference ranged pretty widely. Some people reviewed earlier work, others presented recent papers or even work-in-progress. As in recent years, a meaningful chunk of the conference focused on applications of amplitudes techniques to gravitational wave physics. This included a talk by Thibault Damour, who has by now mostly made his peace with the field after his early doubts were sorted out. He still suspected that the mismatch of scales (weak coupling on the one hand, classical scattering on the other) would cause problems in future, but after his work with Laporta and Mastrolia even he had to acknowledge that amplitudes techniques were useful.

In the past I would have put the double-copy and gravitational wave researchers under the same heading, but this year they were quite distinct. While a few of the gravitational wave talks mentioned the double-copy, most of those who brought it up were doing something quite a bit more abstract than gravitational wave physics. Indeed, several people were pushing the boundaries of what it means to double-copy. There were modified KLT kernels, different versions of color-kinematics duality, and explorations of what kinds of massive particles can and (arguably more interestingly) cannot be compatible with a double-copy framework. The sheer range of different generalizations had me briefly wondering whether the double-copy could be “too flexible to be meaningful”, whether the right definitions would let you double-copy anything out of anything. I was reassured by the points where each talk argued that certain things didn’t work: it suggests that wherever this mysterious structure comes from, its powers are limited enough to make it meaningful.

A fair number of talks dealt with what has always been our main application, collider physics. There the context shifted, but the message stayed consistent: for a “clean” enough process, two- or three-loop calculations can make a big difference, taking a prediction that would be completely off from experiment and bringing it into line. These are more useful the more that can be varied about the calculation: functions are more useful than numbers, for example. I was gratified to hear confirmation that a particular kind of process, where two massless particles like quarks become three massive particles like W or Z bosons, is one of these “clean enough” examples: it means someone will need to compute my “tardigrade” diagram eventually.

If collider physics is our main application, N=4 super Yang-Mills has always been our main toy model. Jaroslav Trnka gave us the details behind Nima’s exciting talk from last year, and Nima had a whole new exciting talk this year with promised connections to category theory (connections he didn’t quite reach after speaking for two and a half hours). Anastasia Volovich presented two distinct methods for predicting square-root symbol letters, while my colleague Chi Zhang showed some exciting progress with the elliptic double-box, realizing the several-year dream of representing it in a useful basis of integrals and showcasing several interesting properties. Anne Spiering came over from the integrability side to show us just how special the “planar” version of the theory really is: by increasing the number of colors of gluons, she showed that one could smoothly go between an “integrability-esque” spectrum and a “chaotic” spectrum. Finally, Lance Dixon mentioned his progress with form-factors in his talk at the end of the conference, showing off some statistics of coefficients of different functions and speculating that machine learning might be able to predict them.

On the more mathematical side, Francis Brown showed us a new way to get numbers out of graphs, one distinct from, but related to, our usual interpretation in terms of Feynman diagrams. I’m still unsure what it will be used for, but the fact that it maps every graph to something finite probably has some interesting implications. Albrecht Klemm and Claude Duhr talked about two sides of the same story, their recent work on integrals involving Calabi-Yau manifolds. They focused on a particularly nice set of integrals, and time will tell whether the methods work more broadly, but there are some exciting suggestions that at least parts will.

There’s been a resurgence of the old dream of the S-matrix community, constraining amplitudes via “general constraints” alone, and several talks dealt with those ideas. Sebastian Mizera went the other direction, and tried to test one of those “general constraints”, seeing under which circumstances he could prove that you can swap a particle going in with an antiparticle going out. Others went out to infinity, trying to understand amplitudes from the perspective of the so-called “celestial sphere” where they appear to be governed by conformal field theories of some sort. A few talks dealt with amplitudes in string theory itself: Yvonne Geyer built them out of field-theory amplitudes, while Ashoke Sen explained how to include D-instantons in them.

We also had three “special talks” in the evenings. I’ve mentioned Nima’s already. Zvi Bern gave a retrospective talk that I somewhat cheesily describe as “good for the soul”: a look to the early days of the field that reminded us of why we are who we are. Lance Dixon closed the conference with a light-hearted summary and a look to the future. That future includes next year’s Amplitudes, which after a hasty discussion during this year’s conference has now localized to Prague. Let’s hope it’s in person!

Busy Organizing Amplitudes 2021

I’m busy this week with Amplitudes 2021. Being behind the “organizer’s desk” for one of these conferences is an entirely different experience. There’s a lot to keep track of: keeping the Zoom going smoothly, the website up to date, and the YouTube stream running. Luckily we have good help, a team of students handling a lot of the more finicky details. I think we’ve been putting on a good conference, but there are definitely lessons I’ve learned for the next time I host something.

The content has been interesting too, of course, and despite being busy I’ve still gotten to watch the talks. I’ll say more about this after the conference; there have been quite a few interesting developments in the past year.

Next Week, Amplitudes 2021!

I calculate things called scattering amplitudes, the building-blocks of predictions in particle physics. I’m part of a community of “amplitudeologists” who try to find better ways to compute these things, to achieve more efficiency and deeper understanding. We meet once a year for our big conference, called Amplitudes. And this year, I’m one of the organizers.
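
(“Building-blocks” in a concrete, if schematic, sense: if \mathcal{M} is the amplitude for a given set of particles to scatter, then the probability of observing that scattering is proportional to

\left|\mathcal{M}\right|^2

with the precise proportionality factors depending on the details of the process.)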

This year also happens to be the 100th anniversary of the founding of the Niels Bohr Institute, so we wanted to do something special. We found a group of artists working on a rendering of Niels Bohr. The original idea was to do one of those celebrity holograms, but after the conference went online we decided to make a few short clips instead. I wrote a Bohr-esque script, and we got help from one of Bohr’s descendants to get the voice just-so. Now, you can see the result, as our digital Bohr invites you to the conference.

We’ll be livestreaming the conference on the same YouTube channel, and posting videos of the talks each day. If you’re curious about the latest developments in scattering amplitudes, I encourage you to tune in. And if you’re an amplitudeologist yourself, registration is still open!

Reality as an Algebra of Observables

Listen to a physicist talk about quantum mechanics, and you’ll hear the word “observable”. Observables are, intuitively enough, things that can be observed. They’re properties that, in principle, one could measure in an experiment, like the position of a particle or its momentum. They’re the kinds of things linked by uncertainty principles, where the better you know one, the worse you know the other.

Some physicists get frustrated by this focus on measurements alone. They think we ought to treat quantum mechanics, not like a black box that produces results, but as information about some underlying reality. Instead of just observables, they want us to look for “beables”: not just things that can be observed, but things that something can be. From their perspective, the way other physicists focus on observables feels like giving up, like those physicists are abandoning their sacred duty to understand the world. Others, like the Quantum Bayesians or QBists, disagree, arguing that quantum mechanics really is, and ought to be, a theory of how individuals get evidence about the world.

I’m not really going to weigh in on that debate; I still don’t feel like I know enough to even write a decent summary. But I do think that one of the instincts on the “beables” side is wrong. If we focus on observables in quantum mechanics, I don’t think we’re doing anything all that unusual. Even in other parts of physics, we can think about reality purely in terms of observations. Doing so isn’t a dereliction of duty: often, it’s the most useful way to understand the world.

When we try to comprehend the world, we always start alone. From our time in the womb, we have only our senses and emotions to go on. With a combination of instinct and inference we start assembling a consistent picture of reality. Philosophers called phenomenologists (not to be confused with the physicists called phenomenologists) study this process in detail, trying to characterize how different things present themselves to an individual consciousness.

For my point here, these details don’t matter so much. That’s because in practice, we aren’t alone in understanding the world. Based on what others say about the world, we conclude that they perceive much like we do, and we learn from their observations just as we learn from our own. We can make things abstract: instead of the specifics of how individuals perceive, we think about groups of scientists making measurements. At the end of this train lie observables: things that we as a community could in principle learn, and share with each other, ignoring the details of how exactly we measure them.

If each of these observables was unrelated, just scattered points of data, then we couldn’t learn much. Luckily, they are related. In quantum mechanics, some of these relationships are the uncertainty principles I mentioned earlier. Others relate measurements at different places, or at different times. The fancy way to refer to all these relationships is as an algebra: loosely, it’s something you can “do algebra with”, like you did with numbers and variables in high school. When physicists and mathematicians want to do quantum mechanics or quantum field theory seriously, they often talk about an “algebra of observables”, a formal way of thinking about all of these relationships.
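
To give one completely standard textbook example of such a relationship (nothing specific to any particular interpretation): position and momentum, as observables, satisfy the commutation relation

\left[\hat{x},\hat{p}\right]=i\hbar

and it is exactly this piece of the algebra that gives rise to the uncertainty principle I mentioned earlier,

\Delta x\,\Delta p\geq\frac{\hbar}{2}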

Focusing on those two things, observables and how they are related, isn’t just useful in the quantum world. It’s an important way to think in other areas of physics too. If you’ve heard people talk about relativity, the focus on measurement screams out, in thought experiments full of abstract clocks and abstract yardsticks. Without this discipline, you find paradoxes, only to resolve them when you carefully track what each person can observe. More recently, physicists in my field have had success computing the chance particles collide by focusing on the end result, the actual measurements people can make, ignoring what might happen in between to cause that measurement. We can then break measurements down into simpler measurements, or use the structure of simpler measurements to guess more complicated ones. While we typically have done this in quantum theories, that’s not really a limitation: the same techniques make sense for problems in classical physics, like computing the gravitational waves emitted by colliding black holes.

With this in mind, we really can think of reality in those terms: not as a set of beable objects, but as a set of observable facts, linked together in an algebra of observables. Paring things down to what we can know in this way is more honest, and it’s also more powerful and useful. Far from a betrayal of physics, it’s the best advantage we physicists have in our quest to understand the world.

A Tale of Two Donuts

I’ve got a new paper up this week, with Hjalte Frellesvig, Cristian Vergu, and Matthias Volk, about the elliptic integrals that show up in Feynman diagrams.

You can think of elliptic integrals as integrals over a torus, a curve shaped like the outer crust of a donut.

Do you prefer your integrals glazed, or with powdered sugar?
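
(For readers who want a little more detail, here’s a rough sketch in generic notation, not the specific integrals from our paper: an elliptic integral is, loosely speaking, an integral of the form

\int\frac{dx}{\sqrt{(x-a_1)(x-a_2)(x-a_3)(x-a_4)}}

with four distinct roots a_1,\ldots,a_4. The equation y^2=(x-a_1)(x-a_2)(x-a_3)(x-a_4) defines a curve which, viewed over the complex numbers, has the shape of a torus. That’s the donut in question.)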

Integrals like these are showing up more and more in our field, the subject of bigger and bigger conferences. By now, we think we have a pretty good idea of how to handle them, but there are still some outstanding mysteries to solve.

One such mystery came up in a paper in 2017, by Luise Adams and Stefan Weinzierl. They were working with one of the favorite examples of this community, the so-called sunrise diagram (sunrise being a good time to eat donuts). And they noticed something surprising: if they looked at the sunrise diagram in different ways, it was described by different donuts.
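
(Schematically, and in my own rough notation rather than exactly as Adams and Weinzierl write it, the equal-mass sunrise diagram corresponds to an integral of the form

\int d^Dk\,d^Dl\,\frac{1}{(k^2-m^2)\,(l^2-m^2)\,((p-k-l)^2-m^2)}

two loop momenta to integrate over, and three particles of mass m travelling between the same pair of points.)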

What do I mean, different donuts?

The integrals we know best in this field aren’t integrals on a torus, but rather integrals on a sphere. In some sense, all spheres are the same: you can make them bigger or smaller, but they don’t have different shapes, they’re all “sphere-shaped”. In contrast, integrals on a torus are trickier, because toruses can have different shapes. Think about different donuts: some might have a thin ring, others a thicker one, even if the overall donut is the same size. You can’t just scale up one donut and get the other.

This donut even has a marked point
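
(For those who want something a little more precise, and again these are standard facts about tori rather than anything new from our paper: the “shape” of a torus can be packaged into a single complex number, usually called \tau, the ratio of the torus’s two periods,

\tau=\frac{\omega_2}{\omega_1}

Two tori describe the same shape only if their values of \tau are related by a so-called modular transformation,

\tau\rightarrow\frac{a\tau+b}{c\tau+d}

with integers a, b, c, d satisfying ad-bc=1. So two donuts with genuinely different \tau really are different donuts, no matter how you stretch or rescale them.)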

My colleague, Cristian Vergu, was annoyed by this. He’s the kind of person who trusts mathematics like an old friend, one who would never lead him astray. He thought that there must be one answer, one correct donut, one natural way to represent the sunrise diagram mathematically. I was skeptical; I don’t trust mathematics nearly as much as Cristian does. To sort it out, we brought in Hjalte Frellesvig and Matthias Volk, and started trying to write the sunrise diagram every way we possibly could. (Along the way, we threw in another “donut diagram”, the double-box, just to see what would happen.)

Rather than getting a zoo of different donuts, we got a surprise: we kept seeing the same two. And in the end, we stumbled upon the answer Cristian was hoping for: one of these two is, in a meaningful sense, the “correct donut”.

What was wrong with the other donut? It turns out that when the original two donuts were found, one of them involved a move that is a bit risky mathematically: combining square roots.

For readers who don’t know what I mean, or why this is risky, let me give a simple example. Everyone else can skip to after the torus gif.

Suppose I am solving a problem, and I find a product of two square roots:

\sqrt{x}\sqrt{x}

I could try combining them under the same square root sign, like so:

\sqrt{x^2}

That works, if x is positive. But now suppose x=-1. Plug negative one into the first expression, and you get,

\sqrt{-1}\sqrt{-1}=i\times i=-1

while in the second,

\sqrt{(-1)^2}=\sqrt{1}=1
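
If you’d rather watch a computer make the same mistake, here’s a quick check in Python (just an illustration of this general point about square roots, using the standard cmath library; it has nothing to do with our actual calculation):

import cmath

x = -1

# Take each square root separately, then multiply: i times i gives -1
separate = cmath.sqrt(x) * cmath.sqrt(x)

# Combine under one square root first: sqrt((-1)^2) = sqrt(1) = 1
combined = cmath.sqrt(x ** 2)

print(separate)  # prints (-1+0j)
print(combined)  # prints (1+0j)

The two expressions agree for positive x, but once x goes negative they give different answers.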

Torus transforming, please stand by

In this case, it wasn’t as obvious that combining roots would change the donut. It might have been perfectly safe. It took some work to show that indeed, this was the root of the problem. If the roots are instead combined more carefully, then one of the donuts goes away, leaving only the one, true donut.

I’m interested in seeing where this goes, how many different donuts we have to understand and how they might be related. But I’ve also been writing about donuts for the last hour or so, so I’m getting hungry. See you next week!

This Week, at Scattering-Amplitudes.com

I did a guest post this week, on an outreach site for the Max Planck Institute for Physics. The new Director of their Quantum Field Theory Department, Johannes Henn, has been behind a lot of major developments in scattering amplitudes. He was one of the first to notice just how symmetric N=4 super Yang-Mills is, as well as the first to build the “hexagon functions” that would become my stock-in-trade. He’s also done what we all strive to do, and applied what he learned to the real world, coming up with an approach to differential equations that has become the gold standard for many different amplitudes calculations.
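
(For the amplitudes-curious, the rough idea, in the schematic form it’s usually summarized in rather than anything quoted from the site: for many processes one can choose a basis of Feynman integrals \vec{f} whose differential equations take a “canonical” form,

d\vec{f}(x;\epsilon)=\epsilon\,\big(dA(x)\big)\,\vec{f}(x;\epsilon)

where all of the dependence on the dimensional-regularization parameter \epsilon sits in a single overall factor, which makes solving the equations order by order in \epsilon essentially mechanical.)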

Now in his new position, he has a swanky new outreach site, reached at the conveniently memorable scattering-amplitudes.com and managed by outreach-ologist Sorana Scholtes. They started a fun series recently called “Talking Terms” as a kind of glossary, explaining words that physicists use over and over again. My guest post for them is part of that series. It hearkens all the way back to one of my first posts, defining what “theory” means to a theoretical physicist. It covers something new as well, a phrase I don’t think I’ve ever explained on this blog: “working in a theory”. You can check it out on their site!

A Physicist New Year

Happy New Year to all!

Physicists celebrate the new year by trying to sneak one last paper in before the year is over. Looking at Facebook last night I saw three different friends preview the papers they had just submitted. The site where these papers appear, arXiv, had seventy new papers this morning just in the category of theoretical high-energy physics. Of those, nine were in my subfield or a closely related one.

I’d love to tell you all about these papers (some exciting! some long-awaited!), but I’m still tired from last night and haven’t read them yet. So I’ll just close by wishing you all, once again, a happy new year.

QCD Meets Gravity 2020, Retrospective

I was at a Zoomference last week, called QCD Meets Gravity, about the many ways gravity can be thought of as the “square” of other fundamental forces. I didn’t have time to write much about the actual content of the conference, so I figured I’d say a bit more this week.

A big theme of this conference, as in the past few years, was gravitational waves. Ever since LIGO’s first announcement of a successful detection, amplitudeologists have been developing new methods to make predictions for gravitational waves more efficient. It’s a field I’ve dabbled in a bit myself. Last year’s QCD Meets Gravity left me impressed by how much progress had been made, with amplitudeologists already solidly part of the conversation and able to produce competitive results. This year felt like another milestone, in that the amplitudeologists weren’t just catching up with other gravitational wave researchers on the same kinds of problems. Instead, they found new questions that amplitudes are especially well-suited to answer. These included combining two pieces of these calculations (“potential” and “radiation”) that the older community typically has to calculate separately, using an old quantum field theory trick, finding the gravitational wave directly from amplitudes, and finding a few nice calculations that can be used to “generate” the rest.

A large chunk of the talks focused on different “squaring” tricks (or as we actually call them, double-copies). There were double-copies for cosmology and conformal field theory, for the celestial sphere, and even some version of M theory. There were new perspectives on the double-copy, new building blocks and algebraic structures that lie behind it. There were talks on the so-called classical double-copy for space-times, where there have been some strange discoveries (an extra dimension made an appearance) but also a more rigorous picture of where the whole thing comes from, using twistor space. There were not one, but two talks linking the double-copy to the Navier-Stokes equation describing fluids, from two different groups. (I’m really curious whether these perspectives are actually useful for practical calculations about fluids, or just fun to think about.) Finally, while there wasn’t a talk scheduled on this paper, the authors were roped in by popular demand to talk about their work. They claim to have made progress on a longstanding puzzle, how to show that double-copy works at the level of the Lagrangian, and the community was eager to dig into the details.

From there, a grab-bag of talks covered other advancements. There were talks from string theorists and ambitwistor string theorists, from Effective Field Theorists working on gravity and the Standard Model, and on calculations in N=4 super Yang-Mills, QCD, and scalar theories. Simon Caron-Huot delved into how causality constrains the theories we can write down, showing an interesting case where the common assumption that all parameters are close to one is actually justified. Nima Arkani-Hamed began his talk by saying he’d surprise us, which he certainly did (and not by keeping on time). It’s tricky to explain why his talk was exciting. Compared to his earlier discovery of the Amplituhedron, which worked for a toy model, this is a toy calculation in a toy model. While the Amplituhedron wasn’t based on Feynman diagrams, this can’t even be compared with Feynman diagrams. Instead of expanding in a small coupling constant, this expands in a parameter that by all rights should be equal to one. And instead of positivity conditions, there are negativity conditions. All I can say is that with all of that in mind, it looks like real progress on an important and difficult problem from a totally unanticipated direction. In a speech summing up the conference, Zvi Bern mentioned a few exciting words from Nima’s talk: “nonplanar”, “integrated”, “nonperturbative”. I’d add “differential equations” and “infinite sums of ladder diagrams”. Nima and collaborators are trying to figure out what happens when you sum up all of the Feynman diagrams in a theory. I’d made progress in the past for diagrams with one “direction”, a ladder that grows as you add more loops, but I didn’t know how to add “another direction” to the ladder. In very rough terms, Nima and collaborators figured out how to add that direction.

I’ve probably left things out here, it was a packed conference! It’s been really fun seeing what the community has cooked up, and I can’t wait to see what happens next.