# Reality as an Algebra of Observables

Listen to a physicist talk about quantum mechanics, and you’ll hear the word “observable”. Observables are, intuitively enough, things that can be observed. They’re properties that, in principle, one could measure in an experiment, like the position of a particle or its momentum. They’re the kinds of things linked by uncertainty principles, where the better you know one, the worse you know the other.

Some physicists get frustrated by this focus on measurements alone. They think we ought to treat quantum mechanics not as a black box that produces results, but as information about some underlying reality. Instead of just observables, they want us to look for “beables”: not just things that can be observed, but things that something can be. From their perspective, the way other physicists focus on observables feels like giving up, like those physicists are abandoning their sacred duty to understand the world. Others, like the Quantum Bayesians or QBists, disagree, arguing that quantum mechanics really is, and ought to be, a theory of how individuals get evidence about the world.

I’m not really going to weigh in on that debate; I still don’t feel like I know enough to even write a decent summary. But I do think that one of the instincts on the “beables” side is wrong. If we focus on observables in quantum mechanics, I don’t think we’re doing anything all that unusual. Even in other parts of physics, we can think about reality purely in terms of observations. Doing so isn’t a dereliction of duty: often, it’s the most useful way to understand the world.

When we try to comprehend the world, we always start alone. From our time in the womb, we have only our senses and emotions to go on. With a combination of instinct and inference we start assembling a consistent picture of reality. Philosophers called phenomenologists (not to be confused with the physicists called phenomenologists) study this process in detail, trying to characterize how different things present themselves to an individual consciousness.

For my point here, these details don’t matter so much. That’s because in practice, we aren’t alone in understanding the world. Based on what others say about the world, we conclude that they perceive much as we do, and we learn from their observations just as we learn from our own. We can make things abstract: instead of the specifics of how individuals perceive, we think about groups of scientists making measurements. At the end of this chain of abstraction lie observables: things that we as a community could in principle learn, and share with each other, ignoring the details of how exactly we measure them.

If each of these observables was unrelated, just scattered points of data, then we couldn’t learn much. Luckily, they are related. In quantum mechanics, some of these relationships are the uncertainty principles I mentioned earlier. Others relate measurements at different places, or at different times. The fancy way to refer to all these relationships is as an algebra: loosely, it’s something you can “do algebra with”, like you did with numbers and variables in high school. When physicists and mathematicians want to do quantum mechanics or quantum field theory seriously, they often talk about an “algebra of observables”, a formal way of thinking about all of these relationships.
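
To make that concrete, here is the textbook example of such a relationship (a standard fact of quantum mechanics, not anything specific to a particular formulation): position and momentum fail to commute, and that one algebraic statement is what lies behind their uncertainty principle.

$\hat{x}\hat{p}-\hat{p}\hat{x}=i\hbar$

$\sigma_x\,\sigma_p\geq\frac{\hbar}{2}$

Relations like these, taken all together, are the “algebra” in an algebra of observables.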

Focusing on those two things, observables and how they are related, isn’t just useful in the quantum world. It’s an important way to think in other areas of physics too. If you’ve heard people talk about relativity, the focus on measurement screams out, in thought experiments full of abstract clocks and abstract yardsticks. Without that discipline you run into apparent paradoxes, resolved only once you carefully track what each observer can actually measure. More recently, physicists in my field have had success computing the chance that particles collide by focusing on the end result, the actual measurements people can make, ignoring what might happen in between to cause that measurement. We can then break measurements down into simpler measurements, or use the structure of simpler measurements to guess more complicated ones. While we have typically done this in quantum theories, that’s not really a limitation: the same techniques make sense for problems in classical physics, like computing the gravitational waves emitted by colliding black holes.

With this in mind, we really can think of reality in those terms: not as a set of beable objects, but as a set of observable facts, linked together in an algebra of observables. Paring things down to what we can know in this way is more honest, and it’s also more powerful and useful. Far from a betrayal of physics, it’s the best advantage we physicists have in our quest to understand the world.

# A Tale of Two Donuts

I’ve got a new paper up this week, with Hjalte Frellesvig, Cristian Vergu, and Matthias Volk, about the elliptic integrals that show up in Feynman diagrams.

You can think of elliptic integrals as integrals over a torus, a curve shaped like the outer crust of a donut.

Integrals like these are showing up more and more in our field, the subject of bigger and bigger conferences. By now, we think we have a pretty good idea of how to handle them, but there are still some outstanding mysteries to solve.

One such mystery came up in a paper in 2017, by Luise Adams and Stefan Weinzierl. They were working with one of the favorite examples of this community, the so-called sunrise diagram (sunrise being a good time to eat donuts). And they noticed something surprising: if they looked at the sunrise diagram in different ways, it was described by different donuts.

What do I mean, different donuts?

The integrals we know best in this field aren’t integrals on a torus, but rather integrals on a sphere. In some sense, all spheres are the same: you can make them bigger or smaller, but they don’t have different shapes, they’re all “sphere-shaped”. In contrast, integrals on a torus are trickier, because toruses can have different shapes. Think about different donuts: some might have a thin ring, others a thicker one, even if the overall donut is the same size. You can’t just scale up one donut and get the other.
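
For readers who want the mathematical version (a standard fact about elliptic curves, not something specific to our paper): the shape of a torus is captured by a complex number $\tau$, the modular parameter, and two toruses only count as “the same donut” if their parameters are related by a modular transformation,

$\tau\to\frac{a\tau+b}{c\tau+d},\qquad a,b,c,d\in\mathbb{Z},\quad ad-bc=1$

If no such transformation connects them, you really do have two different donuts.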

My colleague, Cristian Vergu, was annoyed by this. He’s the kind of person who trusts mathematics like an old friend, one who would never lead him astray. He thought that there must be one answer, one correct donut, one natural way to represent the sunrise diagram mathematically. I was skeptical; I don’t trust mathematics nearly as much as Cristian does. To sort it out, we brought in Hjalte Frellesvig and Matthias Volk, and started trying to write the sunrise diagram every way we possibly could. (Along the way, we threw in another “donut diagram”, the double-box, just to see what would happen.)

Rather than getting a zoo of different donuts, we got a surprise: we kept seeing the same two. And in the end, we stumbled upon the answer Cristian was hoping for: one of these two is, in a meaningful sense, the “correct donut”.

What was wrong with the other donut? It turns out that, when the original two donuts were found, one of them involved a move that is a bit risky mathematically: combining square roots.

For readers who don’t know what I mean, or why this is risky, let me give a simple example. Everyone else can skip ahead a few paragraphs.

Suppose I am solving a problem, and I find a product of two square roots:

$\sqrt{x}\sqrt{x}$

I could try combining them under the same square root sign, like so:

$\sqrt{x^2}$

That works, if $x$ is positive. But now suppose $x=-1$. Plug negative one into the first expression, and you get,

$\sqrt{-1}\sqrt{-1}=i\times i=-1$

while in the second,

$\sqrt{(-1)^2}=\sqrt{1}=1$
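
If you want to check this yourself, here is a quick sanity check in Python (just an illustration of the branch-cut issue, not anything from our actual calculation):

```python
import cmath

x = -1

# Multiply the two square roots first: sqrt(-1) * sqrt(-1) = i * i = -1
separate = cmath.sqrt(x) * cmath.sqrt(x)

# Combine under one square root first: sqrt((-1)**2) = sqrt(1) = 1
combined = cmath.sqrt(x**2)

print(separate)  # (-1+0j)
print(combined)  # (1+0j)
```

The two expressions agree for positive $x$, but for negative $x$ they differ by a sign, and a stray sign like that is exactly the kind of thing that can quietly change an answer.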

In this case, it wasn’t as obvious that combining roots would change the donut. It might have been perfectly safe. It took some work to show that indeed, this was the root of the problem. If the roots are instead combined more carefully, then one of the donuts goes away, leaving only the one, true donut.

I’m interested in seeing where this goes, how many different donuts we have to understand and how they might be related. But I’ve also been writing about donuts for the last hour or so, so I’m getting hungry. See you next week!

# Physical Intuition From Physics Experience

One of the most mysterious powers physicists claim is physical intuition. Let the mathematicians have their rigorous proofs and careful calculations. We just need to ask ourselves, “Does this make sense physically?”

It’s tempting to chalk this up to bluster, or physicist arrogance. Sometimes, though, a physicist manages to figure out something that stumps the mathematicians. Edward Witten’s work on knot theory is a classic example, where he used ideas from physics, not rigorous proof, to win one of mathematics’ highest honors.

So what is physical intuition? And what is its relationship to proof?

Let me walk you through an example. I recently saw a talk by someone in my field who might be a master of physical intuition. He was trying to learn about what we call Effective Field Theories, theories that are “effectively” true at some energy but don’t include the details of higher-energy particles. He calculated that there are limits to the effect these higher-energy particles can have, just based on simple cause and effect. To explain the calculation to us, he gave a physical example, of coupled oscillators.

Oscillators are familiar problems for first-year physics students. Objects that go back and forth, like springs and pendulums, tend to obey similar equations. Link two of them together (couple them), and the equations get more complicated, work for a second-year student instead of a first-year one. Such a student will notice that coupled oscillators “repel” each other: their frequencies get farther apart than they would be if they weren’t coupled.
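
Here’s a toy version of that repulsion, a sketch of my own rather than anything from the talk: start with two identical oscillators at the same frequency, couple them, and the normal-mode frequencies split to either side of where they started.

```python
import numpy as np

omega0 = 1.0   # natural frequency of each (identical) oscillator
g = 0.2        # coupling strength, assumed small compared to omega0**2

# A simple symmetric coupling matrix: the equations of motion are x'' = -M x
M = np.array([[omega0**2, -g],
              [-g,        omega0**2]])

# Normal-mode frequencies are the square roots of M's eigenvalues
frequencies = np.sqrt(np.linalg.eigvalsh(M))
print(frequencies)  # ~[0.894, 1.095]: pushed apart, one below and one above omega0
```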

Our seminar speaker wanted us to revisit those second-year-student days, in order to understand how different particles behave in Effective Field Theory. Just as the frequencies of the oscillators repel each other, the energies of particles repel each other: the unknown high-energy particles could only push the energies of the lighter particles we can detect lower, not higher.

This is an example of physical intuition. Examine it, and you can learn a few things about how physical intuition works.

First, physical intuition comes from experience. Using physical intuition wasn’t just a matter of imagining the particles and trying to see what “makes sense”. Instead, it required thinking about similar problems from our experience as physicists: problems that don’t just seem similar on the surface, but are mathematically similar.

Finally, physical intuition can be risky. If the problem is too different then the intuition can lead you astray. The mathematics of coupled oscillators and Effective Field Theories was similar enough for this argument to work, but if it turned out to be different in an important way then the intuition would have backfired, making it harder to find the answer and harder to keep track once it was found.

Physical intuition may seem mysterious. But deep down, it’s just physicists using our experience, comparing similar problems to help keep track of what we need to know. I’m sure chemists, biologists, and mathematicians all have similar stories to tell.

# Newtonmas in Uncertain Times

Three hundred and seventy-eight years ago today (depending on which calendar you use), Isaac Newton was born. For a scientist, that’s a pretty good reason to celebrate.

Last month, our local nest of science historians at the Niels Bohr Archive hosted a Zoom talk by Jed Z. Buchwald, a Newton scholar at Caltech. Buchwald had a story to tell about experimental uncertainty, one where Newton had an important role.

If you’ve ever had a lab course in school, you know experiments never quite go like they’re supposed to. Set a room of twenty students to find Newton’s constant, and you’ll get forty different answers. Whether you’re reading a ruler or clicking a stopwatch, you can never measure anything with perfect accuracy. Each time you measure, you introduce a little random error.

Textbooks’ worth of statistical know-how has cropped up over the centuries to compensate for this error and get closer to the truth. The simplest trick, though, is just to average over multiple experiments. It’s so obvious a choice, taking a thousand little errors and smoothing them out, that you might think people have been averaging this way throughout history.

They haven’t though. As far as Buchwald had found, the first person to average experiments in this way was Isaac Newton.

What did people do before Newton?

Well, what might you do, if you didn’t have a concept of random error? You can still see that each time you measure you get a different result. But you would blame yourself: if you were more careful with the ruler, quicker with the stopwatch, you’d get it right. So you practice, you do the experiment many times, just as you would if you were averaging. But instead of averaging, you just take one result, the one you feel you did carefully enough to count.

Before Newton, this was almost always what scientists did. If you were an astronomer mapping the stars, the positions you published would be the last of a long line of measurements, not an average of the whole set. Some other tricks existed. Tycho Brahe, for example, folded numbers together pair by pair, averaging the first two and then averaging that average with the next one, getting a final result weighted toward the later measurements. But, according to Buchwald, Newton was the first to simply add everything together and divide.
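
To see that weighting explicitly (my own illustration of the procedure, not Buchwald’s numbers), take three measurements $x_1$, $x_2$, $x_3$. Brahe’s pair-by-pair folding gives

$\frac{1}{2}\left(\frac{x_1+x_2}{2}+x_3\right)=\frac{x_1}{4}+\frac{x_2}{4}+\frac{x_3}{2}$

so the last measurement counts twice as much as either of the earlier ones, while the plain average, $\frac{x_1+x_2+x_3}{3}$, weights every measurement equally.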

Even Newton didn’t yet know why this worked. It would take later research, and the theorems of statistics, to establish the full justification. It seems Newton and those who came after him had a vague physics analogy in mind, finding a sort of “center of mass” of different experiments. That doesn’t make much sense, but it worked, well enough for physics as we know it to begin.

So this Newtonmas, let’s thank the scientists of the past. Working piece by piece, concept by concept, they gave us the tools to navigate our uncertain times.

# Congratulations to Roger Penrose, Reinhard Genzel, and Andrea Ghez!

The 2020 Physics Nobel Prize was announced last week, awarded to Roger Penrose for his theorems about black holes and Reinhard Genzel and Andrea Ghez for discovering the black hole at the center of our galaxy.

Of the three, I’m most familiar with Penrose’s work. People had studied black holes before Penrose, but only the simplest of situations, like an imaginary perfectly spherical star. Some wondered whether black holes in nature were limited in this way, if they could only exist under perfectly balanced conditions. Penrose showed that wasn’t true: he proved mathematically that black holes not only can form, they must form, in very general situations. He’s also worked on a wide variety of other things. He came up with “twistor space”, an idea intended for a new theory of quantum gravity that ended up as a useful tool for “amplitudeologists” like me to study particle physics. He discovered a set of four types of tiles such that if you tiled a floor with them the pattern would never repeat. And he has some controversial hypotheses about quantum gravity and consciousness.

I’m less familiar with Genzel and Ghez, but by now everyone should be familiar with what they found. Genzel and Ghez led two teams that peered into the center of our galaxy. By carefully measuring the way stars moved deep in the core, they figured out something we now teach children: that our beloved Milky Way has a dark and chewy center, an enormous black hole around which everything else revolves. Such black holes appear to be a common feature of galaxies; many other galaxies have since been shown to host them as well.

Like last year, I find it a bit odd that the Nobel committee decided to lump these two discoveries together. Both concern black holes, so they’re more closely related than last year’s laureates, but the contexts are quite different: it’s not as if Penrose predicted the black hole in the center of our galaxy. Usually the Nobel committee avoids mathematical work like Penrose’s, except when it’s tied to a particular experimental discovery. It doesn’t look like anyone has gotten a Nobel prize for discovering that black holes exist, so maybe that’s the intent of this one…but Genzel and Ghez were not the first people to find evidence of a black hole. So overall I’m confused. I’d say that Penrose deserved a Nobel Prize, and that Genzel and Ghez did as well, but I’m not sure why they needed to split one with each other.

# At “Antidifferentiation and the Calculation of Feynman Amplitudes”

I was at a conference this week, called Antidifferentiation and the Calculation of Feynman Amplitudes. The conference is a hybrid kind of affair: I attended via Zoom, but there were seven or so people actually there in the room (the room in question being at DESY Zeuthen, near Berlin).

The road to this conference was a bit of a roller-coaster. It was originally scheduled for early March. When the organizers told us they were postponing it, they seemed at the time a little overcautious…until the world proved me, and all of us, wrong. They rescheduled for October, and as more European countries got their infection rates down it looked like the conference could actually happen. We booked rooms at the DESY guest house, but it turned out they needed the space to keep the DESY staff socially distanced, so we quickly switched to a nearby hotel.

Then Europe’s second wave hit. Cases in Denmark started to rise, so Germany imposed a quarantine on entry from Copenhagen and I switched to remote participation. Most of the rest of the participants did too, even several in Germany. For the few still there in person, there are a variety of measures to stop infection, from fixed seats in the conference room to gloves for the coffee machine.

The content has been interesting. It’s an eclectic mix of review talks and talks on recent research, all focused on different ways to integrate (or, as one of the organizers emphasized, antidifferentiate) functions in quantum field theory. I’ve learned about the history of the field, and gotten a better feeling for the bottlenecks in some LHC-relevant calculations.

This week was also the announcement of the Physics Nobel Prize. I’ll do my traditional post on it next week, but for now, congratulations to Penrose, Genzel, and Ghez!

# Which Things Exist in Quantum Field Theory

If you ever think metaphysics is easy, learn a little quantum field theory.

Someone asked me recently about virtual particles. When talking to the public, physicists sometimes explain the behavior of quantum fields with what they call “virtual particles”. They’ll describe forces coming from virtual particles going back and forth, or a bubbling sea of virtual particles and anti-particles popping out of empty space.

The thing is, this is a metaphor. What’s more, it’s a metaphor for an approximation. As physicists, when we draw diagrams with more and more virtual particles, we’re trying to use something we know how to calculate with (particles) to understand something tougher to handle (interacting quantum fields). Virtual particles, at least as you’re probably picturing them, don’t really exist.

I don’t really blame physicists for talking like that, though. Virtual particles are a metaphor, sure, a way to talk about a particular calculation. But so is basically anything we can say about quantum field theory. In quantum field theory, it’s pretty tough to say which things “really exist”.

You might have heard that there are three types of neutrinos, corresponding to the three “generations” of the Standard Model: electron-neutrinos, muon-neutrinos, and tau-neutrinos. Each is produced in particular kinds of reactions: electron-neutrinos, for example, get produced by beta-plus decay, when a proton turns into a neutron, an anti-electron, and an electron-neutrino.

Leave these neutrinos alone though, and something strange happens. Detect what you expect to be an electron-neutrino, and it might have changed into a muon-neutrino or a tau-neutrino. The neutrino oscillated.

Why does this happen?

One way to explain it is to say that electron-neutrinos, muon-neutrinos, and tau-neutrinos don’t “really exist”. Instead, what really exists are neutrinos with specific masses. These don’t have catchy names, so let’s just call them neutrino-one, neutrino-two, and neutrino-three. What we think of as electron-neutrinos, muon-neutrinos, and tau-neutrinos are each some mix (a quantum superposition) of these “really existing” neutrinos, specifically the mixes that interact nicely with electrons, muons, and tau leptons respectively. When you let them travel, it’s these neutrinos that do the traveling, and due to quantum effects that I’m not explaining here you end up with a different mix than you started with.
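
In the simplified two-neutrino version you’ll find in textbooks (ignoring the third neutrino and the complications of the full mixing matrix), that “mix” is just a rotation by a mixing angle $\theta$:

$|\nu_e\rangle=\cos\theta\,|\nu_1\rangle+\sin\theta\,|\nu_2\rangle$

$|\nu_\mu\rangle=-\sin\theta\,|\nu_1\rangle+\cos\theta\,|\nu_2\rangle$

The mass-neutrinos pick up different quantum phases as they travel, which is what reshuffles the mix by the time you detect it.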

This probably seems like a perfectly reasonable explanation. But it shouldn’t. Because if you take one of these mass-neutrinos and let it interact with an electron, a muon, or a tau, then suddenly it behaves like a mix of the old electron-neutrinos, muon-neutrinos, and tau-neutrinos.

That’s because both explanations are trying to chop the world up in a way that can’t be done consistently. There aren’t electron-neutrinos, muon-neutrinos, and tau-neutrinos, and there aren’t neutrino-ones, neutrino-twos, and neutrino-threes. There’s a mathematical object (a vector space) that can look like either.

Whether you’re comfortable with that depends on whether you think of mathematical objects as “things that exist”. If you aren’t, you’re going to have trouble thinking about the quantum world. Maybe you want to take a step back, and say that at least “fields” should exist. But that still won’t do: we can redefine fields, add them together or even use more complicated functions, and still get the same physics. The kinds of things that exist can’t be like this. Instead you end up invoking another kind of mathematical object, equivalence classes.

If you want to be totally rigorous, you have to go a step further. You end up thinking of physics in a very bare-bones way, as the set of all observations you could perform. Instead of describing the world in terms of “these things” or “those things”, the world is a black box, and all you’re doing is finding patterns in that black box.

Is there a way around this? Maybe. But it requires thought, and serious philosophy. It’s not intuitive, it’s not easy, and it doesn’t lend itself well to 3d animations in documentaries. So in practice, whenever anyone tells you about something in physics, you can be pretty sure it’s a metaphor. Nice, describable, non-mathematical things typically don’t exist.

# To Elliptics and Beyond!

I’ve been busy running a conference this week, Elliptics and Beyond.

After Amplitudes was held online this year, a few of us at the Niels Bohr Institute were inspired. We thought this would be the perfect time to hold a small online conference, focused on the Calabi-Yaus that have been popping up lately in Feynman diagrams. Then we heard from the organizers of Elliptics 2020. They had been planning to hold a conference in Mainz about elliptic integrals in Feynman diagrams, but had to postpone it due to the pandemic. We decided to team up and hold a joint conference on both topics: the elliptic integrals that are just starting to be understood, and the mysterious integrals that lie beyond. Hence, Elliptics and Beyond.

The conference has been fun thus far. There’s been a mix of review material bringing people up to speed on elliptic integrals and exciting new developments. Some are taking methods that have been successful in other areas and generalizing them to elliptic integrals, others have been honing techniques for elliptics to make them “production-ready”. A few are looking ahead even further, to higher-genus amplitudes in string theory and Calabi-Yaus in Feynman diagrams.

We organized the conference along similar lines to Zoomplitudes, but with a few experiments of our own. Like Zoomplitudes, we made a Slack space for the conference, so people could chat physics outside the talks. Ours was less active, though. I suspect that kind of space needs a critical mass of people, and with a smaller conference we may just not have gotten there. Having fewer people did allow us a more relaxed schedule, which in turn meant we could mostly keep things on-time. We had discussion sessions in the morning (European time), with talks in the afternoon, so almost everyone could make the talks at least. We also had a “conference dinner”, which went much better than I would have expected. We put people randomly into Zoom Breakout Rooms of five or six, to emulate the tables of an in-person conference, and folks chatted while eating their (self-brought of course) dinner. People seemed to really enjoy the chance to just chat casually with the other folks at the conference. If you’re organizing an online conference soon, I’d recommend trying it!

Holding a conference online means that a lot of people can attend who otherwise couldn’t. We had over a hundred people register, and while not all of them showed up there were typically fifty or sixty people on the Zoom session. Some of these were specialists in elliptics or Calabi-Yaus who wouldn’t ordinarily make it to a conference like this. Others were people from the rest of the amplitudes field who joined for parts of the conference that caught their eye. But surprisingly many weren’t even amplitudeologists, but students and young researchers in a variety of topics from all over the world. Some seemed curious and eager to learn, others I suspect just needed to say they had been to a conference. Both are responding to a situation where suddenly conference after conference is available online, free to join. It will be interesting to see if, and how, the world adapts.

# Zoomplitudes Retrospective

During Zoomplitudes (my field’s big yearly conference, this year on Zoom) I didn’t have time to write a long blog post. I said a bit about the format, but didn’t get a chance to talk about the science. I figured this week I’d go back and give a few more of my impressions. As always, conference posts are a bit more technical than my usual posts, so regulars be warned!

The conference opened with a talk by Gavin Salam, there as an ambassador for LHC physics. Salam pointed out that, while a decent proportion of speakers at Amplitudes mention the LHC in their papers, that fraction has fallen over the years. (Another speaker jokingly wondered which of those mentions were just in the paper’s introduction.) He argued that there is still useful work for us, LHC measurements that will require serious amplitudes calculations to understand. He also brought up what seems like the most credible argument for a new, higher-energy collider: that there are important properties of the Higgs, in particular its interactions, that we still have not observed.

The next few talks hopefully warmed Salam’s heart, as they featured calculations for real-world particle physics. Nathaniel Craig and Yael Shadmi in particular covered the link between amplitudes and Standard Model Effective Field Theory (SMEFT), a method to systematically characterize corrections beyond the Standard Model. Shadmi’s talk struck me because the kind of work she described (building the SMEFT “amplitudes-style”, directly from observable information rather than more complicated proxies) is something I’d seen people speculate about for a while, but which hadn’t been done until quite recently. Now, several groups have managed it, and look like they’ve gotten essentially “all the way there”, rather than just partial results that only manage to replicate part of the SMEFT. Overall it’s much faster progress than I would have expected.

After Shadmi’s talk was a brace of talks on N=4 super Yang-Mills, featuring cosmic Galois theory and an impressively groan-worthy “origin story” joke. The final talk of the day, by Hofie Hannesdottir, covered work with some of my colleagues at the NBI. Due to coronavirus I hadn’t gotten to hear about this in person, so it was good to hear a talk on it, a blend of old methods and new priorities to better understand some old discoveries.

The next day focused on a topic that has grown in importance in our community, calculations for gravitational wave telescopes like LIGO. Several speakers focused on new methods for collisions of spinning objects, where a few different approaches are making good progress (Radu Roiban’s proposal to use higher-spin field theory was particularly interesting) but things still aren’t quite “production-ready”. The older, post-Newtonian method is still very much production-ready, as evidenced by Michele Levi’s talk that covered, among other topics, our recent collaboration. Julio Parra-Martinez discussed some interesting behavior shared by both supersymmetric and non-supersymmetric gravity theories. Thibault Damour had previously expressed doubts about use of amplitudes methods to answer this kind of question, and part of Parra-Martinez’s aim was to confirm the calculation with methods Damour would consider more reliable. Damour (who was actually in the audience, which I suspect would not have happened at an in-person conference) had already recanted some related doubts, but it’s not clear to me whether that extended to the results Parra-Martinez discussed (or whether Damour has stated the problem with his old analysis).

There were a few talks that day that didn’t relate to gravitational waves, though this might have been an accident, since both speakers also work on that topic. Zvi Bern’s talk linked to the previous day’s SMEFT discussion, with a calculation using amplitudes methods of direct relevance to SMEFT researchers. Clifford Cheung’s talk proposed a rather strange/fun idea, conformal symmetry in negative dimensions!

Wednesday was “amplituhedron day”, with a variety of talks on positive geometries and cluster algebras. Featured in several talks was “tropicalization”, a mathematical procedure that can simplify complicated geometries while still preserving essential features. Here, it was used to trim down infinite “alphabets” conjectured for some calculations into a finite set, and in doing so understand the origin of “square root letters”. The day ended with a talk by Nima Arkani-Hamed, who despite offering to bet that he could finish his talk within the half-hour slot took almost twice that. The organizers seemed to have planned for this, since there was one fewer talk that day, and as such the day ended at roughly the usual time regardless.

For lack of a better name, I’ll call Thursday’s theme “celestial”. The day included talks by cosmologists (including approaches using amplitudes-ish methods from Daniel Baumann and Charlotte Sleight, and a curiously un-amplitudes-related talk from Daniel Green), talks on “celestial amplitudes” (amplitudes viewed from the surface of an infinitely distant sphere), and various talks with some link to string theory. I’m including in that last category intersection theory, which has really become its own thing. This included a talk by Simon Caron-Huot about using intersection theory more directly in understanding Feynman integrals, and a talk by Sebastian Mizera using intersection theory to investigate how gravity is Yang-Mills squared. Both gave me a much better idea of the speakers’ goals. In Mizera’s case he’s aiming for something very ambitious. He wants to use intersection theory to figure out when and how one can “double-copy” theories, and might figure out why the procedure “got stuck” at five loops. The day ended with a talk by Pedro Vieira, who gave an extremely lucid and well-presented “blackboard-style” talk on bootstrapping amplitudes.

Friday was a grab-bag of topics. Samuel Abreu discussed an interesting calculation using the numerical unitarity method. It was notable in part because renormalization played a bigger role than it does in most amplitudes work, and in part because they now have a cool logo for their group’s software, Caravel. Claude Duhr and Ruth Britto gave a two-part talk on their work on a Feynman integral coaction. I’d had doubts about the diagrammatic coaction they had worked on in the past because it felt a bit ad-hoc. Now, they’re using intersection theory, and have a clean story that seems to tie everything together. Andrew McLeod talked about our work on a Feynman diagram Calabi-Yau “bestiary”, while Cristian Vergu presented a more rigorous understanding of our “traintrack” integrals.

There are two key elements of a conference that are tricky to do on Zoom. You can’t do a conference dinner, so you can’t do the traditional joke-filled conference dinner speech. The end of the conference is also tricky: traditionally, this is when everyone applauds the organizers and the secretaries are given flowers. As chair for the last session, Lance Dixon stepped up to fill both gaps, with a closing speech that was both a touching tribute to the hard work of organizing the conference and a hilarious pile of in-jokes, including a participation award to Arkani-Hamed for his (unprecedented, as far as I’m aware) perfect attendance.

# The Sum of Our Efforts

I got a new paper out last week, with Andrew McLeod, Henrik Munch, and Georgios Papathanasiou.

A while back, some collaborators and I found an interesting set of Feynman diagrams that we called “Omega”. These Omega diagrams were fun because they let us avoid one of the biggest limitations of particle physics: that we usually have to compute approximations, diagram by diagram, rather than finding an exact answer. For these Omegas, we figured out how to add the whole infinite set of Omega diagrams together, with no approximation.

One implication of this was that, in principle, we now knew the answer for each individual Omega diagram, far past what had been computed before. However, writing down these answers was easier said than done. After some wrangling, we got the answer for each diagram in terms of an infinite sum. But despite tinkering with it for a while, even our resident infinite sum expert Georgios Papathanasiou couldn’t quite sum them up.

Naturally, this made me think the sums would make a great Master’s project.

When Henrik Munch showed up looking for a project, Andrew McLeod and I gave him several options, but he settled on the infinite sums. Impressively, he ended up solving the problem in two different ways!

First, he found an old paper, one none of us had seen before, that gave a general method for solving that kind of infinite sum. When he realized that method was really annoying to program, he took the principle behind it, called telescoping, and came up with his own, simpler method for our particular case.

Picture an old-timey folding telescope. It might be long when fully extended, but when you fold it up each piece fits inside the previous one, resulting in a much smaller object. Telescoping a sum has the same spirit. If each pair of terms in a sum “fit together” (if their difference is simple), you can rearrange them so that most of the difficulty “cancels out” and you’re left with a much simpler sum.
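
The textbook example (much simpler than the sums in our paper) shows the spirit:

$\sum_{n=1}^{N}\frac{1}{n(n+1)}=\sum_{n=1}^{N}\left(\frac{1}{n}-\frac{1}{n+1}\right)=1-\frac{1}{N+1}$

Each term’s $-\frac{1}{n+1}$ cancels the next term’s $\frac{1}{n}$, the pieces fold into each other, and only the two ends survive.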

Henrik’s telescoping idea worked even better than expected. We found that we could do, not just the Omega sums, but other sums in particle physics as well. Infinite sums are a very well-studied field, so it was interesting to find something genuinely new.

The rest of us worked to generalize the result, to check the examples and to put it in context. But the core of the work was Henrik’s. I’m really proud of what he accomplished. If you’re looking for a PhD student, he’s on the market!