# Reality as an Algebra of Observables

Listen to a physicist talk about quantum mechanics, and you’ll hear the word “observable”. Observables are, intuitively enough, things that can be observed. They’re properties that, in principle, one could measure in an experiment, like the position of a particle or its momentum. They’re the kinds of things linked by uncertainty principles, where the better you know one, the worse you know the other.

Some physicists get frustrated by this focus on measurements alone. They think we ought to treat quantum mechanics, not like a black box that produces results, but as information about some underlying reality. Instead of just observables, they want us to look for “beables”: not just things that can be observed, but things that something can be. From their perspective, the way other physicists focus on observables feels like giving up, like those physicists are abandoning their sacred duty to understand the world. Others, like the Quantum Bayesians or QBists, disagree, arguing that quantum mechanics really is, and ought to be, a theory of how individuals get evidence about the world.

I’m not really going to weigh in on that debate; I still don’t feel like I know enough to even write a decent summary. But I do think that one of the instincts on the “beables” side is wrong. If we focus on observables in quantum mechanics, I don’t think we’re doing anything all that unusual. Even in other parts of physics, we can think about reality purely in terms of observations. Doing so isn’t a dereliction of duty: often, it’s the most useful way to understand the world.

When we try to comprehend the world, we always start alone. From our time in the womb, we have only our senses and emotions to go on. With a combination of instinct and inference we start assembling a consistent picture of reality. Philosophers called phenomenologists (not to be confused with the physicists called phenomenologists) study this process in detail, trying to characterize how different things present themselves to an individual consciousness.

For my point here, these details don’t matter so much. That’s because in practice, we aren’t alone in understanding the world. Based on what others say about the world, we conclude they perceive much like we do, and we learn by their observations just as we learn by our own. We can make things abstract: instead of the specifics of how individuals perceive, we think about groups of scientists making measurements. At the end of this train lie observables: things that we as a community could in principle learn, and share with each other, ignoring the details of how exactly we measure them.

If each of these observables were unrelated, just scattered points of data, then we couldn’t learn much. Luckily, they are related. In quantum mechanics, some of these relationships are the uncertainty principles I mentioned earlier. Others relate measurements at different places, or at different times. The fancy way to refer to all these relationships is as an algebra: loosely, it’s something you can “do algebra with”, like you did with numbers and variables in high school. When physicists and mathematicians want to do quantum mechanics or quantum field theory seriously, they often talk about an “algebra of observables”, a formal way of thinking about all of these relationships.
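To make the word “algebra” a little more concrete, here’s a toy sketch in Python (my own illustration, not anything from the quantum foundations literature): spin measurements along different axes are observables, represented by the Pauli matrices, and the fact that their products depend on the order you multiply them is exactly the kind of relationship an uncertainty principle encodes.

```python
import numpy as np

# Pauli matrices: observables for the spin of a single electron
# along the x, y, and z axes (in units of hbar/2).
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

def commutator(a, b):
    """[A, B] = AB - BA. Nonzero means the order of measurement
    matters, so the two observables can't both be pinned down at once."""
    return a @ b - b @ a

# Spin-x and spin-z don't commute: an uncertainty relation lurks here.
print(commutator(sigma_x, sigma_z))
# An observable always commutes with itself: no uncertainty trade-off.
print(commutator(sigma_x, sigma_x))
```

The full “algebra of observables” of a quantum field theory is vastly bigger than two-by-two matrices, but the basic ingredient, products of observables that care about ordering, is the same.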

Focusing on those two things, observables and how they are related, isn’t just useful in the quantum world. It’s an important way to think in other areas of physics too. If you’ve heard people talk about relativity, the focus on measurement screams out, in thought experiments full of abstract clocks and abstract yardsticks. Without this discipline, you find paradoxes, only to resolve them when you carefully track what each person can observe. More recently, physicists in my field have had success computing the chance that particles collide by focusing on the end result, the actual measurements people can make, ignoring what might happen in between to cause that measurement. We can then break measurements down into simpler measurements, or use the structure of simpler measurements to guess more complicated ones. While we have typically done this in quantum theories, that’s not really a limitation: the same techniques make sense for problems in classical physics, like computing the gravitational waves emitted by colliding black holes.

With this in mind, we really can think of reality in those terms: not as a set of beable objects, but as a set of observable facts, linked together in an algebra of observables. Paring things down to what we can know in this way is more honest, and it’s also more powerful and useful. Far from a betrayal of physics, it’s the best advantage we physicists have in our quest to understand the world.

# A Tale of Two Donuts

I’ve got a new paper up this week, with Hjalte Frellesvig, Cristian Vergu, and Matthias Volk, about the elliptic integrals that show up in Feynman diagrams.

You can think of elliptic integrals as integrals over a torus, a curve shaped like the outer crust of a donut.

Integrals like these are showing up more and more in our field, the subject of bigger and bigger conferences. By now, we think we have a pretty good idea of how to handle them, but there are still some outstanding mysteries to solve.

One such mystery came up in a paper in 2017, by Luise Adams and Stefan Weinzierl. They were working with one of the favorite examples of this community, the so-called sunrise diagram (sunrise being a good time to eat donuts). And they noticed something surprising: if they looked at the sunrise diagram in different ways, it was described by different donuts.

What do I mean, different donuts?

The integrals we know best in this field aren’t integrals on a torus, but rather integrals on a sphere. In some sense, all spheres are the same: you can make them bigger or smaller, but they don’t have different shapes, they’re all “sphere-shaped”. In contrast, integrals on a torus are trickier, because toruses can have different shapes. Think about different donuts: some might have a thin ring, others a thicker one, even if the overall donut is the same size. You can’t just scale up one donut and get the other.
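For the curious, there is even a way to put a number on “donut shape”. Each torus has a scale-invariant modulus, built from complete elliptic integrals, which can be computed with nothing fancier than the arithmetic-geometric mean. Here’s a minimal sketch in Python (the function names are my own, made up for illustration):

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean of two positive numbers."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

def ellip_K(k):
    """Complete elliptic integral of the first kind with modulus k,
    via the classic identity K(k) = pi / (2 * agm(1, sqrt(1 - k^2)))."""
    return math.pi / (2 * agm(1.0, math.sqrt(1 - k * k)))

def torus_shape(k):
    """Ratio K(k') / K(k), with k' = sqrt(1 - k^2): a scale-invariant
    measure of the shape of the torus attached to modulus k.
    Different values mean donuts you can't scale into each other."""
    kp = math.sqrt(1 - k * k)
    return ellip_K(kp) / ellip_K(k)

# Two different moduli give genuinely different donuts.
print(torus_shape(0.1), torus_shape(0.9))
```

The special value k² = 1/2 gives a ratio of exactly 1, the “square” donut; other moduli give tori of genuinely different shape, no matter how you rescale them.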

My colleague, Cristian Vergu, was annoyed by this. He’s the kind of person who trusts mathematics like an old friend, one who would never lead him astray. He thought that there must be one answer, one correct donut, one natural way to represent the sunrise diagram mathematically. I was skeptical; I don’t trust mathematics nearly as much as Cristian does. To sort it out, we brought in Hjalte Frellesvig and Matthias Volk, and started trying to write the sunrise diagram every way we possibly could. (Along the way, we threw in another “donut diagram”, the double-box, just to see what would happen.)

Rather than getting a zoo of different donuts, we got a surprise: we kept seeing the same two. And in the end, we stumbled upon the answer Cristian was hoping for: one of these two is, in a meaningful sense, the “correct donut”.

What was wrong with the other donut? It turns out that, when the original two donuts were found, one of them involved a mathematically risky move: combining square roots.

For readers who don’t know what I mean, or why this is risky, let me give a simple example. Everyone else can skip to after the torus gif.

Suppose I am solving a problem, and I find a product of two square roots:

$\sqrt{x}\sqrt{x}$

I could try combining them under the same square root sign, like so:

$\sqrt{x^2}$

That works, if $x$ is positive. But now suppose $x=-1$. Plug in negative one to the first expression, and you get,

$\sqrt{-1}\sqrt{-1}=i\times i=-1$

while in the second,

$\sqrt{(-1)^2}=\sqrt{1}=1$
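For the numerically inclined, Python’s cmath module (which always takes the standard principal branch of the square root) shows the same discrepancy:

```python
import cmath

x = -1
# Multiply the roots separately: sqrt(-1) * sqrt(-1) = i * i = -1
separate = cmath.sqrt(x) * cmath.sqrt(x)
# Combine under one root first: sqrt((-1)^2) = sqrt(1) = 1
combined = cmath.sqrt(x ** 2)
print(separate, combined)  # the two supposedly equal expressions disagree
```

The culprit is the branch cut: the square root of a negative number has to pick between two possible answers, and combining roots can silently change which one you get.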

In this case, it wasn’t as obvious that combining roots would change the donut. It might have been perfectly safe. It took some work to show that indeed, this was the root of the problem. If the roots are instead combined more carefully, then one of the donuts goes away, leaving only the one, true donut.

I’m interested in seeing where this goes, how many different donuts we have to understand and how they might be related. But I’ve also been writing about donuts for the last hour or so, so I’m getting hungry. See you next week!

# This Week, at Scattering-Amplitudes.com

I did a guest post this week, on an outreach site for the Max Planck Institute for Physics. The new Director of their Quantum Field Theory Department, Johannes Henn, has been behind a lot of major developments in scattering amplitudes. He was one of the first to notice just how symmetric N=4 super Yang-Mills is, as well as the first to build the “hexagon functions” that would become my stock-in-trade. He’s also done what we all strive to do, and applied what he learned to the real world, coming up with an approach to differential equations that has become the gold standard for many different amplitudes calculations.

Now in his new position, he has a swanky new outreach site, reached at the conveniently memorable scattering-amplitudes.com and managed by outreach-ologist Sorana Scholtes. They started a fun series recently called “Talking Terms” as a kind of glossary, explaining words that physicists use over and over again. My guest post for them is part of that series. It hearkens all the way back to one of my first posts, defining what “theory” means to a theoretical physicist. It covers something new as well, a phrase I don’t think I’ve ever explained on this blog: “working in a theory”. You can check it out on their site!

# A Physicist New Year

Happy New Year to all!

Physicists celebrate the new year by trying to sneak one last paper in before the year is over. Looking at Facebook last night I saw three different friends preview the papers they had just submitted. The site where these papers appear, arXiv, had seventy new papers this morning, just in the category of theoretical high-energy physics. Of those, nine were in my subfield, or a closely related one.

I’d love to tell you all about these papers (some exciting! some long-awaited!), but I’m still tired from last night and haven’t read them yet. So I’ll just close by wishing you all, once again, a happy new year.

# QCD Meets Gravity 2020, Retrospective

I was at a Zoomference last week, called QCD Meets Gravity, about the many ways gravity can be thought of as the “square” of other fundamental forces. I didn’t have time to write much about the actual content of the conference, so I figured I’d say a bit more this week.

A big theme of this conference, as in the past few years, was gravitational waves. Since LIGO’s first announcement of a successful detection, amplitudeologists have been developing new methods to make predictions for gravitational waves more efficient. It’s a field I’ve dabbled in a bit myself. Last year’s QCD Meets Gravity left me impressed by how much progress had been made, with amplitudeologists already solidly part of the conversation and able to produce competitive results. This year felt like another milestone, in that the amplitudeologists weren’t just catching up with other gravitational wave researchers on the same kinds of problems. Instead, they found new questions that amplitudes are especially well-suited to answer. These included using an old quantum field theory trick to combine two pieces of these calculations (“potential” and “radiation”) that the older community typically has to calculate separately; finding the gravitational wave directly from amplitudes; and finding a few nice calculations that can be used to “generate” the rest.

A large chunk of the talks focused on different “squaring” tricks (or as we actually call them, double-copies). There were double-copies for cosmology and conformal field theory, for the celestial sphere, and even some version of M theory. There were new perspectives on the double-copy, new building blocks and algebraic structures that lie behind it. There were talks on the so-called classical double-copy for space-times, where there have been some strange discoveries (an extra dimension made an appearance) but also a more rigorous picture of where the whole thing comes from, using twistor space. There were not one, but two talks linking the double-copy to the Navier-Stokes equation describing fluids, from two different groups. (I’m really curious whether these perspectives are actually useful for practical calculations about fluids, or just fun to think about.) Finally, while there wasn’t a talk scheduled on this paper, the authors were roped in by popular demand to talk about their work. They claim to have made progress on a longstanding puzzle, how to show that double-copy works at the level of the Lagrangian, and the community was eager to dig into the details.

From there, a grab-bag of talks covered other advancements. There were talks from string theorists and ambitwistor string theorists, from Effective Field Theorists working on gravity and the Standard Model, and on calculations in N=4 super Yang-Mills, QCD, and scalar theories. Simon Caron-Huot delved into how causality constrains the theories we can write down, showing an interesting case where the common assumption that all parameters are close to one is actually justified.

Nima Arkani-Hamed began his talk by saying he’d surprise us, which he certainly did (and not by keeping on time). It’s tricky to explain why his talk was exciting. Compared to his earlier discovery of the Amplituhedron, which worked for a toy model, this is a toy calculation in a toy model. While the Amplituhedron wasn’t based on Feynman diagrams, this can’t even be compared with Feynman diagrams. Instead of expanding in a small coupling constant, this expands in a parameter that by all rights should be equal to one. And instead of positivity conditions, there are negativity conditions. All I can say is that with all of that in mind, it looks like real progress on an important and difficult problem from a totally unanticipated direction. In a speech summing up the conference, Zvi Bern mentioned a few exciting words from Nima’s talk: “nonplanar”, “integrated”, “nonperturbative”. I’d add “differential equations” and “infinite sums of ladder diagrams”. Nima and collaborators are trying to figure out what happens when you sum up all of the Feynman diagrams in a theory. I’ve made progress in the past for diagrams with one “direction”, a ladder that grows as you add more loops, but I didn’t know how to add “another direction” to the ladder. In very rough terms, Nima and collaborators figured out how to add that direction.

I’ve probably left things out here, it was a packed conference! It’s been really fun seeing what the community has cooked up, and I can’t wait to see what happens next.

# QCD Meets Gravity 2020

I’m at another Zoom conference this week, QCD Meets Gravity. This year it’s hosted by Northwestern.

QCD Meets Gravity is a conference series focused on the often-surprising links between quantum chromodynamics on the one hand and gravity on the other. By thinking of gravity as the “square” of forces like the strong nuclear force, researchers have unlocked new calculation techniques and deep insights.

Last year’s conference was very focused on one particular topic, trying to predict the gravitational waves observed by LIGO and VIRGO. That’s still a core topic of the conference, but it feels like there is a bit more diversity in topics this year. We’ve seen a variety of talks on different “squares”: new theories that square to other theories, and new calculations that benefit from “squaring” (even surprising applications to the Navier-Stokes equation!). There are talks on subjects from String Theory to Effective Field Theory, and even a talk on a very different way that “QCD meets gravity”, in collisions of neutron stars.

With still a few more talks to go, expect me to say a bit more next week, probably discussing a few in more detail. (Several people presented exciting work in progress!) Until then, I should get back to watching!

# At “Antidifferentiation and the Calculation of Feynman Amplitudes”

I was at a conference this week, called Antidifferentiation and the Calculation of Feynman Amplitudes. The conference was a hybrid kind of affair: I attended via Zoom, but there were seven or so people actually there in the room (the room in question being at DESY Zeuthen, near Berlin).

The road to this conference was a bit of a roller-coaster. It was originally scheduled for early March. When the organizers told us they were postponing it, they seemed to me at the time a little overcautious…until the world proved me, and all of us, wrong. They rescheduled for October, and as more European countries got their infection rates down it looked like the conference could actually happen. We booked rooms at the DESY guest house, until it turned out they needed the space to keep the DESY staff socially distanced, and we quickly switched to booking at a nearby hotel.

Then Europe’s second wave hit. Cases in Denmark started to rise, so Germany imposed a quarantine on entry from Copenhagen and I switched to remote participation. Most of the rest of the participants did too, even several in Germany. For the few still there in person, there are a variety of measures to stop infection, from fixed seats in the conference room to gloves for the coffee machine.

The content has been interesting. It’s an eclectic mix of review talks and talks on recent research, all focused on different ways to integrate (or, as one of the organizers emphasized, antidifferentiate) functions in quantum field theory. I’ve learned about the history of the field, and gotten a better feeling for the bottlenecks in some LHC-relevant calculations.

This week was also the announcement of the Physics Nobel Prize. I’ll do my traditional post on it next week, but for now, congratulations to Penrose, Genzel, and Ghez!

# To Elliptics and Beyond!

I’ve been busy running a conference this week, Elliptics and Beyond.

After Amplitudes was held online this year, a few of us at the Niels Bohr Institute were inspired. We thought this would be the perfect time to hold a small online conference, focused on the Calabi-Yaus that have been popping up lately in Feynman diagrams. Then we heard from the organizers of Elliptics 2020. They had been planning to hold a conference in Mainz about elliptic integrals in Feynman diagrams, but had to postpone it due to the pandemic. We decided to team up and hold a joint conference on both topics: the elliptic integrals that are just starting to be understood, and the mysterious integrals that lie beyond. Hence, Elliptics and Beyond.

The conference has been fun thus far. There’s been a mix of review material bringing people up to speed on elliptic integrals and exciting new developments. Some are taking methods that have been successful in other areas and generalizing them to elliptic integrals, others have been honing techniques for elliptics to make them “production-ready”. A few are looking ahead even further, to higher-genus amplitudes in string theory and Calabi-Yaus in Feynman diagrams.

We organized the conference along similar lines to Zoomplitudes, but with a few experiments of our own. Like Zoomplitudes, we made a Slack space for the conference, so people could chat physics outside the talks. Ours was less active, though. I suspect that kind of space needs a critical mass of people, and with a smaller conference we may just not have gotten there. Having fewer people did allow us a more relaxed schedule, which in turn meant we could mostly keep things on-time. We had discussion sessions in the morning (European time), with talks in the afternoon, so almost everyone could make the talks at least. We also had a “conference dinner”, which went much better than I would have expected. We put people randomly into Zoom Breakout Rooms of five or six, to emulate the tables of an in-person conference, and folks chatted while eating their (self-brought of course) dinner. People seemed to really enjoy the chance to just chat casually with the other folks at the conference. If you’re organizing an online conference soon, I’d recommend trying it!

Holding a conference online means that a lot of people can attend who otherwise couldn’t. We had over a hundred people register, and while not all of them showed up, there were typically fifty or sixty people on the Zoom session. Some of these were specialists in elliptics or Calabi-Yaus who wouldn’t ordinarily make it to a conference like this. Others were people from the rest of the amplitudes field who joined for parts of the conference that caught their eye. But surprisingly many weren’t even amplitudeologists, but students and young researchers in a variety of topics from all over the world. Some seemed curious and eager to learn, others I suspect just needed to say they had been to a conference. Both are responding to a situation where suddenly conference after conference is available online, free to join. It will be interesting to see if, and how, the world adapts.

# A Non-Amplitudish Solution to an Amplitudish Problem

There was an interesting paper last week, claiming to solve a long-standing problem in my subfield.

I calculate what are called scattering amplitudes, formulas that tell us the chance that two particles scatter off each other. Formulas like these exist for theories like the strong nuclear force, called Yang-Mills theories; they also exist for the hypothetical graviton particles of gravity. One of the biggest insights in scattering amplitude research in the last few decades is that these two types of formulas are tied together: as we like to say, gravity is Yang-Mills squared.

A huge chunk of my subfield grew out of that insight. For one, it’s why some of us think we have something useful to say about colliding black holes. But while it’s been used in a dozen different ways, an important element was missing: the principle was never actually proven (at least, not in the way it’s been used).

Now, a group in the UK and the Czech Republic claims to have proven it.

I say “claims” not because I’m skeptical, but because without a fair bit more reading I don’t think I can judge this one. That’s because neither the group nor the approach they use is “amplitudish”. They aren’t doing what amplitudes researchers would do.

In the amplitudes subfield, we like to write things as much as possible in terms of measurable, “on-shell” particles. This is in contrast to the older approach that writes things instead in terms of more general quantum fields, with formulas called Lagrangians to describe theories. In part, we avoid the older Lagrangian framing to avoid redundancy: there are many different ways to write a Lagrangian for the exact same physics. We have another reason though, which might seem contradictory: we avoid Lagrangians to stay flexible. There are many ways to rewrite scattering amplitudes that make different properties manifest, and some of the strangest ones don’t seem to correspond to any Lagrangian at all.

If you’d asked me before last week, I’d say that “gravity is Yang-Mills squared” was in that category: something you couldn’t make manifest fully with just a Lagrangian, that you’d need some stranger magic to prove. If this paper is right, then that’s wrong: if you’re careful enough you can prove “gravity is Yang-Mills squared” in the old-school, Lagrangian way.

I’m curious how this is going to develop: what amplitudes people will think about it, what will happen as the experts chime in. For now, as mentioned, I’m reserving judgement, except to say “interesting if true”.

# Science as Hermeneutics: Closer Than You’d Think

This post is once again inspired by a Ted Chiang short story. This time, it’s “The Evolution of Human Science”, which imagines a world in which super-intelligent “metahumans” have become incomprehensible to the ordinary humans they’ve left behind. Human scientists in that world practice “hermeneutics”: instead of original research, they try to interpret what the metahumans are doing, reverse-engineering their devices and observing their experiments.

It’s a thought-provoking view of what science in the distant future could become. But it’s also oddly familiar.

You might think I’m talking about machine learning here. It’s true that in recent years people have started using machine learning in science, with occasionally mysterious results. There are even a few cases of physicists using machine learning to suggest some property, say of Calabi-Yau manifolds, and then figuring out how to prove it. It’s not hard to imagine a day when scientists are reduced to just interpreting whatever the AIs throw at them…but I don’t think we’re quite there yet.

Instead, I’m thinking about my own work. I’m a particular type of theoretical physicist. I calculate scattering amplitudes, formulas that tell us the probabilities that subatomic particles collide in different ways. We have a way to calculate these, Feynman’s famous diagrams, but they’re inefficient, so researchers like me look for shortcuts.

How do we find those shortcuts? Often, it’s by doing calculations the old, inefficient way. We use older methods, look at the formulas we get, and try to find patterns. Each pattern is a hint at some new principle that can make our calculations easier. Sometimes we can understand the pattern fully, and prove it should hold. Other times, we observe it again and again and tentatively assume it will keep going, and see what happens if it does.

Either way, this isn’t so different from the hermeneutics scientists practice in the story. Feynman diagrams already “know” every pattern we find, like the metahumans in the story who already know every result the human scientists can discover. But that “knowledge” isn’t in a form we can understand or use. We have to learn to interpret it, to read between the lines and find underlying patterns, to end up with something we can hold in our own heads and put into action with our own hands. The truth may be “out there”, but scientists can’t be content with that. We need to get the truth “in here”. We need to interpret it for ourselves.