After Amplitudes was held online this year, a few of us at the Niels Bohr Institute were inspired. We thought this would be the perfect time to hold a small online conference, focused on the Calabi-Yaus that have been popping up lately in Feynman diagrams. Then we heard from the organizers of Elliptics 2020. They had been planning to hold a conference in Mainz about elliptic integrals in Feynman diagrams, but had to postpone it due to the pandemic. We decided to team up and hold a joint conference on both topics: the elliptic integrals that are just starting to be understood, and the mysterious integrals that lie beyond. Hence, Elliptics and Beyond.
The conference has been fun thus far. There’s been a mix of review material bringing people up to speed on elliptic integrals and exciting new developments. Some are taking methods that have been successful in other areas and generalizing them to elliptic integrals, others have been honing techniques for elliptics to make them “production-ready”. A few are looking ahead even further, to higher-genus amplitudes in string theory and Calabi-Yaus in Feynman diagrams.
We organized the conference along similar lines to Zoomplitudes, but with a few experiments of our own. Like Zoomplitudes, we made a Slack space for the conference, so people could chat physics outside the talks. Ours was less active, though. I suspect that kind of space needs a critical mass of people, and with a smaller conference we may just not have gotten there. Having fewer people did allow us a more relaxed schedule, which in turn meant we could mostly keep things on-time. We had discussion sessions in the morning (European time), with talks in the afternoon, so almost everyone could make the talks at least. We also had a “conference dinner”, which went much better than I would have expected. We put people randomly into Zoom Breakout Rooms of five or six, to emulate the tables of an in-person conference, and folks chatted while eating their (self-brought of course) dinner. People seemed to really enjoy the chance to just chat casually with the other folks at the conference. If you’re organizing an online conference soon, I’d recommend trying it!
Holding a conference online means that a lot of people can attend who otherwise couldn’t. We had over a hundred people register, and while not all of them showed up there were typically fifty or sixty people on the Zoom session. Some of these were specialists in elliptics or Calabi-Yaus who wouldn’t ordinarily make it to a conference like this. Others were people from the rest of the amplitudes field who joined for parts of the conference that caught their eye. But surprisingly many weren’t even amplitudeologists, but students and young researchers in a variety of topics from all over the world. Some seemed curious and eager to learn, others I suspect just needed to say they had been to a conference. Both are responding to a situation where suddenly conference after conference is available online, free to join. It will be interesting to see if, and how, the world adapts.
Listen to a certain flavor of crackpot, or a certain kind of science fiction, and you’ll hear about zero-point energy. Billed as limitless free energy drawn from quantum space-time itself, zero-point energy probably sounds like bullshit. Often it is. But lurking behind the pseudoscience and the fiction is a real physics concept, albeit one that doesn’t really work like those people imagine.
In quantum mechanics, the zero-point energy is the lowest energy a particular system can have. That number doesn’t actually have to be zero, even for empty space. People sometimes describe this in terms of so-called virtual particles, popping up from nothing in particle-antiparticle pairs only to annihilate each other again, contributing energy in the absence of any “real particles”. There’s a real force, the Casimir effect, that gets attributed to this, a force that pulls two metal plates together even with no charge or extra electromagnetic field. The same bubbling of pairs of virtual particles also gets used to explain the Hawking radiation of black holes.
I’d like to try explaining all of these things in a different way, one that might clear up some common misconceptions. To start, let’s talk about, not zero-point energy, but zero-point diagrams.
Feynman diagrams are a tool we use to study particle physics. We start with a question: if some specific particles come together and interact, what’s the chance that some (perhaps different) particles emerge? We answer it by drawing lines representing the particles going in and out, then connecting them in every way allowed by our theory. Finally we translate the diagrams to numbers, to get an estimate for the probability. In particle physics slang, the number of “points” is the total number of particles: particles in, plus particles out. For example, let’s say we want to know the chance that two electrons go in and two electrons come out. That gives us a “four-point” diagram: two in, plus two out. A zero-point diagram, then, means zero particles in, zero particles out.
(Note that this isn’t why zero-point energy is called zero-point energy, as far as I can tell. Zero-point energy is an older term from before Feynman diagrams.)
Remember, each Feynman diagram answers a specific question, about the chance of particles behaving in a certain way. You might wonder, what question does a zero-point diagram answer? The chance that nothing goes to nothing? Why would you want to know that?
To answer, I’d like to bring up some friends of mine, who do something that might sound equally strange: they calculate one-point diagrams, one particle goes to none. This isn’t strange for them because they study theories with defects.
Normally in particle physics, we think about our particles in an empty, featureless space. We don’t have to, though. One thing we can do is introduce features in this space, like walls and mirrors, and try to see what effect they have. We call these features “defects”.
If there’s a defect like that, then it makes sense to calculate a one-point diagram, because your one particle can interact with something that’s not a particle: it can interact with the defect.
You might see where this is going: let’s say you think there’s a force between two walls, that comes from quantum mechanics, and you want to calculate it. You could imagine it involves a diagram like this:
Roughly speaking, this is the kind of thing you could use to calculate the Casimir effect, that mysterious quantum force between metal plates. And indeed, it involves a zero-point diagram.
Here’s the thing, though: metal plates aren’t just “defects”. They’re real physical objects, made of real physical particles. So while you can think of the Casimir effect with a “zero-point diagram” like that, you can also think of it with a normal diagram, more like the four-point diagram I showed you earlier: one that computes, not a force between defects, but a force between the actual electrons and protons that make up the two plates.
A lot of the time when physicists talk about pairs of virtual particles popping up out of the vacuum, they have in mind a picture like this. And often, you can do the same trick, and think about it instead as interactions between physical particles. There’s a story of roughly this kind for Hawking radiation: you can think of a black hole event horizon as “cutting in half” a zero-point diagram, and see pairs of particles going out from the black hole…but you can also do a calculation that looks more like particles interacting with a gravitational field.
This also might help you understand why, contra the crackpots and science fiction writers, zero-point energy isn’t a source of unlimited free energy. Yes, a force like the Casimir effect comes “from the vacuum” in some sense. But really, it’s a force between two particles. And just like the gravitational force between two particles, this doesn’t give you unlimited free power. You have to do the work to move the particles back over and over again, using the same amount of power you gained from the force to begin with. And unlike the forces you’re used to, these are typically very small effects, as usual for something that depends on quantum mechanics. So it’s even less useful for this than more everyday forces.
Why do so many crackpots and authors expect zero-point energy to be a massive source of power? In part, this is due to mistakes physicists made early on.
Sometimes, when calculating a zero-point diagram (or any other diagram), we don’t get a sensible number. Instead, we get infinity. Physicists used to be baffled by this. Later, they understood the situation a bit better, and realized that those infinities were probably just due to our ignorance. We don’t know the ultimate high-energy theory, so it’s possible something happens at high energies to cancel those pesky infinities. Without knowing exactly what happens at those energies, physicists would estimate by using a “cutoff” energy where they expected things to change.
That kind of calculation led to an estimate you might have heard of, that the zero-point energy inside a single light bulb could boil all the world’s oceans. That estimate gives a pretty impressive mental image…but it’s also wrong.
This kind of estimate led to “the worst theoretical prediction in the history of physics”: a value for the cosmological constant, the force that speeds up the expansion of the universe, 120 orders of magnitude higher than what we actually observe (if it isn’t just zero). If there really were enough energy inside each light bulb to boil the world’s oceans, the expansion of the universe would be quite different from what we observe.
At this point, it’s pretty clear there is something wrong with these kinds of “cutoff” estimates. The only unclear part is whether that’s due to something subtle or something obvious. But either way, this particular estimate is just wrong, and you shouldn’t take it seriously. Zero-point energy exists, but it isn’t the magical untapped free energy you hear about in stories. It’s tiny quantum corrections to the forces between particles.
I calculate what are called scattering amplitudes, formulas that tell us the chance that two particles scatter off each other. Formulas like these exist for theories like the strong nuclear force, called Yang-Mills theories; they also exist for the hypothetical graviton particles of gravity. One of the biggest insights in scattering amplitude research in the last few decades is that these two types of formulas are tied together: as we like to say, gravity is Yang-Mills squared.
A huge chunk of my subfield grew out of that insight. For one, it’s why some of us think we have something useful to say about colliding black holes. But while it’s been used in a dozen different ways, an important element was missing: the principle was never actually proven (at least, not in the way it’s been used).
Now, a group in the UK and the Czech Republic claims to have proven it.
I say “claims” not because I’m skeptical, but because without a fair bit more reading I don’t think I can judge this one. That’s because the group, and the approach they use, isn’t “amplitudish”. They aren’t doing what amplitudes researchers would do.
In the amplitudes subfield, we like to write things as much as possible in terms of measurable, “on-shell” particles. This is in contrast to the older approach that writes things instead in terms of more general quantum fields, with formulas called Lagrangians to describe theories. In part, we avoid the older Lagrangian framing to avoid redundancy: there are many different ways to write a Lagrangian for the exact same physics. We have another reason though, which might seem contradictory: we avoid Lagrangians to stay flexible. There are many ways to rewrite scattering amplitudes that make different properties manifest, and some of the strangest ones don’t seem to correspond to any Lagrangian at all.
If you’d asked me before last week, I’d say that “gravity is Yang-Mills squared” was in that category: something you couldn’t make manifest fully with just a Lagrangian, that you’d need some stranger magic to prove. If this paper is right, then that’s wrong: if you’re careful enough you can prove “gravity is Yang-Mills squared” in the old-school, Lagrangian way.
I’m curious how this is going to develop: what amplitudes people will think about it, what will happen as the experts chime in. For now, as mentioned, I’m reserving judgement, except to say “interesting if true”.
Recently, a commenter asked me what physicists mean when they say two forces unify. While typing up a response, I came across this passage, in a science fiction short story by Ted Chiang.
Physics admits of a lovely unification, not just at the level of fundamental forces, but when considering its extent and implications. Classifications like ‘optics’ or ‘thermodynamics’ are just straitjackets, preventing physicists from seeing countless intersections.
This passage sounds nice enough, but I feel like there’s a misunderstanding behind it. When physicists seek after unification, we’re talking about something quite specific. It’s not merely a matter of two topics intersecting, or describing them with the same math. We already plumb intersections between fields, including optics and thermodynamics. When we hope to find a unified theory, we do so because it does something. A real unified theory doesn’t just aid our calculations, it gives us new ways to alter the world.
To show you what I mean, let me start with something physicists already know: electroweak unification.
You might have heard of four fundamental forces: gravity, electromagnetism, the strong nuclear force, and the weak nuclear force. You might have also heard that two of these forces are unified: the electromagnetic force and the weak nuclear force form something called the electroweak force.
What does it mean that these forces are unified? How does it work?
Zoom in far enough, and you don’t see the electromagnetic force and the weak force anymore. Instead you see two different forces, I’ll call them “W” and “B”. You’ll also see the Higgs field. And crucially, you’ll see the “W” and “B” forces interact with the Higgs.
The Higgs field is special because it has what’s called a “vacuum” value. Even in otherwise empty space, there’s some amount of “Higgs-ness” in the background, like the color of a piece of construction paper. This background Higgs-ness is in some sense an accident, just one stable way the universe happens to sit. In particular, it picks out an arbitrary kind of direction: parts of the “W” and “B” forces happen to interact with it, and parts don’t.
Now let’s zoom back out. We could, if we wanted, keep our eyes on the “W” and “B” forces. But that gets increasingly silly. As we zoom out we won’t be able to see the Higgs field anymore. Instead, we’ll just see different parts of the “W” and “B” behaving in drastically different ways, depending on whether or not they interact with the Higgs. It will make more sense to talk about mixes of the “W” and “B” fields, to distinguish the parts that are “lined up” with the background Higgs and the parts that aren’t. It’s like using “fore” and “aft” on a boat. You could use “north” and “south”, but that would get confusing pretty fast.
What are those “mixes” of the “W” and “B” forces? Why, they’re the weak nuclear force and the electromagnetic force!
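For readers who like a little notation, this mixing has a standard textbook form (a sketch in conventional symbols, not anything specific to this post): the photon and the Z boson are the two orthogonal combinations of the “B” field and the neutral part of the “W” fields, rotated by the weak mixing (Weinberg) angle $\theta_W$:

```latex
A_\mu = \cos\theta_W \, B_\mu + \sin\theta_W \, W^3_\mu \qquad \text{(photon)}
\\
Z_\mu = -\sin\theta_W \, B_\mu + \cos\theta_W \, W^3_\mu \qquad \text{(Z boson)}
```

The angle $\theta_W$ measures how the background Higgs happens to be “lined up”: the photon is the combination that doesn’t interact with it (and so stays massless), while the Z is the combination that does (and so gets a mass).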
This, broadly speaking, is the kind of unification physicists look for. It doesn’t have to be a “mix” of two different forces: most of the models physicists imagine start with a single force. But the basic ideas are the same: that if you “zoom in” enough you see a simpler model, but that model is interacting with something that “by accident” picks a particular direction, so that as we zoom out different parts of the model behave in different ways. In that way, you could get from a single force to all the different forces we observe.
That “by accident” is important here, because that accident can be changed. That’s why I said earlier that real unification lets us alter the world.
To be clear, we can’t change the background Higgs field with current technology. The biggest collider we have can just make a tiny, temporary fluctuation (that’s what the Higgs boson is). But one implication of electroweak unification is that, with enough technology, we could. Because those two forces are unified, and because that unification is physical, with a physical cause, it’s possible to alter that cause, to change the mix and change the balance. This is why this kind of unification is such a big deal, why it’s not the sort of thing you can just chalk up to “interpretation” and ignore: when two forces are unified in this way, it lets us do new things.
Mathematical unification is valuable. It’s great when we can look at different things and describe them in the same language, or use ideas from one to understand the other. But it’s not the same thing as physical unification. When two forces really unify, it’s an undeniable physical fact about the world. When two forces unify, it does something.
In the past, what did we know about eel reproduction? What do we know now?
The answer to both questions is, surprisingly little! For those who don’t know the story, I recommend this New Yorker article. Eels turn out to have a quite complicated life cycle, and can only reproduce in the very last stage. Different kinds of eels from all over Europe and the Americas spawn in just one place: the Sargasso Sea. But while researchers have been able to find newborn eels in those waters, and more recently track a few mature adults on their migration back, no-one has yet observed an eel in the act. Biologists may be able to infer quite a bit, but with no direct evidence yet the truth may be even more surprising than they expect. The details of eel reproduction are an ongoing mystery, the “eel question” one of the field’s most enduring.
But of course this isn’t an eel blog. I’m here to answer a different question.
In the past, what did we know about the Higgs boson? What do we know now?
Ask some physicists, and they’ll say that even before the LHC everyone knew the Higgs existed. While this isn’t quite true, it is certainly true that something like the Higgs boson had to exist. Observations of other particles, the W and Z bosons in particular, gave good evidence for some kind of “Higgs mechanism”, that gives other particles mass in a “Higgs-like-way”. A Higgs boson was in some sense the simplest option, but there could have been more than one, or a different sort of process instead. Some of these alternatives may have been sensible, others as silly as believing that eels come from horses’ tails. Until 2012, when the Higgs boson was observed, we really didn’t know.
We also didn’t know one other piece of information: the Higgs boson’s mass. That tells us, among other things, how much energy we need to make one. Physicists were pretty sure the LHC was capable of producing a Higgs boson, but they weren’t sure where or how they’d find it, or how much energy would ultimately be involved.
Now thanks to the LHC, we know the mass of the Higgs boson, and we can rule out some of the “alternative” theories. But there’s still quite a bit we haven’t observed. In particular, we haven’t observed many of the Higgs boson’s couplings.
The couplings of a quantum field are how it interacts, both with other quantum fields and with itself. In the case of the Higgs, interacting with other particles gives those particles mass, while interacting with itself is how it itself gains mass. Since we know the masses of these particles, we can infer what these couplings should be, at least for the simplest model. But, like the eels, the truth may yet surprise us. Nothing guarantees that the simplest model is the right one: what we call simplicity is a judgement based on aesthetics, on how we happen to write models down. Nature may well choose differently. All we can honestly do is parametrize our ignorance.
In the case of the eels, each failure to observe their reproduction deepens the mystery. What are they doing that is so elusive, so impossible to discover? In this, eels are different from the Higgs boson. We know why we haven’t observed the Higgs boson coupling to itself, at least according to our simplest models: we’d need a higher-energy collider, more powerful than the LHC, to see it. That’s an expensive proposition, much more expensive than using satellites to follow eels around the ocean. Because our failure to observe the Higgs self-coupling is itself no mystery, our simplest models could still be correct: as theorists, we probably have it easier than the biologists. But if we want to verify our models in the real world, we have it much harder.
The conference opened with a talk by Gavin Salam, there as an ambassador for LHC physics. Salam pointed out that, while a decent proportion of speakers at Amplitudes mention the LHC in their papers, that fraction has fallen over the years. (Another speaker jokingly wondered which of those mentions were just in the paper’s introduction.) He argued that there is still useful work for us, LHC measurements that will require serious amplitudes calculations to understand. He also brought up what seems like the most credible argument for a new, higher-energy collider: that there are important properties of the Higgs, in particular its interactions, that we still have not observed.
The next few talks hopefully warmed Salam’s heart, as they featured calculations for real-world particle physics. Nathaniel Craig and Yael Shadmi in particular covered the link between amplitudes and Standard Model Effective Field Theory (SMEFT), a method to systematically characterize corrections beyond the Standard Model. Shadmi’s talk struck me because the kind of work she described (building the SMEFT “amplitudes-style”, directly from observable information rather than more complicated proxies) is something I’d seen people speculate about for a while, but which hadn’t been done until quite recently. Now, several groups have managed it, and look like they’ve gotten essentially “all the way there”, rather than just partial results that only manage to replicate part of the SMEFT. Overall it’s much faster progress than I would have expected.
After Shadmi’s talk was a brace of talks on N=4 super Yang-Mills, featuring cosmic Galois theory and an impressively groan-worthy “origin story” joke. The final talk of the day, by Hofie Hannesdottir, covered work with some of my colleagues at the NBI. Due to coronavirus I hadn’t gotten to hear about this in person, so it was good to hear a talk on it, a blend of old methods and new priorities to better understand some old discoveries.
The next day focused on a topic that has grown in importance in our community, calculations for gravitational wave telescopes like LIGO. Several speakers focused on new methods for collisions of spinning objects, where a few different approaches are making good progress (Radu Roiban’s proposal to use higher-spin field theory was particularly interesting) but things still aren’t quite “production-ready”. The older, post-Newtonian method is still very much production-ready, as evidenced by Michele Levi’s talk that covered, among other topics, our recent collaboration. Julio Parra-Martinez discussed some interesting behavior shared by both supersymmetric and non-supersymmetric gravity theories. Thibault Damour had previously expressed doubts about use of amplitudes methods to answer this kind of question, and part of Parra-Martinez’s aim was to confirm the calculation with methods Damour would consider more reliable. Damour (who was actually in the audience, which I suspect would not have happened at an in-person conference) had already recanted some related doubts, but it’s not clear to me whether that extended to the results Parra-Martinez discussed (or whether Damour has stated the problem with his old analysis).
There were a few talks that day that didn’t relate to gravitational waves, though this might have been an accident, since both speakers also work on that topic. Zvi Bern’s talk linked to the previous day’s SMEFT discussion, with a calculation using amplitudes methods of direct relevance to SMEFT researchers. Clifford Cheung’s talk proposed a rather strange/fun idea, conformal symmetry in negative dimensions!
Wednesday was “amplituhedron day”, with a variety of talks on positive geometries and cluster algebras. Featured in several talks was “tropicalization”, a mathematical procedure that can simplify complicated geometries while still preserving essential features. Here, it was used to trim down infinite “alphabets” conjectured for some calculations into a finite set, and in doing so understand the origin of “square root letters”. The day ended with a talk by Nima Arkani-Hamed, who despite offering to bet that he could finish his talk within the half-hour slot took almost twice that. The organizers seemed to have planned for this, since there was one fewer talk that day, and as such the day ended at roughly the usual time regardless.
For lack of a better name, I’ll call Thursday’s theme “celestial”. The day included talks by cosmologists (including approaches using amplitudes-ish methods from Daniel Baumann and Charlotte Sleight, and a curiously un-amplitudes-related talk from Daniel Green), talks on “celestial amplitudes” (amplitudes viewed from the surface of an infinitely distant sphere), and various talks with some link to string theory. I’m including in that last category intersection theory, which has really become its own thing. This included a talk by Simon Caron-Huot about using intersection theory more directly in understanding Feynman integrals, and a talk by Sebastian Mizera using intersection theory to investigate how gravity is Yang-Mills squared. Both gave me a much better idea of the speakers’ goals. In Mizera’s case he’s aiming for something very ambitious. He wants to use intersection theory to figure out when and how one can “double-copy” theories, and might figure out why the procedure “got stuck” at five loops. The day ended with a talk by Pedro Vieira, who gave an extremely lucid and well-presented “blackboard-style” talk on bootstrapping amplitudes.
Friday was a grab-bag of topics. Samuel Abreu discussed an interesting calculation using the numerical unitarity method. It was notable in part because renormalization played a bigger role than it does in most amplitudes work, and in part because they now have a cool logo for their group’s software, Caravel. Claude Duhr and Ruth Britto gave a two-part talk on their work on a Feynman integral coaction. I’d had doubts about the diagrammatic coaction they had worked on in the past because it felt a bit ad-hoc. Now, they’re using intersection theory, and have a clean story that seems to tie everything together. Andrew McLeod talked about our work on a Feynman diagram Calabi-Yau “bestiary”, while Cristian Vergu had a more rigorous understanding of our “traintrack” integrals.
There are two key elements of a conference that are tricky to do on Zoom. You can’t do a conference dinner, so you can’t do the traditional joke-filled conference dinner speech. The end of the conference is also tricky: traditionally, this is when everyone applauds the organizers and the secretaries are given flowers. As chair for the last session, Lance Dixon stepped up to fill both gaps, with a closing speech that was both a touching tribute to the hard work of organizing the conference and a hilarious pile of in-jokes, including a participation award to Arkani-Hamed for his (unprecedented, as far as I’m aware) perfect attendance.
One implication of this was that, in principle, we now knew the answer for each individual Omega diagram, far past what had been computed before. However, writing down these answers was easier said than done. After some wrangling, we got the answer for each diagram in terms of an infinite sum. But despite tinkering with it for a while, even our resident infinite sum expert Georgios Papathanasiou couldn’t quite sum them up.
Naturally, this made me think the sums would make a great Master’s project.
When Henrik Munch showed up looking for a project, Andrew McLeod and I gave him several options, but he settled on the infinite sums. Impressively, he ended up solving the problem in two different ways!
First, he found an old paper none of us had seen before, that gave a general method for solving that kind of infinite sum. When he realized that method was really annoying to program, he took the principle behind it, called telescoping, and came up with his own, simpler method, for our particular case.
Picture an old-timey folding telescope. It might be long when fully extended, but when you fold it up each piece fits inside the previous one, resulting in a much smaller object. Telescoping a sum has the same spirit. If each pair of terms in a sum “fit together” (if their difference is simple), you can rearrange them so that most of the difficulty “cancels out” and you’re left with a much simpler sum.
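These aren’t our actual sums (those are much messier), but here’s a minimal sketch of the idea using the textbook telescoping sum $1/(n(n+1)) = 1/n - 1/(n+1)$, where the cancellation leaves only the first and last pieces:

```python
from fractions import Fraction

def direct_sum(N):
    """Add up 1/(n(n+1)) term by term, for n = 1..N."""
    return sum(Fraction(1, n * (n + 1)) for n in range(1, N + 1))

def telescoped(N):
    """Rewrite each term as 1/n - 1/(n+1): adjacent terms cancel in
    pairs, so only the very first piece (1/1) and the very last
    piece (1/(N+1)) survive. No loop needed."""
    return Fraction(1, 1) - Fraction(1, N + 1)

print(direct_sum(100))  # 100/101
print(telescoped(100))  # 100/101, computed in one step
```

The payoff is the same in spirit as in our paper: a sum that naively needs every term can be collapsed to a handful of boundary pieces, which is what makes infinite versions of such sums tractable.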
Henrik’s telescoping idea worked even better than expected. We found that we could do, not just the Omega sums, but other sums in particle physics as well. Infinite sums are a very well-studied field, so it was interesting to find something genuinely new.
The rest of us worked to generalize the result, to check the examples and to put it in context. But the core of the work was Henrik’s. I’m really proud of what he accomplished. If you’re looking for a PhD student, he’s on the market!
Two weeks ago, I told you that Andrew and Michèle and I had written a paper, predicting what gravitational wave telescopes like LIGO see when black holes collide. You may remember that LIGO doesn’t just see colliding black holes: it sees colliding neutron stars too. So why didn’t we predict what happens when neutron stars collide?
Actually, we did. Our calculation doesn’t just apply to black holes. It applies to neutron stars too. And not just neutron stars: it applies to anything of roughly the right size and shape. Black holes, neutron stars, very large grapefruits…
That’s the magic of Effective Field Theory, the “zoom lens” of particle physics. Zoom out far enough, and any big, round object starts looking like a particle. Black holes, neutron stars, grapefruits, we can describe them all using the same math.
Ok, so we can describe both black holes and neutron stars. Can we tell the difference between them?
In our last calculation, no. In this one, yes!
Effective Field Theory isn’t just a zoom lens, it’s a controlled approximation. That means that when we “zoom out” we don’t just throw out anything “too small to see”. Instead, we approximate it, estimating how big of an effect it can have. Depending on how precise we want to be, we can include more and more of these approximated effects. If our estimates are good, we’ll include everything that matters, and get a good approximation for what we’re trying to observe.
At the precision of our last calculation, a black hole and a neutron star still look exactly the same. Our new calculation aims for a bit higher precision though. (For the experts: we’re at a higher order in spin.) The higher precision means that we can actually see the difference: our result changes for two colliding black holes versus two colliding grapefruits.
So does that mean I can tell you what happens when two neutron stars collide, according to our calculation? Actually, no. That’s not because we screwed up the calculation: it’s because some of the properties of neutron stars are unknown.
The Effective Field Theory of neutron stars has what we call “free parameters”, unknown variables. People have tried to estimate some of these (called “Love numbers” after the mathematician A. E. H. Love), but they depend on the details of how neutron stars work: what stuff they contain, how that stuff is shaped, and how it can move. To find them out, we probably can’t just calculate: we’ll have to measure, observe an actual neutron star collision and see what the numbers actually are.
That’s one of the purposes of gravitational wave telescopes. It’s not (as far as I know) something LIGO can measure. But future telescopes, with more precision, should be able to. By watching two colliding neutron stars and comparing to a high-precision calculation, physicists will better understand what those neutron stars are made of. To make that comparison, they’ll need someone to do the high-precision calculation. And that’s why people like me are involved.
I am an “amplitudeologist”. I work on particle physics calculations, computing “scattering amplitudes” to find the probability that fundamental particles bounce off each other. This sounds like the farthest thing possible from black holes. Nevertheless, the two are tightly linked, through the magic of something called Effective Field Theory.
Effective Field Theory is a kind of “zoom knob” for particle physics. You “zoom out” to some chosen scale, and write down a theory that describes physics at that scale. Your theory won’t be a complete description: you’re ignoring everything that’s “too small to see”. It will, however, be an effective description: one that, at the scale you’re interested in, is effectively true.
Particle physicists usually use Effective Field Theory to go between different theories of particle physics, to zoom out from strings to quarks to protons and neutrons. But you can zoom out even further, all the way out to astronomical distances. Zoom out far enough, and even something as massive as a black hole looks like just another particle.
In this picture, the force of gravity between black holes looks like particles (specifically, gravitons) going back and forth. With this picture, physicists can calculate what happens when two black holes collide with each other, making predictions that can be checked with new gravitational wave telescopes like LIGO.
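As a rough illustration of this picture (the textbook tree-level result, not the higher-precision calculations described below): a single graviton exchanged between two slow-moving masses gives a scattering amplitude whose Fourier transform is just Newton’s potential,

```latex
\mathcal{M}(\mathbf{q}) \approx \frac{16\pi G\, m_1^2 m_2^2}{\mathbf{q}^2},
\qquad
V(r) = -\frac{1}{4 m_1 m_2} \int \frac{d^3 q}{(2\pi)^3}\,
  e^{i \mathbf{q}\cdot\mathbf{r}}\, \mathcal{M}(\mathbf{q})
  = -\frac{G\, m_1 m_2}{r},
```

using $\int \frac{d^3 q}{(2\pi)^3}\, e^{i\mathbf{q}\cdot\mathbf{r}} / \mathbf{q}^2 = 1/(4\pi r)$ (overall signs and normalization depend on conventions). Corrections to this come from diagrams with more loops and, for spinning objects, more powers of spin.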
Researchers have pushed this technique quite far. As the calculations get more and more precise (more and more “loops”), they have gotten more and more challenging. This is particularly true when the black holes are spinning, an extra wrinkle in the calculation that adds a surprising amount of complexity.
That’s where I came in. I can’t compete with the experts on black holes, but I certainly know a thing or two about complicated particle physics calculations. Amplitudeologists, like Andrew McLeod and me, have a grab-bag of tricks that make these kinds of calculations a lot easier. With Michèle Levi’s expertise working with spinning black holes in Effective Field Theory, we were able to combine our knowledge to push beyond the state of the art, to a new level of precision.
This project has been quite exciting for me, for a number of reasons. For one, it’s my first time working with gravitons: despite this blog’s name, I’d never published a paper on gravity before. For another, as my brother quipped when he heard about it, this is by far the most “applied” paper I’ve ever written. I mostly work with a theory called N=4 super Yang-Mills, a toy model we use to develop new techniques. This paper isn’t a toy model: the calculation we did should describe black holes out there in the sky, in the real world. There’s a decent chance someone will use this calculation to compare with actual data, from LIGO or a future telescope. That, in particular, is an absurdly exciting prospect.
Because this was such an applied calculation, it was an opportunity to explore the more applied part of my own field. We ended up using well-known techniques from that corner, but I look forward to doing something more inventive in the future.
Science communication is a gradual process. Anything we say is incomplete, prone to causing misunderstandings. Luckily, we can keep talking, giving a new explanation that corrects those misunderstandings. That, of course, will lead to new misunderstandings, so we explain again, and so on. It sounds fruitless, but in practice our audience nevertheless gets closer and closer to the truth.
I’ve given this kind of explanation before. And when I do, there are two things people often misunderstand. These correspond to two topics which use very similar language, but talk about different things. So this week, I thought I’d get ahead of the game and correct those misunderstandings.
The first misunderstanding: None of that post was quantum.
If wave-particle duality is on your mind, and you see me say particles don’t exist, maybe you think I mean waves exist instead. Maybe when I say “fields”, you think I’m talking about waves. Maybe you think I’m choosing one side of the duality, saying that waves exist and particles don’t.
To be 100% clear: I am not saying that.
Particles and waves, in quantum physics, are both manifestations of fields. Is your field just at one specific point? Then it’s a particle. Is it spread out, with a fixed wavelength and frequency? Then it’s a wave. These are the two concepts connected by wave-particle duality, where the same object can behave differently depending on what you measure. And both of them, to be clear, come from fields. Neither is the kind of thing Democritus imagined.
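For the mathematically inclined, here’s the standard free-field version of that statement (textbook quantum field theory, nothing specific to this post). The same field operator builds both kinds of state:

```latex
\phi(x) = \int \frac{d^3 k}{(2\pi)^3} \frac{1}{\sqrt{2\omega_{\mathbf{k}}}}
  \left( a_{\mathbf{k}}\, e^{-i k \cdot x}
       + a^{\dagger}_{\mathbf{k}}\, e^{i k \cdot x} \right)
```

A state $a^{\dagger}_{\mathbf{k}}|0\rangle$ has a definite wavelength and frequency: that’s the “wave”. A superposition $\int d^3 k\, f(\mathbf{k})\, a^{\dagger}_{\mathbf{k}}|0\rangle$ peaked around one point is the “particle”. Both are excitations of the same field $\phi$.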
The second misunderstanding: This isn’t about on-shell vs. off-shell.
To again be clear: I’m not arguing with Nima here.
Nima (and other people in our field) will sometimes talk about on-shell vs. off-shell as if it were about particles vs. fields. Normal physicists write down a general field and let it be off-shell; we try to do calculations with particles that are on-shell. But once again, on-shell doesn’t mean Democritus-style. We still don’t know what a fully on-shell picture of physics will look like. Chances are it won’t look like the picture of sloshing, omnipresent fields we started with, at least not exactly. But it won’t bring back indivisible, unchangeable atoms. Those are gone, and we have no reason to bring them back.