Tag Archives: black hole

To Measure Something or to Test It

Black holes have been in the news a couple times recently.

On one end, there was the observation of an extremely large black hole in the early universe, at a time when no black holes of that kind were expected to exist. My understanding is this is very much a “big if true” kind of claim: something that could have dramatic implications, but that may simply be a misinterpretation. At the moment, I’m not going to try to work out which it is.

In between, you have a piece by me in Quanta Magazine a couple weeks ago, about tests of whether black holes deviate from general relativity. They don’t, by the way, according to the tests so far.

And on the other end, you have the coverage last week of a “confirmation” (or even “proof”) of the black hole area law.

The black hole area law states that the total area of the event horizons of all black holes will always increase. It’s also known as the second law of black hole thermodynamics, paralleling the second law of thermodynamics that entropy always increases. Hawking proved this as a theorem in 1971, assuming that general relativity holds true.

(That leaves out quantum effects, which indeed can make black holes shrink, as Hawking himself famously later argued.)
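The parallel can be made quantitative. In schematic form (these are the standard textbook statements, not formulas from the new paper):

```latex
% Second law of thermodynamics: total entropy never decreases.
\delta S \geq 0
% Hawking's area theorem: total horizon area never decreases.
\delta A \geq 0
% Bekenstein--Hawking entropy, tying the two together:
S_{\mathrm{BH}} = \frac{k_B\, c^3\, A}{4 G \hbar}
```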

The black hole area law is supposed to hold even when two black holes collide and merge. While the combination may lose energy (leading to gravitational waves that carry energy to us), it will still have greater area, in the end, than the sum of the black holes that combined to make it.
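To see how that can work in practice, here’s a back-of-the-envelope sketch in code. The masses and spin are round GW150914-style numbers I’ve picked for illustration, not figures from the new paper:

```python
import math

# Horizon area of a Kerr black hole in geometric units (G = c = 1),
# with mass m in solar masses and dimensionless spin chi:
#   A = 8 * pi * m^2 * (1 + sqrt(1 - chi^2))
# For a non-spinning (Schwarzschild) hole this reduces to 16 * pi * m^2.
def horizon_area(m, chi=0.0):
    return 8 * math.pi * m**2 * (1 + math.sqrt(1 - chi**2))

# Round GW150914-like numbers: 36 + 29 solar masses in,
# 62 solar masses out (about 3 radiated away as gravitational waves),
# with the remnant spinning at roughly chi = 0.67.
area_before = horizon_area(36) + horizon_area(29)
area_after = horizon_area(62, chi=0.67)

# Mass (and so energy) went down, but total horizon area still went up.
print(area_after > area_before)  # True
```

Even though three solar masses of energy escaped, the combined hole’s area comfortably exceeds the sum of the originals.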

Ok, so that’s the area law. What’s this paper that’s supposed to “finally prove” it?

The LIGO, Virgo, and KAGRA collaborations recently published a paper based on gravitational waves from one particularly clear collision of black holes, which they observed back in January. They compared their measurements to predictions from general relativity, checking two things: whether the measurements agreed with predictions based on the Kerr metric (how space-time around a rotating black hole is supposed to behave), and whether they obeyed the area law.

The first check isn’t so different in purpose from the work I wrote about in Quanta Magazine, just using different methods. In both studies, physicists are looking for deviations from the laws of general relativity, triggered by the highly curved environments around black holes. These deviations could show up in one way or another in any black hole collision, so while you would ideally look for them by scanning over many collisions (as the paper I reported on did), you could do a meaningful test even with just one collision. That kind of check may not be very strenuous (if general relativity is wrong, it’s likely by a very small amount), but it’s still an opportunity, diligently sought, to be proven wrong.

The second check is the one that got the headlines. It also got first billing in the paper title, and a decent amount of verbiage in the paper itself. And if you think about it for more than five minutes, it doesn’t make a ton of sense as presented.

Suppose the black hole area law is wrong, and sometimes black holes lose area when they collide. Even then, you wouldn’t expect them to lose area every time. It’s not like anyone is pondering a reverse black hole area law, where black holes only shrink! So one collision in which the area increased can’t, on its own, confirm the law.

Because of that, I think it’s better to say that LIGO measured the black hole area law for this collision, while they tested whether black holes obey the Kerr metric. In one case, they’re just observing what happened in this one situation. In the other, they can try to draw implications for other collisions.

That doesn’t mean their work wasn’t impressive, but it was impressive for reasons that don’t seem to be getting emphasized. It’s impressive because, prior to this paper, they had not managed to measure the areas of colliding black holes well enough to confirm that they obeyed the area law! The previous collisions looked like they obeyed the law, but once you factor in the experimental error, they couldn’t say so with confidence. The current measurement is better, and can. So the new measurement is interesting not because it confirms a fundamental law of the universe or anything like that…it’s interesting because previous measurements were so bad that they couldn’t even confirm this kind of fundamental law!

That, incidentally, feels like a “missing mood” in pop science. Some things are impressive not because of their amazing scale or awesome implications, but because they are unexpectedly, unintuitively, really really hard to do. These measurements shouldn’t be thought of, or billed, as tests of nature’s fundamental laws. Instead they’re interesting because they highlight what we’re capable of, and what we still need to accomplish.

Amplitudes 2025 This Week

Summer is conference season for academics, and this week my old sub-field held its big yearly conference, called Amplitudes. This year it was at Seoul National University, the first time the conference has been held in Asia.

(I wasn’t there; I don’t go to these anymore. But I’ve been skimming slides in my free time, to give you folks the updates you crave. Be forewarned that conference posts like these get technical fast; I’ll be back to my usual accessible self next week.)

There isn’t a huge amplitudes community in Korea, but it’s bigger than it was back when I got started in the field. Of the organizers, Kanghoon Lee of the Asia Pacific Center for Theoretical Physics and Sangmin Lee of Seoul National University have what I think of as “core amplitudes interests”, like recursion relations and the double-copy. The other Korean organizers are from adjacent areas, work that overlaps with amplitudes but doesn’t show up at the conference each year. There was also a sizeable group of organizers from Taiwan, where there has been a significant amplitudes presence for some time now. I do wonder if Korea was chosen as a compromise between a conference hosted in Taiwan or in mainland China, where there is also quite a substantial amplitudes community.

One thing that impresses me every year is how big, and how sophisticated, the gravitational-wave community in amplitudes has grown. Federico Buccioni’s talk began with a plot that illustrates this well (though that wasn’t his goal):

At the conference Amplitudes, dedicated to the topic of scattering amplitudes, there were almost as many talks with the phrase “black hole” in the title as there were with “scattering” or “amplitudes”! This is for a topic that did not even exist in the subfield when I got my PhD eleven years ago.

With that said, gravitational wave astronomy wasn’t quite as dominant at the conference as Buccioni’s bar chart suggests. There were a few talks each day on the topic: I counted seven in total, excluding any short talks on the subject in the gong show. Spinning black holes were a significant focus, central to Jung-Wook Kim’s, Andres Luna’s and Mao Zeng’s talks (the latter two showing some interesting links between the amplitudes story and classic ideas in classical mechanics) and relevant in several others, with Riccardo Gonzo, Miguel Correia, Ira Rothstein, and Enrico Herrmann’s talks showing not just a wide range of approaches, but an increasing depth of research in this area.

Herrmann’s talk in particular dealt with detector event shapes, a framework that lets physicists think more directly about what a specific particle detector or observer can see. He applied the idea not just to gravitational waves but to quantum gravity and collider physics as well. The latter is historically where this idea has been applied the most thoroughly, as highlighted in Hua Xing Zhu’s talk, where he used them to pick out particular phenomena of interest in QCD.

QCD is, of course, always of interest in the amplitudes field. Buccioni’s talk dealt with the theory’s behavior at high-energies, with a nice example of the “maximal transcendentality principle” where some quantities in QCD are identical to quantities in N=4 super Yang-Mills in the “most transcendental” pieces (loosely, those with the highest powers of pi). Andrea Guerreri’s talk also dealt with high-energy behavior in QCD, trying to address an experimental puzzle where QCD results appeared to violate a fundamental bound all sensible theories were expected to obey. By using S-matrix bootstrap techniques, they clarify the nature of the bound, finding that QCD still obeys it once correctly understood, and conjecture a weird theory that should be possible to frame right on the edge of the bound. The S-matrix bootstrap was also used by Alexandre Homrich, who talked about getting the framework to work for multi-particle scattering.

Heribertus Bayu Hartanto is another recent addition to Korea’s amplitudes community. He talked about a concrete calculation, two-loop five-particle scattering including top quarks, a tricky case that includes elliptic curves.

When amplitudes lead to integrals involving elliptic curves, many standard methods fail. Jake Bourjaily’s talk raised a question he has brought up again and again: what does it mean to do an integral for a new type of function? One possible answer is that it depends on what kind of numerics you can do, and since more general numerical methods can be cumbersome one often needs to understand the new type of function in more detail. In light of that, Stephen Jones’ talk was interesting in taking a common problem often cited with generic approaches (that they have trouble with the complex numbers introduced by Minkowski space) and finding a more natural way in a particular generic approach (sector decomposition) to take them into account. Giulio Salvatori talked about a much less conventional numerical method, linked to the latest trend in Nima-ology, surfaceology. One of the big selling points of the surface integral framework promoted by people like Salvatori and Nima Arkani-Hamed is that it’s supposed to give a clear integral to do for each scattering amplitude, one which should be amenable to a numerical treatment recently developed by Michael Borinsky. Salvatori can currently apply the method only to a toy model (up to ten loops!), but he has some ideas for how to generalize it, which will require handling divergences and numerators.

Other approaches to the “problem of integration” included Anna-Laura Sattelberger’s talk, which presented a method (along with an accompanying software package) for finding differential equations for the kinds of integrals that show up in amplitudes using the mathematical software Macaulay2. Matthias Wilhelm talked about the work I did with him, using machine learning to find better methods for solving integrals with integration-by-parts, an area where two other groups have now also published. Pierpaolo Mastrolia talked about integration-by-parts’ up-and-coming contender, intersection theory, a method which appears to be delving into more mathematical tools in an effort to catch up with its competitor.

Sometimes, one is more specifically interested in the singularities of integrals than their numerics more generally. Felix Tellander talked about a geometric method to pin these down which largely went over my head, but he did have a very nice short description of the approach: “Describe the singularities of the integrand. Find a map representing integration. Map the singularities of the integrand onto the singularities of the integral.”

While QCD and gravity are the applications of choice, amplitudes methods germinate in N=4 super Yang-Mills. Ruth Britto’s talk opened the conference with an overview of progress along those lines before going into her own recent work with one-loop integrals and interesting implications of ideas from cluster algebras. Cluster algebras made appearances in several other talks, including Anastasia Volovich’s talk which discussed how ideas from that corner called flag cluster algebras may give insights into QCD amplitudes, though some symbol letters still seem to be hard to track down. Matteo Parisi covered another idea, cluster promotion maps, which he thinks may help pin down algebraic symbol letters.

The link between cluster algebras and symbol letters is an ongoing mystery where the field is seeing progress. Another symbol letter mystery is antipodal duality, where flipping an amplitude like a palindrome somehow gives another valid amplitude. Lance Dixon has made progress in understanding where this duality comes from, finding a toy model where it can be understood and proved.

Others pushed the boundaries of methods specific to N=4 super Yang-Mills, looking for novel structures. Song He’s talk pushed an older approach by Bourjaily and collaborators up to twelve loops, finding new patterns and connections to other theories and observables. Qinglin Yang bootstrapped Wilson loops with a Lagrangian insertion, adding a side to the polygon used in previous efforts and finding that, much like when you add particles to amplitudes in a bootstrap, the method gets stricter and more powerful. Jaroslav Trnka talked about work he has been doing with “negative geometries”, an odd method descended from the amplituhedron that looks at amplitudes from a totally different perspective, probing a bit of their non-perturbative data. He’s finding more parts of that setup that can be accessed and re-summed, finding, interestingly, that multiple-zeta-values show up in quantities where we know they ultimately cancel out. Livia Ferro also talked about a descendant of the amplituhedron, this time for cosmology, getting differential equations for cosmological observables in a particular theory from a combinatorial approach.

Outside of everybody’s favorite theories, some speakers talked about more general approaches to understanding the differences between theories. Andreas Helset covered work on the geometry of the space of quantum fields in a theory, applying the method to a general framework for characterizing deviations from the standard model called the SMEFT. Jasper Roosmale Nepveu also talked about a general space of theories, thinking about how positivity (a trait linked to fundamental constraints like causality and unitarity) gets tangled up with loop effects, and the implications this has for renormalization.

Soft theorems, universal behavior of amplitudes when a particle has low energy, continue to be a trendy topic, with Silvia Nagy showing how the story continues to higher orders and Sangmin Choi investigating loop effects. Callum Jones talked about one of the more powerful results from the soft limit, Weinberg’s theorem showing the uniqueness of gravity. Weinberg’s proof was set up in Minkowski space, but we may ultimately live in curved, de Sitter space. Jones showed how the ideas Weinberg explored generalize to de Sitter, using some tools from the soft-theorem-inspired field of dS/CFT. Julio Parra-Martinez, meanwhile, tied soft theorems to another trendy topic, higher symmetries, a more general notion of the usual types of symmetries that physicists have explored in the past. Lucia Cordova reported work that was not particularly connected to soft theorems but was connected to these higher symmetries, showing how they interact with crossing symmetry and the S-matrix bootstrap.

Finally, a surprisingly large number of talks linked to Kevin Costello and Natalie Paquette’s work with self-dual gauge theories, where they found exact solutions from a fairly mathy angle. Paquette gave an update on her work on the topic, while Alfredo Guevara talked about applications to black holes, comparing the power of expanding around a self-dual gauge theory to that of working with supersymmetry. Atul Sharma looked at scattering in self-dual backgrounds in work that merges older twistor space ideas with the new approach, while Roland Bittelson talked about calculating around an instanton background.


Also, I had another piece up this week at FirstPrinciples, based on an interview with the (outgoing) president of the Sloan Foundation. I won’t have a “bonus info” post for this one, as most of what I learned went into the piece. But if you don’t know what the Sloan Foundation does, take a look! I hadn’t known they funded Jupyter notebooks and Hidden Figures, or that they introduced Kahneman and Tversky.

How Small Scales Can Matter for Large Scales

For a certain type of physicist, nothing matters more than finding the ultimate laws of nature for its tiniest building-blocks, the rules that govern quantum gravity and tell us where the other laws of physics come from. But because they know very little about those laws at this point, they can predict almost nothing about observations on the larger distance scales we can actually measure.

“Almost nothing” isn’t nothing, though. Theoretical physicists don’t know nature’s ultimate laws. But some things about them can be reasonably guessed. The ultimate laws should include a theory of quantum gravity. They should explain at least some of what we see in particle physics now, explaining why different particles have different masses in terms of a simpler theory. And they should “make sense”, respecting cause and effect, the laws of probability, and Einstein’s overall picture of space and time.

All of these are assumptions, of course. Further assumptions are needed to derive any testable consequences from them. But a few communities in theoretical physics are willing to take the plunge, and see what consequences their assumptions have.

First, there’s the Swampland. String theorists posit that the world has extra dimensions, which can be curled up in a variety of ways to hide from view, with different observable consequences depending on how the dimensions are curled up. This list of different observable consequences is referred to as the Landscape of possibilities. Based on that, some string theorists coined the term “Swampland” to represent an area outside the Landscape, containing observations that are incompatible with quantum gravity altogether, and tried to figure out what those observations would be.

In principle, the Swampland includes the work of all the other communities on this list, since a theory of quantum gravity ought to be consistent with other principles as well. In practice, people who use the term focus on consequences of gravity in particular. The earliest such ideas argued from thought experiments with black holes, finding results that seemed to demand that gravity be the weakest force for at least one type of particle. Later researchers would more frequently use string theory as an example, looking at what kinds of constructions people had been able to make in the Landscape to guess what might lie outside of it. They’ve used this to argue that dark energy might be temporary, and to try to figure out what traits new particles might have.

Second, I should mention naturalness. When talking about naturalness, people often use the analogy of a pen balanced on its tip. While possible in principle, it must have been set up almost perfectly, since any small imbalance would cause it to topple, and that perfection demands an explanation. Similarly, in particle physics, things like the mass of the Higgs boson and the strength of dark energy seem to be carefully balanced, so that a small change in how they were set up would lead to a much heavier Higgs boson or much stronger dark energy. The need for an explanation for the Higgs’ careful balance is why many physicists expected the Large Hadron Collider to discover additional new particles.

As I’ve argued before, this kind of argument rests on assumptions about the fundamental laws of physics. It assumes that the fundamental laws explain the mass of the Higgs, not merely by giving it an arbitrary number but by showing how that number comes from a non-arbitrary physical process. It also assumes that we understand well how physical processes like that work, and what kinds of numbers they can give. That’s why I think of naturalness as a type of argument, much like the Swampland, that uses the smallest scales to constrain larger ones.

Third is a host of constraints that usually go together: causality, unitarity, and positivity. Causality comes from cause and effect in a relativistic universe. Because two distant events can appear to happen in different orders depending on how fast you’re going, any way to send signals faster than light is also a way to send signals back in time, causing all of the paradoxes familiar from science fiction. Unitarity comes from quantum mechanics. If quantum calculations are supposed to give the probability of things happening, those probabilities should make sense as probabilities: for example, they should never go above one.

You might guess that almost any theory would satisfy these constraints. But if you extend a theory to the smallest scales, some theories that otherwise seem sensible end up failing this test. Actually linking things up takes other conjectures about the mathematical form theories can have, conjectures that seem more solid than the ones underlying Swampland and naturalness constraints but that still can’t be conclusively proven. If you trust the conjectures, you can derive restrictions, often called positivity constraints when they demand that some set of observations is positive. There has been a renaissance in this kind of research over the last few years, including arguments that certain speculative theories of gravity can’t actually work.
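To give a flavor of how such arguments run, here’s the classic forward-limit dispersion relation (a textbook-style schematic, glossing over subtleties like loops, massless particles, and gravity). Expand the low-energy 2 → 2 amplitude as A(s) ≈ … + c₂s² + …; analyticity (causality) lets you write c₂ as a contour integral, and the optical theorem (unitarity) then forces it to be positive:

```latex
c_2 \;=\; \frac{1}{2\pi i}\oint \frac{A(s)}{s^3}\,\mathrm{d}s
    \;=\; \frac{2}{\pi}\int_{s_0}^{\infty} \frac{\operatorname{Im} A(s)}{s^3}\,\mathrm{d}s
    \;>\; 0
```

A low-energy theory whose c₂ came out negative couldn’t descend from any causal, unitary theory at the smallest scales: that’s a positivity constraint.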

A Tale of Two Experiments

Before I begin, two small announcements:

First: I am now on bluesky! Instead of having a separate link in the top menu for each social media account, I’ve changed the format so now there are social media buttons in the right-hand sidebar, right under the “Follow” button. Currently, they cover tumblr, twitter, and bluesky, but there may be more in future.

Second, I’ve put a bit more technical advice on my “Open Source Grant Proposal” post, so people interested in proposing similar research can have some ideas about how best to pitch it.

Now, on to the post:


Gravitational wave telescopes are possibly the most exciting research program in physics right now. Big, expensive machines with more on the way in the coming decades, gravitational wave telescopes need both precise theoretical predictions and high-quality data analysis. For some, gravitational wave telescopes have the potential to reveal genuinely new physics, to probe deviations from general relativity that might be related to phenomena like dark matter, though so far no such deviations have been conclusively observed. In the meantime, they’re teaching us new consequences of known physics. For example, the unusual population of black holes observed by LIGO has motivated those who model star clusters to consider processes in which three stars or black holes interact with each other, discovering that these processes are more important than expected.

Particle colliders are probably still exciting to the general public, but for many there is a growing sense of fatigue and disillusionment. Current machines like the LHC are big and expensive, and proposed future colliders would be even costlier and take decades to come online, in addition to requiring a huge amount of effort from the community in terms of precise theoretical predictions and data analysis. Some argue that colliders still might uncover genuinely new physics, deviations from the standard model that might explain phenomena like dark matter, but as no such deviations have yet been conclusively observed people are increasingly skeptical. In the meantime, most people working on collider physics are focused on learning new consequences of known physics. For example, by comparing observed results with theoretical approximations, people have found that certain high-energy processes usually left out of calculations are actually needed to get a good agreement with the data, showing that these processes are more important than expected.

…ok, you see what I did there, right? Was that fair?

There are a few key differences, with implications to keep in mind:

First, collider physics is significantly more expensive than gravitational wave physics. LIGO took about $300 million to build and spends about $50 million a year. The LHC took about $5 billion to build and costs $1 billion a year to run. That cost still puts both well below several other government expenses that you probably consider frivolous (please don’t start arguing about which ones in the comments!), but it does mean collider physics demands a bit of a stronger argument.

Second, the theoretical motivation to expect new fundamental physics out of LIGO is generally considered much weaker than for colliders. A large part of the theoretical physics community thought that they had a good argument why they should see something new at the LHC. In contrast, most theorists have been skeptical of the kinds of modified gravity theories that have dramatic enough effects that one could measure them with gravitational wave telescopes, with many of these theories having other pathologies or inconsistencies that made people wary.

Third, the general public finds astrophysics cooler than particle physics. Somehow, telling people “pairs of black holes collide more often than we thought because sometimes a third star in the neighborhood nudges them together” gets people much more excited than “pairs of quarks collide more often than we thought because we need to re-sum large logarithms differently”, even though I don’t think there’s a real “principled” difference between them. Neither reveals new laws of nature, both are upgrades to our ability to model how real physical objects behave, neither is useful to know for anybody living on Earth in the present day.

With all this in mind, my advice to gravitational wave physicists is to try, as much as possible, not to lean on stories about dark matter and modified gravity. You might learn something, and it’s worth occasionally mentioning that. But if you don’t, you run a serious risk of disappointing people. And you have such a big PR advantage if you just lean on new consequences of bog standard GR, that those guys really should get the bulk of the news coverage if you want to keep the public on your side.

The Machine Learning for Physics Recipe

Last week, I went to a conference on machine learning for physics. Machine learning covers a huge variety of methods and ideas, several of which were on full display. But again and again, I noticed a pattern. The people who seemed to be making the best use of machine learning, the ones who were the most confident in their conclusions and getting the most impressive results, the ones who felt like they had a whole assembly line instead of just a prototype, all of them were doing essentially the same thing.

This post is about that thing. If you want to do machine learning in physics, these are the situations where you’re most likely to see a benefit. You can do other things, and they may work too. But this recipe seems to work over and over again.

First, you need simulations, and you need an experiment.

Your experiment gives you data, and that data isn’t easy to interpret. Maybe you’ve embedded a bunch of cameras in the antarctic ice, and your data tells you when they trigger and how bright the light is. Maybe you’ve surrounded a particle collision with layers of silicon, and your data tells you how much electric charge the different layers absorb. Maybe you’ve got an array of telescopes focused on a black hole far far away, and your data are pixels gathered from each telescope.

You want to infer, from your data, what happened physically. Your cameras in the ice saw signs of a neutrino, you want to know how much energy it had and where it was coming from. Your silicon is absorbing particles, what kind are they and what processes did they come from? The black hole might have the rings predicted by general relativity, but it might have weirder rings from a variant theory.

In each case, you can’t just calculate the answer you need. The neutrino streams past, interacting with the ice and camera positions in unpredictable ways. People can write down clean approximations for particles in the highest-energy part of a collision, but once those particles start cooling down, the process becomes so messy that no straightforward formula describes them. Your telescopes fuzz and pixellate what they see and have to be combined in a complicated way, so there is no one guaranteed answer for what they observed.

In each case, though, you can use simulations. If you specify in advance the energy and path of the neutrino, you can use a computer to predict how much light your cameras should see. If you know what particles you started with, you can run sophisticated particle physics code to see what “showers” of particles you eventually find. If you have the original black hole image, you can fuzz and pixellate and take it apart to match what your array of telescopes will do.

The problem is, for the experiments, you don’t know the physical inputs in advance. And simulations, while cheaper than experiments, aren’t cheap. You can’t run a simulation for every possible input and then check them against the experiments. You need to fill in the gaps: run some simulations, then use some theory, some statistical method or human-tweaked guess, to figure out how to interpret your experiments.

Or, you can use Machine Learning. You train a machine learning model, one well-suited to the task (anything from the old standby of boosted decision trees to an old fad of normalizing flows to the latest hotness of graph neural networks). You run a bunch of simulations, as many as you can reasonably afford, and you use that data for training, making a program that matches the input data you want to find with its simulated results. This program will be less reliable than your simulations, but it will run much faster. If it’s reliable enough, you can use it instead of the old human-made guesses and tweaks. You now have an efficient, reliable way to go from your raw experiment data to the physical questions you actually care about.
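As a cartoon of that pipeline, here’s a toy version where everything is invented for illustration: a one-number “detector”, and a plain polynomial fit standing in for the fancier model classes mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Simulation": map a physical input (say, a neutrino's energy)
# to the detector's raw output (say, total collected light),
# with noise standing in for messy detector physics.
def simulate(energy):
    return 5.0 * energy + 0.1 * energy**2 + rng.normal(0.0, 1.0)

# Run as many simulations as we can afford over the range of
# inputs we expect, recording (raw output, physical input) pairs.
energies = rng.uniform(1.0, 100.0, size=2000)
signals = np.array([simulate(e) for e in energies])

# "Machine learning model": here just a cubic least-squares fit
# mapping raw detector output back to the physical input.
coeffs = np.polyfit(signals, energies, deg=3)

def infer_energy(signal):
    return np.polyval(coeffs, signal)

# A real detector reading can now be turned into a physics
# estimate far faster than inverting the simulation by hand.
measured = simulate(50.0)
print(infer_energy(measured))  # roughly 50
```

The fit only knows what the simulations told it, which is exactly the point: it’s a fast, approximate stand-in for the simulation, run in reverse.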

Crucially, each of the elements in this recipe is essential.

You need a simulation. If you just have an experiment with no simulation, then you don’t have a way to interpret the results, and training a machine to reproduce the experiment won’t tell you anything new.

You need an experiment. If you just have simulations, training a machine to reproduce them also doesn’t tell you anything new. You need some reason to want to predict the results of the simulations, beyond just seeing what happens in between, which the machine can’t reliably tell you anyway.

And you need to not have anything better than the simulation. If you have a theory where you can write out formulas for what happens then you don’t need machine learning, you can interpret the experiments more easily without it. This applies if you’ve carefully designed your experiment to measure something easy to interpret, like the ratio of rates of two processes that should be exactly the same.

These aren’t the only things you need. You also need to do the whole thing carefully enough that you understand your uncertainties well: not just what the machine predicts, but how often it gets things wrong, and whether it’s likely to do something strange when you use it on the actual experiment. But if you can do that, you have a reliable recipe, one many people have followed successfully before. You have a good chance of making things work.

This isn’t the only way physicists can use machine learning. There are people looking into something more akin to what’s called unsupervised learning, where you look for strange events in your data as clues for what to investigate further. And there are people like me, trying to use machine learning on the mathematical side, to guess new formulas and new heuristics. There is likely promise in many of these approaches. But for now, they aren’t a recipe.

Amplitudes 2024, Continued

I’ve now had time to look over the rest of the slides from the Amplitudes 2024 conference, so I can say something about Thursday and Friday’s talks.

Thursday was gravity-focused. Zvi Bern’s review talk was actually a review, a tour of the state of the art in using amplitudes techniques to make predictions for gravitational wave physics. Bern emphasized that future experiments will require much more precision: two more orders of magnitude, which in our lingo amounts to two more “loops”. The current state of the art is three loops, but they’ve been hacking away at four, doing things piece by piece in a way that cleverly also yields publications (for example, they can do just the integrals needed for supergravity, which are simpler). Four loops here is the first time that the Feynman diagrams involve Calabi-Yau manifolds, so they will likely need techniques from some of the folks I talked about last week. Once they have four loops, they’ll want to go to five, since that is the level of precision you need to learn something about the material in neutron stars. The talk covered a variety of other developments, some of which were talked about later on Thursday and some of which were only mentioned here.

Of that day’s other speakers, Stefano De Angelis, Lucile Cangemi, Mikhail Ivanov, and Alessandra Buonanno also focused on gravitational waves. De Angelis talked about the subtleties that show up when you try to calculate gravitational waveforms directly with amplitudes methods, showcasing various improvements to the pipeline there. Cangemi talked about a recurring question with its own list of subtleties, namely how the Kerr metric for spinning black holes emerges from the math of amplitudes of spinning particles. Gravitational waves were the focus of only the second half of Ivanov’s talk, where he talked about how amplitudes methods can clear up some of the subtler effects people try to take into account. The first half was about another gravitational application, that of using amplitudes methods to compute the correlations of galaxy structures in the sky, a field where it looks like a lot of progress can be made. Finally, Buonanno gave the kind of talk she’s given a few times at these conferences, a talk that puts these methods in context, explaining how amplitudes results are packaged with other types of calculations into the Effective-One-Body framework which then is more directly used at LIGO. This year’s talk went into more detail about what the predictions are actually used for, which I appreciated. I hadn’t realized that there have been a handful of black hole collisions discovered by other groups from LIGO’s data, a win for open science! Her slides had a nice diagram explaining what data from the gravitational wave is used to infer what black hole properties, quite a bit more organized than the statistical template-matching I was imagining. 
She explained the logic behind Bern’s statement that gravitational wave telescopes will need two more orders of magnitude, pointing out that that kind of precision is necessary to be sure that something that might appear to be a deviation from Einstein’s theory of gravity is not actually a subtle effect of known physics. Her method is typically adjusted to fit numerical simulations, but she showed that even without that adjustment it now fits the numerics quite well, thanks in part to contributions from amplitudes calculations.

Of the other talks that day, David Kosower’s was the only one that didn’t explicitly involve gravity. Instead, his talk focused on a more general question, namely how to find a well-defined basis of integrals for Feynman diagrams, which turns out to involve some rather subtle mathematics and geometry. This is a topic that my former boss Jake Bourjaily worked on in a different context for some time, and I’m curious whether there is any connection between the two approaches. Oliver Schlotterer gave the day’s second review talk, once again of the “actually a review” kind, covering a variety of recent developments in string theory amplitudes. These include some new pictures of how string theory amplitudes that correspond to Yang-Mills theories “square” to amplitudes involving gravity at higher loops and progress towards going past two loops, the current state of the art for most string amplitude calculations. (For the experts: this does not involve taking the final integral over the moduli space, which is still a big unsolved problem.) He also talked about progress by Sebastian Mizera and collaborators in understanding how the integrals that show up in string theory make sense in the complex plane. This is a problem that people had mostly managed to avoid dealing with because of certain simplifications in the calculations people typically did (no moduli space integration, expansion in the string length), but taking things seriously means confronting it, and Mizera and collaborators found a novel solution to the problem that has already passed a lot of checks. Finally, Tobias Hansen’s talk also related to string theory, specifically in anti-de-Sitter space, where the duality between string theory and N=4 super Yang-Mills lets him and his collaborators do Yang-Mills calculations and see markedly stringy-looking behavior.

Friday began with Kevin Costello, whose not-really-a-review talk dealt with his work with Natalie Paquette showing that one can use an exactly-solvable system to learn something about QCD. This only works for certain rather specific combinations of particles: for example, in order to have three colors of quarks, they need to do the calculation for nine flavors. Still, they managed to do a calculation with this method that had not previously been done with more traditional means, and to me it’s impressive that anything like this works for a theory without supersymmetry. Mina Himwich and Diksha Jain both had talks related to a topic of current interest, “celestial” conformal field theory, a picture that tries to apply ideas from holography, in which a theory on the boundary of a space fully describes the interior, to the “boundary” of flat space, infinitely far away. Himwich talked about a symmetry observed in that research program, and how that symmetry can be seen using more normal methods, which also leads to some suggestions of how the idea might be generalized. Jain, meanwhile, covered a different approach, one in which one sets artificial boundaries in flat space and sees what happens when those boundaries move.

Yifei He described progress in the modern S-matrix bootstrap approach. Previously, this approach had gotten quite general constraints on amplitudes. She tried to do something more specific: predicting the S-matrix for scattering of pions in the real world. By imposing compatibility with knowledge from low energies and high energies, she was able to find a much more restricted space of consistent S-matrices, and these turn out to match experimental results pretty well. Mathieu Giroux addressed an important question for a variety of parts of amplitudes research: how to predict the singularities of Feynman diagrams. He explored a recursive approach to solving Landau’s equations for these singularities, one which seems impressively powerful, in one case being able to find a solution that in text form is approximately the length of Harry Potter. Finally, Juan Maldacena closed the conference by talking about some progress he’s made towards an old idea, that of defining M theory in terms of a theory involving actual matrices. This is a very challenging thing to do, but he is at least able to tackle the simplest possible case, involving correlations between three observations. This had a known answer, so his work serves mostly as a confirmation that the original idea makes sense at at least this level.

LHC Black Hole Reassurance: The Professional Version

A while back I wrote a post trying to reassure you that the Large Hadron Collider cannot create a black hole that could destroy the Earth. If you’re the kind of person who is worried about this kind of thing, you’ve probably heard a variety of arguments: that it hasn’t happened yet, despite the LHC running for quite some time, that it didn’t happen before the LHC with cosmic rays of comparable energy, and that a black hole that small would quickly decay due to Hawking radiation. I thought it would be nice to give a different sort of argument, a back-of-the-envelope calculation you can try out yourself, showing that even if a black hole was produced using all of the LHC’s energy and fell directly into the center of the Earth, and even if Hawking radiation didn’t exist, it would still take longer than the lifetime of the universe to cause any detectable damage. Modeling the black hole as falling through the Earth and just slurping up everything that falls into its event horizon, it wouldn’t even double in size before the stars burn out.
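The estimate went roughly along these lines (the specific numbers here are my own round figures): give a Schwarzschild black hole the LHC’s full collision energy, let it fall through the Earth, and let it sweep up a cylinder of Earth-density matter the width of its event horizon.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electronvolt

# Black hole carrying the LHC's full collision energy (~14 TeV).
E = 14e12 * eV               # joules
m = E / c**2                 # mass, around 2.5e-23 kg
r_s = 2 * G * m / c**2       # Schwarzschild radius, around 4e-50 m

# Sweep up a cylinder of Earth-density matter at roughly free-fall speed.
rho_earth = 5500             # mean density of Earth, kg/m^3
v = 1e4                      # m/s, order of escape velocity
accretion_rate = rho_earth * math.pi * r_s**2 * v  # kg/s

doubling_time = m / accretion_rate  # seconds to double in mass
age_of_universe = 4.4e17            # seconds

print(f"doubling time: {doubling_time:.1e} s")
```

The doubling time comes out around 10^68 seconds, some fifty orders of magnitude longer than the age of the universe, which is the sense in which the black hole “wouldn’t even double in size before the stars burn out.”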

That calculation was extremely simple by physics standards. As it turns out, it was too simple. A friend of mine started thinking harder about the problem, and dug up this paper from 2008: Astrophysical implications of hypothetical stable TeV-scale black holes.

Before the LHC even turned on, the experts were hard at work studying precisely this question. The paper has two authors, Steve Giddings and Michelangelo Mangano. Giddings is an expert on the problem of quantum gravity, while Mangano is an expert on LHC physics, so the two are exactly the dream team you’d ask for to answer this question. Like me, they pretend that black holes don’t decay due to Hawking radiation, and pretend that one falls straight from the LHC to the center of the Earth, for the most pessimistic possible scenario.

Unlike me, but like my friend, they point out that the Earth is not actually a uniform sphere of matter. It’s made up of particles: quarks arranged into nucleons arranged into nuclei arranged into atoms. And a black hole that hits a nucleus will probably not just slurp up an event horizon-sized chunk of the nucleus: it will slurp up the whole nucleus.

This in turn means that the black hole starts out growing much faster. Eventually, it slows down again: once it’s bigger than an atom, it starts gobbling up atoms a few at a time until it is back to slurping up a cylinder of the Earth’s material as it passes through.

But an atom-sized black hole will grow faster than an LHC-energy-sized black hole. How much faster is estimated in the Giddings and Mangano paper, and it depends on the number of dimensions. For eight dimensions, we’re safe. For fewer, they need new arguments.

Wait a minute, you might ask, aren’t there only four dimensions? Is this some string theory nonsense?

Kind of, yes. In order for the LHC to produce black holes, gravity would need to have a much stronger effect than we expect on subatomic particles. That requires something weird, and the most plausible such weirdness people considered at the time were extra dimensions. With extra dimensions of the right size, the LHC might have produced black holes. It’s that kind of scenario that Giddings and Mangano are checking: they don’t know of a plausible way for black holes to be produced at the LHC if there are just four dimensions.

For fewer than eight dimensions, though, they have a problem: the back-of-the-envelope calculation suggests black holes could actually grow fast enough to cause real damage. Here, they fall back on the other type of argument: if this could happen, would it have happened already? They argue that, if the LHC could produce black holes in this way, then cosmic rays could produce black holes when they hit super-dense astronomical objects, such as white dwarfs and neutron stars. Those black holes would eat up the white dwarfs and neutron stars, in the same way one might be worried they could eat up the Earth. But we can observe that white dwarfs and neutron stars do in fact exist, and typically live much longer than they would if they were constantly being eaten by miniature black holes. So we can conclude that any black holes like this don’t exist, and we’re safe.

If you’ve got a smattering of physics knowledge, I encourage you to read through the paper. They consider a lot of different scenarios, much more than I can summarize in a post. I don’t know if you’ll find it reassuring, since they may not cover whatever you happen to be worried about. But it’s a lot of fun seeing how the experts handle the problem.

Amplitudes 2023 Retrospective

I’m back from CERN this week, with a bit more time to write, so I thought I’d share some thoughts about last week’s Amplitudes conference.

One thing I got wrong in last week’s post: I’ve now been told only 213 people actually showed up in person, as opposed to the 250-ish estimate I had last week. This may seem like fewer than Amplitudes in Prague had, but it seems likely they also had fewer show up than appeared on the website. Overall, the field is at least holding steady from year to year, and has definitely grown since before the pandemic (in 2019, an attendance of 175 was already considered very big).

It was cool having a conference in CERN proper, surrounded by the history of European particle physics. The lecture hall had an abstract particle collision carved into the wood, and the visitor center would in principle have had Standard Model coffee mugs were they not sold out until next May. (There was still enough other particle physics swag, Swiss chocolate, and Swiss chocolate that was also particle physics swag.) I’d planned to stay on-site at the CERN hostel, but I ended up appreciating not doing that: the folks who did seemed to end up a bit cooped up by the end of the conference, even with the conference dinner as a chance to get out.

Past Amplitudes conferences have had associated public lectures. This time we had a not-supposed-to-be-public lecture, a discussion between Nima Arkani-Hamed and Beate Heinemann about the future of particle physics. Nima, prominent as an amplitudeologist, also has a long track record of reasoning about what might lie beyond the Standard Model. Beate Heinemann is an experimentalist, one who has risen through the ranks of a variety of different particle physics experiments, ending up well-positioned to take a broad view of all of them.

It would have been fun if the discussion erupted into an argument, but despite some attempts at provocative questions from the audience that was not going to happen, as Beate and Nima have been friends for a long time. Instead, they exchanged perspectives: on what’s coming up experimentally, and what theories could explain it. Both argued that it was best to have many different directions, a variety of experiments covering a variety of approaches. (There wasn’t any evangelism for particular experiments, besides a joking sotto voce mention of a muon collider.) Nima in particular advocated that, whether theorist or experimentalist, you have to have some belief that what you’re doing could lead to a huge breakthrough. If you think of yourself as just a “foot soldier”, covering one set of checks among many, then you’ll lose motivation. I think Nima would agree that this optimism is irrational, but necessary, sort of like how one hears (maybe inaccurately) that most new businesses fail, but someone still needs to start businesses.

Michelangelo Mangano’s talk on Thursday covered similar ground, but with different emphasis. He agrees that there are still things out there worth discovering: that our current model of the Higgs, for instance, is in some ways just a guess: a simplest-possible answer that doesn’t explain as much as we’d like. But he also emphasized that Standard Model physics can be “new physics” too. Just because we know the model doesn’t mean we can calculate its consequences, and there are a wealth of results from the LHC that improve our models of protons, nuclei, and the types of physical situations they partake in, without changing the Standard Model.

We saw an impressive example of this in Gregory Korchemsky’s talk on Wednesday. He presented an experimental mystery, an odd behavior in the correlation of energies of jets of particles at the LHC. These jets can include a very large number of particles, enough to make it very hard to understand them from first principles. Instead, Korchemsky tried out our field’s favorite toy model, where such calculations are easier. By modeling the situation in the limit of a very large number of particles, he was able to reproduce the behavior of the experiment. The result was a reminder of what particle physics was like before the Standard Model, and what it might become again: partial models to explain odd observations, a quest to use the tools of physics to understand things we can’t just a priori compute.

On the other hand, amplitudes does do a priori computations pretty well as well. Fabrizio Caola’s talk opened the conference by reminding us just how much our precise calculations can do. He pointed out that the LHC has only gathered 5% of its planned data, and already it is able to rule out certain types of new physics to fairly high energies (by ruling out indirect effects, that would show up in high-precision calculations). One of those precise calculations featured in the next talk, by Giulio Gambuti. (A FORM user, he provided the diagrams that were the basis for the header image of my Quanta article last winter.) Tiziano Peraro followed up with a technique meant to speed up these kinds of calculations, a trick to simplify one of the more computationally intensive steps in intersection theory.

The rest of Monday was more mathematical, with talks by Zeno Capatti, Jaroslav Trnka, Chia-Kai Kuo, Anastasia Volovich, Francis Brown, Michael Borinsky, and Anna-Laura Sattelberger. Borinsky’s talk felt the most practical, a refinement of his numerical methods complete with some actual claims about computational efficiency. Francis Brown discussed an impressively powerful result, a set of formulas that manages to unite a variety of invariants of Feynman diagrams under a shared explanation.

Tuesday began with what I might call “visitors”: people from adjacent fields with an interest in amplitudes. Alday described how the duality between string theory in AdS space and super Yang-Mills on the boundary can be used to get quite concrete information about string theory, calculating how the theory’s amplitudes are corrected by the curvature of AdS space using a kind of “bootstrap” method that felt nicely familiar. Tim Cohen talked about a kind of geometric picture of theories that extend the Standard Model, including an interesting discussion of whether it’s really “geometric”. Marko Simonovic explained how the integration techniques we develop in scattering amplitudes can also be relevant in cosmology, especially for the next generation of “sky mappers” like the Euclid telescope. This talk was especially interesting to me since this sort of cosmology has a significant presence at CEA Paris-Saclay. Along those lines an interesting paper, “Cosmology meets cohomology”, showed up during the conference. I haven’t had a chance to read it yet!

Just before lunch, we had David Broadhurst give one of his inimitable talks, complete with number theory, extremely precise numerics, and literary and historical references (apparently, Källén died flying his own plane). He also remedied a gap in our whimsically biological diagram naming conventions, renaming the pedestrian “double-box” as a (in this context, Orwellian) lobster. Karol Kampf described unusual structures in a particular Effective Field Theory, while Henriette Elvang’s talk addressed what would become a meaningful subtheme of the conference, where methods from the mathematical field of optimization help amplitudes researchers constrain the space of possible theories. Giulia Isabella covered another topic on this theme later in the day, though one of her group’s selling points is managing to avoid quite so heavy-duty computations.

The other three talks on Tuesday dealt with amplitudes techniques for gravitational wave calculations, as did the first talk on Wednesday. Several of the calculations only dealt with scattering black holes, instead of colliding ones. While some of the results can be used indirectly to understand the colliding case too, a method to directly calculate behavior of colliding black holes came up again and again as an important missing piece.

The talks on Wednesday had to start late, owing to a rather bizarre power outage (the lights in the room worked fine, but not the projector). Since Wednesday was the free afternoon (home of quickly sold-out CERN tours), this meant there were only three talks: Veneziano’s talk on gravitational scattering, Korchemsky’s talk, and Nima’s talk. Nima famously never finishes on time, and this time attempted to control his timing via the surprising method of presenting, rather than one topic, five “abstracts” on recent work that he had not yet published. Even more surprisingly, this almost worked, and he didn’t run too ridiculously over time, while still managing to hint at a variety of ways that the combinatorial lessons behind the amplituhedron are gradually yielding useful perspectives on more general realistic theories.

Thursday, Andrea Puhm began with a survey of celestial amplitudes, a topic that tries to build the same sort of powerful duality used in AdS/CFT but for flat space instead. They’re gradually tackling the weird, sort-of-theory they find on the boundary of flat space. The two next talks, by Lorenz Eberhardt and Hofie Hannesdottir, shared a collaborator in common, namely Sebastian Mizera. They also shared a common theme, taking a problem most people would have assumed was solved and showing that approaching it carefully reveals extensive structure and new insights.

Cristian Vergu, in turn, delved deep into the literature to build up a novel and unusual integration method. We’ve chatted quite a bit about it at the Niels Bohr Institute, so it was nice to see it get some attention on the big stage. We then had an afternoon of trips beyond polylogarithms, with talks by Anne Spiering, Christoph Nega, and Martijn Hidding, each pushing the boundaries of what we can do with our hardest-to-understand integrals. Einan Gardi and Ruth Britto finished the day, with a deeper understanding of the behavior of high-energy particles and a new more mathematically compatible way of thinking about “cut” diagrams, respectively.

On Friday, João Penedones gave us an update on a technique with some links to the effective field theory-optimization ideas that came up earlier, one that “bootstraps” whole non-perturbative amplitudes. Shota Komatsu talked about an intriguing variant of the “planar” limit, one involving large numbers of particles and a slick re-writing of infinite sums of Feynman diagrams. Grant Remmen and Cliff Cheung gave a two-parter on a bewildering variety of things that are both surprisingly like, and surprisingly unlike, string theory: important progress towards answering the question “is string theory unique?”

Friday afternoon brought the last three talks of the conference. James Drummond had more progress trying to understand the symbol letters of supersymmetric Yang-Mills, while Callum Jones showed how Feynman diagrams can apply to yet another unfamiliar field, the study of vortices and their dynamics. Lance Dixon closed the conference without any Greta Thunberg references, but with a result that explains last year’s mystery of antipodal duality. The explanation involves an even more mysterious property called antipodal self-duality, so we’re not out of work yet!

Another Window on Gravitational Waves

If you follow astronomers on twitter, you may have heard some rumblings. For the last week or so, a few big collaborations have been hyping up an announcement of “something big”.

Those who knew who those collaborations were could guess the topic. Everyone else found out on Wednesday, when the alphabet soup of NANOGrav, EPTA, PPTA, CPTA, and InPTA announced detection of a gravitational wave background.

These guys

Who are these guys? And what have they found?

You’ll notice the letters “PTA” showing up again and again here. PTA doesn’t stand for Parent-Teacher Association, but for Pulsar Timing Array. Pulsar timing arrays keep track of pulsars, special neutron stars that spin around, shooting out jets of light. The ones studied by PTAs spin so regularly that we can use them as a kind of cosmic clock, counting time by when their beams hit our telescopes. They’re so regular that, if we see them vary, the best explanation isn’t that their spinning has changed: it’s that space-time itself has.

Because of that, we can use pulsar timing arrays to detect subtle shifts in space and time, ripples in the fabric of the universe caused by enormous gravitational waves. That’s what all these collaborations are for: the Indian Pulsar Timing Array (InPTA), the Chinese Pulsar Timing Array (CPTA), the Parkes Pulsar Timing Array (PPTA), the European Pulsar Timing Array (EPTA), and the North American Nanohertz Observatory for Gravitational Waves (NANOGrav).

For a nice explanation of what they saw, read this twitter thread by Katie Mack, who unlike me is actually an astronomer. NANOGrav, in typical North American fashion, is talking the loudest about it, but in this case they kind of deserve it. They have the most data, fifteen years of measurements, letting them make the clearest case that they are actually seeing evidence of gravitational waves. (And not, as an earlier measurement of theirs saw, Jupiter.)

We’ve seen evidence of gravitational waves before of course, most recently from the gravitational wave observatories LIGO and VIRGO. LIGO and VIRGO could pinpoint their results to colliding black holes and neutron stars, estimating where they were and how massive. The pulsar timing arrays can’t quite do that yet, even with fifteen years of data. They expect that the waves they are seeing come from colliding black holes as well, but much larger ones: with pulsars spread over a galaxy, the effects they detect are from black holes big enough to be galactic cores. Rather than one at a time, they would see a chorus of many at once, a gravitational wave background (though not to be confused with a cosmic gravitational wave background: this would be from black holes close to the present day, not from the origin of the universe). If it is this background, then they’re seeing a bit more from the super-massive black holes than people expected. But for now, they’re not sure: they can show they’re seeing gravitational waves, but so far not much more.

With that in mind, it’s best to view the result, impressive as it is, as a proof of principle. Much as LIGO showed, not that gravitational waves exist at all, but that it is possible for us to detect them, these pulsar timing arrays have shown that it is possible to detect the gravitational wave background on these vast scales. As the different arrays pool their data and gather more, the technique will become more and more useful. We’ll start learning new things about the life-cycles of black holes and galaxies, about the shape of the universe, and maybe if we’re lucky some fundamental physics too. We’ve opened up a new window, making sure it’s bright enough we can see. Now we can sit back, and watch the universe.

Solutions and Solutions

The best misunderstandings are detective stories. You can notice when someone is confused, but digging up why can take some work. If you manage, though, you learn much more than just how to correct the misunderstanding. You learn something about the words you use, and the assumptions you make when using them.

Recently, someone was telling me about a book they’d read on Karl Schwarzschild. Schwarzschild is famous for discovering the equations that describe black holes, based on Einstein’s theory of gravitation. To make the story more dramatic, he did so only shortly before dying from a disease he caught fighting in the first World War. But this person had the impression that Schwarzschild had done even more. According to this person, the book said that Schwarzschild had done something to prove Einstein’s theory, or to complete it.

Another Schwarzschild accomplishment: that mustache

At first, I thought the book this person had read was wrong. But after some investigation, I figured out what happened.

The book said that Schwarzschild had found the first exact solution to Einstein’s equations. That’s true, and as a physicist I know precisely what it means. But I now realize that the average person does not.

In school, the first equations you solve are algebraic, x+y=z. Some equations, like x^2=4, have solutions. Others, like x^2=-4, seem not to, until you learn about new types of numbers that solve them. Either way, you get used to equations being like a kind of puzzle, a question for which you need to find an answer.

If you’re thinking of equations like that, then it probably sounds like Schwarzschild “solved the puzzle”. If Schwarzschild found the first solution to Einstein’s equation, that means that Einstein did not. That makes it sound like Einstein’s work was incomplete, that he had asked the right question but didn’t yet know the right answer.

Einstein’s equations aren’t algebraic equations, though. They’re differential equations. Instead of equations for a variable, they’re equations for a mathematical function, a formula that, in this case, describes the curvature of space and time.

Scientists in many fields use differential equations, but they use them in different ways. If you’re a chemist or a biologist, it might be that you’re most used to differential equations with simple solutions, like sines, cosines, or exponentials. You learn how to solve these equations, and they feel a bit like the algebraic ones: you have a puzzle, and then you solve the puzzle.

Other fields, though, have tougher differential equations. If you’re a physicist or an engineer, you’ve likely met differential equations that you can’t treat in this way. If you’re dealing with fluid mechanics, or general relativity, or even just Newtonian gravity in an odd situation, you can’t usually solve the problem by writing down known functions like sines and cosines.

That doesn’t mean you can’t solve the problem at all, though!

Even if you can’t write down a solution to a differential equation with sines and cosines, a solution can still exist. (In some cases, we can even prove a solution exists!) It just won’t be written in terms of sines and cosines, or other functions you’ve learned in school. Instead, the solution will involve some strange functions, functions no-one has heard of before.

If you want, you can make up names for those functions. But unless you’re going to classify them in a useful way, there’s not much point. Instead, you work with these functions by approximation. You calculate them in a way that doesn’t give you the full answer, but that does let you estimate how close you are. That’s good enough to give you numbers, which in turn is good enough to compare to experiments. With just an approximate solution, like this, Einstein could check if his equations described the orbit of Mercury.
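To make “approximate solution” concrete, here’s a stand-in example (simpler than anything in general relativity): the pendulum equation θ'' = −sin θ has no solution in terms of the functions you learn in school, but you can still step it forward numerically, and by shrinking the step size you can estimate how close you are.

```python
import math

# Pendulum: theta'' = -sin(theta). No elementary closed-form solution,
# but easy to solve approximately by stepping forward in time.
def solve_pendulum(theta0, t_end, dt):
    theta, omega = theta0, 0.0
    for _ in range(int(t_end / dt)):
        # Symplectic Euler update: kick the velocity, then drift the angle.
        omega -= math.sin(theta) * dt
        theta += omega * dt
    return theta

# Comparing two step sizes estimates how accurate the answer is,
# even though we never write the solution down in closed form.
coarse = solve_pendulum(1.0, 10.0, 1e-3)
fine = solve_pendulum(1.0, 10.0, 5e-4)
print(coarse, fine, abs(coarse - fine))
```

That closeness estimate is what turns an approximation into a usable answer: it gives you numbers with error bars, which is what you need to compare to an experiment.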

Once you know you can find these approximate solutions, you have a different perspective on equations. An equation isn’t just a mysterious puzzle. If you can approximate the solution, then you already know how to solve that puzzle. So we wouldn’t think of Einstein’s theory as incomplete because he was only able to find approximate solutions: for a theory as complicated as Einstein’s, that’s perfectly normal. Most of the time, that’s all we need.

But it’s still pretty cool when you don’t have to do this. Sometimes, we can not just approximate, but actually “write down” the solution, either using known functions or well-classified new ones. We call a solution like that an analytic solution, or an exact solution.

That’s what Schwarzschild managed. These kinds of exact solutions often only work in special situations, and Schwarzschild’s is no exception. His Schwarzschild solution works for matter in a special situation, arranged in a perfect sphere. If matter happened to be arranged in that way, then the shape of space and time would be exactly as Schwarzschild described it.
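For the record, here is the solution Schwarzschild found, written in modern notation (G is Newton’s constant, M the mass of the spherical body, c the speed of light):

```latex
ds^2 = -\left(1 - \frac{2GM}{rc^2}\right)c^2\,dt^2
       + \left(1 - \frac{2GM}{rc^2}\right)^{-1}dr^2
       + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)
```

Every term is built from known functions of the coordinates, which is exactly what makes it an exact solution rather than an approximation.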

That’s actually pretty cool! Einstein’s equations are complicated enough that no-one was sure that there were any solutions like that, even in very special situations. Einstein expected it would be a long time before anyone could do anything more than approximate the solutions.

(If Schwarzschild’s solution only describes matter arranged in a perfect sphere, why do we think it describes real black holes? This took later work, by people like Roger Penrose, who figured out that matter compressed far enough will always find a solution like Schwarzschild’s.)

Schwarzschild intended to describe stars with his solution, or at least a kind of imaginary perfect star. What he found was indeed a good approximation to real stars, but also the possibility that a star shoved into a sufficiently small space would become something weird and new, something we would come to describe as a black hole. That’s a pretty impressive accomplishment, especially for someone on the front lines of World War One. And if you know the difference between an exact solution and an approximate one, you have some idea of what kind of accomplishment that is.