HAMLET-Physics 2024

Back in January, I announced I was leaving France and leaving academia. Since then, it hasn’t made much sense for me to go to conferences, even the big conference of my sub-field or the conference I organized.

I did go to a conference this week, though. I had two excuses:

  1. The conference was here in Copenhagen, so no travel required.
  2. The conference was about machine learning.

HAMLET-Physics, or How to Apply Machine Learning to Experimental and Theoretical Physics, had the additional advantage of having an amusing acronym. Thanks to generous support from Carlsberg and the Danish Data Science Academy, they could back up their choice by taking everyone on a tour of Kronborg (better known in the English-speaking world as Elsinore).

This conference’s purpose was to bring together physicists who use machine learning, machine learning-ists who might have something useful to say to those physicists, and other physicists who don’t use machine learning yet but have a sneaking suspicion they might have to at some point. As a result, the conference was super-interdisciplinary, with talks by people addressing very different problems with very different methods.

Interdisciplinary conferences are tricky. It’s easy for the different groups of people to just talk past each other: everyone shows up, gives the same talk they always do, socializes with the same friends they always meet, then leaves.

There were a few talks that fit that mold, so technical that only a few people understood them. But most were better. The majority of the speakers did really well at presenting their work in a way that would be understandable and even exciting to people outside their field, while still having enough detail that we all learned something. I was particularly impressed by Thea Aarestad’s keynote talk on Tuesday, a really engaging view of how machine learning can be used under the extremely tight time constraints within which LHC experiments must decide whether to record incoming data.

For the social aspect, the organizers had a cute/gimmicky/machine-learning-themed solution. Based on short descriptions and our public research profiles, they clustered attendees, plotting the connections between them. They then used ChatGPT to write conversation prompts between any two people on the basis of their shared interests. In practice, this turned out to be amusing but totally unnecessary. We were drawn to speak to each other not by conversation prompts, but by a drive to learn from each other. “Why do you do it that way?” was a powerful conversation-starter, as was “what’s the best way to do this?” Despite the different fields, the shared methodologies gave us strong reasons to talk, and meant that people were very rarely motivated to pick one of ChatGPT’s “suggestions”.

Overall, I got a better feeling for how machine learning is useful in physics (and am planning a post on that in future). I also got some fresh ideas for what to do myself, and a bit of a picture of what the future holds in store.

Why Quantum Gravity Is Controversial

Merging quantum mechanics and gravity is a famously hard physics problem. Explaining why merging quantum mechanics and gravity is hard is, in turn, a very hard science communication problem. The more popular descriptions tend to lead to misunderstandings, and I’ve posted many times over the years to chip away at those misunderstandings.

Merging quantum mechanics and gravity is hard…but despite that, there are proposed solutions. String Theory is supposed to be a theory of quantum gravity. Loop Quantum Gravity is supposed to be a theory of quantum gravity. Asymptotic Safety is supposed to be a theory of quantum gravity.

One of the great virtues of science and math is that we are, eventually, supposed to agree. Philosophers and theologians might argue to the end of time, but in math we can write down a proof, and in science we can do an experiment. If we don’t yet have the proof or the experiment, then we should reserve judgement. Either way, there’s no reason to get into an unproductive argument.

Despite that, string theorists and loop quantum gravity theorists and asymptotic safety theorists, famously, like to argue! There have been bitter, vicious, public arguments about the merits of these different theories, and decades of research doesn’t seem to have resolved them. To an outside observer, this makes quantum gravity seem much more like philosophy or theology than like science or math.

Why is there still controversy in quantum gravity? We can’t do quantum gravity experiments, sure, but if that were the problem physicists could just write down the possibilities and leave it at that. Why argue?

Some of the arguments are for silly aesthetic reasons, or motivated by academic politics. Some are arguments about which approaches are likely to succeed in future, which as always is something we can’t actually reliably judge. But the more justified arguments, the strongest and most durable ones, are about a technical challenge. They’re about something called non-perturbative physics.

Most of the time, when physicists use a theory, they’re working with an approximation. Instead of the full theory, they’re making an assumption that makes the theory easier to use. For example, if you assume that the velocity of an object is small, you can use Newtonian physics instead of special relativity. Often, physicists can systematically relax these assumptions, including more and more of the behavior of the full theory and getting a better and better approximation to the truth. This process is called perturbation theory.
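As a miniature illustration of that process (my own toy example, not one from the post): the exact relativistic kinetic energy can be expanded in powers of the small quantity (v/c)^2, and each extra term brings the approximation closer to the truth.

```python
import math

# A sketch of perturbation theory in miniature (illustrative numbers,
# mine): the exact relativistic kinetic energy
#   E = m c^2 (1/sqrt(1 - v^2/c^2) - 1)
# expanded in the small parameter x = (v/c)^2 gives
#   E ~ (1/2) m v^2                 (Newtonian, first order)
#     + (3/8) m v^2 x               (first correction)
#     + (5/16) m v^2 x^2            (next correction) + ...

def exact(m, v, c):
    """The full relativistic kinetic energy."""
    return m * c**2 * (1.0 / math.sqrt(1.0 - (v / c) ** 2) - 1.0)

def perturbative(m, v, c, order):
    """Sum the expansion up to the given number of terms."""
    coeffs = [1 / 2, 3 / 8, 5 / 16]  # coefficients of m v^2 x^k
    x = (v / c) ** 2
    return sum(coeffs[k] * m * v**2 * x**k for k in range(order))

m, c, v = 1.0, 1.0, 0.3  # units with c = 1, a moderately fast particle
for order in (1, 2, 3):
    error = abs(perturbative(m, v, c, order) - exact(m, v, c))
    print(order, error)  # the error shrinks as each term is included
```

Relaxing the "small velocity" assumption order by order is exactly the systematic improvement described above; non-perturbative physics is whatever this kind of series fails to capture.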

Other times, this doesn’t work well. The full theory has some trait that isn’t captured by the approximations, something that hides away from these systematic tools. The theory has some important aspect that is non-perturbative.

Every proposed quantum gravity theory uses approximations like this. The theory’s proponents try to avoid these approximations when they can, but often they have to approximate and hope they don’t miss too much. The opponents, in turn, argue that the theory’s proponents are missing something important, some non-perturbative fact that would doom the theory altogether.

Asymptotic Safety is built on top of an approximation, one different from what other quantum gravity theorists typically use. To its proponents, work using their approximation suggests that gravity works without any special modifications, that the theory of quantum gravity is easier to find than it seems. Its opponents aren’t convinced, and think that the approximation is missing something important which shows that gravity needs to be modified.

In Loop Quantum Gravity, critics think the theory’s approximations miss space-time itself. Proponents of Loop Quantum Gravity have been unable to prove that their theory, if you take all the non-perturbative corrections into account, doesn’t just roll up all of space and time into a tiny spiky ball. They expect that their theory should allow for a smooth space-time like we experience, but the critics aren’t convinced, and without being able to calculate the non-perturbative physics neither side can convince the other.

String Theory was founded and originally motivated by perturbative approximations. Later, String Theorists figured out how to calculate some things non-perturbatively, often using other simplifications like supersymmetry. But core questions, like whether or not the theory allows a positive cosmological constant, seem to depend on non-perturbative calculations that the theory gives no instructions for how to do. Some critics don’t think there is a consistent non-perturbative theory at all, that the approximations String Theorists use don’t actually approximate to anything. Even within String Theory, there are worries that the theory might try to resist approximation in odd ways, becoming more complicated whenever a parameter is small enough that you could use it to approximate something.

All of this would be less of a problem with real-world evidence. Many fields of science are happy to use approximations that aren’t completely rigorous, as long as those approximations have a good track record in the real world. In general though, we don’t expect evidence relevant to quantum gravity any time soon. Maybe we’ll get lucky, and studies of cosmology will reveal something, or an experiment on Earth will have a particularly strange result. But nature has no obligation to help us out.

Without evidence, though, we can still make mathematical progress. You could imagine someone proving that the various perturbative approaches to String Theory become inconsistent when stitched together into a full non-perturbative theory. Alternatively, you could imagine someone proving that a theory like String Theory is unique, that no other theory can do some key thing that it does. Either of these seems unlikely to come any time soon, and most researchers in these fields aren’t pursuing questions like that. But the fact the debate could be resolved means that it isn’t just about philosophy or theology. There’s a real scientific, mathematical controversy, one rooted in our inability to understand these theories beyond the perturbative methods their proponents use. And while I don’t expect it to be resolved any time soon, one can always hold out hope for a surprise.

Toy Models

In academia, scientists don’t always work with what they actually care about. A lot of the time, they use what academics call toy models. A toy model can be a theory with simpler mathematics than the theories that describe the real world, but it can also be something that is itself real, just simpler or easier to work with, like nematodes, fruit flies, or college students.

Some people in industry seem to think this is all academics do. I’ve seen a few job ads that emphasize experience dealing with “real-world data”, and a few people skeptical that someone used to academia would be able to deal with the messy challenges of the business world.

There’s a grain of truth to this, but I don’t think industry has a monopoly on mess. To see why, let’s think about how academics write computer code.

There are a lot of things that one is, in principle, supposed to do to code well, and most academics do none of them. Good code has test suites, so that if you change something you can check whether it still works by testing it on all the things that could go wrong. Good code is modular, with functions doing specific things and re-used whenever appropriate. Good code follows shared conventions, so that others can pick up your code and understand how you did it.

Some academics do these things, for example those who build numerical simulations on supercomputers. But for most academics, coding best-practices range from impractical to outright counterproductive. Testing is perhaps the clearest example. To design a test suite, you have to have some idea what kinds of things your code will run into: what kind of input you expect, and what the output is supposed to be. Many academic projects, though, are the first of their kind. Academics code up something to do a calculation nobody has done before, not knowing the result, or they make code to analyze a dataset nobody has worked with before. By the time they understand the problem well enough to write a test suite, they’ve already solved the problem, and they’re on to the next project, which may need something totally different.
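For concreteness, here is a minimal sketch of the kind of test suite meant above, for a hypothetical little analysis helper (both the function and the tests are illustrative, not from any real project). Each test pins down one thing that could silently break if the code were changed later:

```python
def mean_energy(samples):
    """Average of a list of energy measurements, skipping missing (None) entries."""
    valid = [s for s in samples if s is not None]
    if not valid:
        raise ValueError("no valid samples")
    return sum(valid) / len(valid)

def test_simple_average():
    assert mean_energy([1.0, 2.0, 3.0]) == 2.0

def test_missing_entries_are_skipped():
    assert mean_energy([1.0, None, 3.0]) == 2.0

def test_all_missing_raises():
    try:
        mean_energy([None, None])
    except ValueError:
        pass  # this is the behavior we want
    else:
        assert False, "expected a ValueError"

# With pytest installed, `pytest this_file.py` would discover and run
# these automatically; here we just call them directly.
test_simple_average()
test_missing_entries_are_skipped()
test_all_missing_raises()
```

Writing those three tests presumes you already know what inputs look like and what the right answers are, which is exactly the knowledge a first-of-its-kind calculation lacks.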

From the perspective of these academics, if you have a problem well-defined enough that you can build a test suite, well enough that you can have stable conventions and reusable functions…then you have a toy model, not a real problem from the real world.

…and of course, that’s not quite fair either, right?

The truth is, academics and businesspeople want to work with toy models. Toy models are well-behaved, and easy, and you can do a lot with them. The real world isn’t a toy model…but it can be, if you make it one.

This means planning your experiments, whether in business or in science. It means making sure the data you gather is labeled and organized before you begin. It means coming up with processes, and procedures, and making as much of the work as possible a standardized, replicable thing. That’s desirable regardless, whether you’re making a consistent product instead of artisanal one-offs or a well-documented scientific study that another team can replicate.

Academia and industry both must handle mess. They handle different kinds of mess in different circumstances, and manage it in different ways, and this can be a real challenge for someone trying to go from one world to another. But neither world is intrinsically messier or cleaner. Nobody has a monopoly on toy models.

The “Who” of Fixing Academic Publishing

I was on the debate team in high school. There’s a type of debate, called Policy, where one team proposes a government policy and the other team argues the policy is bad. The rules of Policy debate don’t say who the debaters are pretending to be: they could be congresspeople, cabinet members, or staff at a think tank. This creates ambiguity, and nerds are great at exploiting ambiguity. A popular strategy was to argue that the opponents had a perfectly good policy, but were wrong about who should implement it. This had reasonable forms (no, congress does not have the power to do X) but could also get very silly (the crux of one debate was whether the supreme court or the undersecretary of the TSA was the best authority to usher in a Malthusian dictatorship). When debating policy, “who” could be much more important than “what”.

Occasionally, when I see people argue that something needs to be done, I ask myself this question. Who, precisely, should do it?

Recently, I saw a tweet complaining about scientific publishing. Physicists put their work out for free on arXiv.org, then submit that work to journals, which charge huge fees either to the scientists themselves or to libraries that want access to the work. It’s a problem academics complain about frequently, but usually we act like it’s something we should fix ourselves, a kind of grassroots movement to change our publication and hiring culture.

This tweet, surprisingly, didn’t do that. Instead, it seemed to have a different “who” in mind. The tweet argued that the stranglehold of publishers like Elsevier on academic publishing is a waste of taxpayer money. The implication, maybe intended and maybe not, is that the problem should be fixed by the taxpayers: that is, by the government.

Which in turn got me thinking, what could that look like?

I could imagine a few different options, from the kinds of things normal governments do to radical things that would probably never happen.

First, the most plausible strategy: collective negotiation. Particle physicists don’t pay from our own grants to publish papers, and we don’t pay to read them. Instead, we have a collective agreement, called SCOAP3, where the big institutions pay together each year to guarantee open access. The University of California system tried to negotiate a similar agreement a few years back, not just for physicists but for all fields. You could imagine governments leaning on this, with the university systems of whole countries negotiating a fixed payment. The journals would still be getting paid, but less.

Second, less likely but not impossible: governments could use the same strategies against the big publishers that they use against other big companies. This could be antitrust action (if you have to publish in Nature to get hired, are they really competing with anybody?), or even some kind of price controls. The impression I get is that when governments do try to change scientific publishing they usually do it via restrictions on the scientists (such as requiring them to publish open-access), while this would involve restrictions on the publishers.

Third, governments could fund alternative institutions to journals. They could put more money into websites like arXiv.org and its equivalents in other fields or fund an alternate review process to vet papers like journal referees do. There are existing institutions they could build on, or they could create their own.

Fourth, you could imagine addressing the problem on the job market side, with universities told not to weigh the prestige of journals when considering candidates. This seems unlikely to happen, and that’s probably a good thing, because it’s very micromanagey. Still, I do think that both grants and jobs could do with less time and effort spent attempting to vet candidates and more explicit randomness.

Fifth, you could imagine governments essentially opting out of the game altogether. They could disallow spending any money from publicly funded grants or universities on open-access fees or subscription fees, pricing most scientists out of the journal system. Journals would either have to radically lower their prices so that scientists could pay for them out of pocket, or more likely go extinct. This does have the problem that if only some countries did it, their scientists would have a harder time in other countries’ job markets. And of course, many critics of journals just want the journals to make less obscene profits, and not actually go extinct.

Most academics I know agree that something is deeply wrong with how academic journals work. While the situation might be solved at the grassroots level, it’s worth imagining what governments might do. Realistically, I don’t expect them to do all that much. But stranger things have gotten political momentum before.

At Quanta This Week, With a Piece on Vacuum Decay

I have a short piece at Quanta Magazine this week, about a physics-y end of the world as we know it called vacuum decay.

For science-minded folks who want to learn a bit more: I have a sentence in the article mentioning other uncertainties. In case you’re curious what those uncertainties are:

Gamma (\Gamma) here is the decay rate; its inverse gives the time it takes for a cubic gigaparsec of space to experience vacuum decay. The three uncertainties are experimental: they come from our current knowledge of the Higgs mass, the top quark mass, and the strength of the strong force.

Occasionally, you see futurology-types mention “uncertainties in the exponent” to argue that some prediction (say, how long it will take till we have human-level AI) is so uncertain that estimates barely even make sense: it might be 10 years, or 1000 years. I find it fun that for vacuum decay, because of that \log_{10}, there is actually uncertainty in the exponent! Vacuum decay might happen in as few as 10^{411} years or as many as 10^{1333} years, and that’s the result of an actual, reasonable calculation!
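A quick arithmetic sketch of my own, using only the numbers quoted above, shows just how strange uncertainty in the exponent is: an "ordinary" uncertainty, even an enormous one, barely moves the exponent at all.

```python
import math

# The quoted range: log10 of the vacuum decay lifetime, in years.
log10_low, log10_high = 411, 1333

# A huge "ordinary" uncertainty, a factor of a million, shifts the
# exponent from 411 to just 417...
shifted = log10_low + math.log10(1e6)
print(shifted)  # 417.0

# ...while the quoted uncertainty spans this many orders of magnitude:
print(log10_high - log10_low)  # 922
```

The two ends of the range differ by a factor of 10^922, which is why "it might be 10 years or 1000 years" barely registers as uncertainty by comparison.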

For physicist readers, I should mention that I got a lot out of reading some slides from a 2016 talk by Matthew Schwartz. Not many details of the calculation made it into the piece, but the slides were helpful in dispelling a few misconceptions that could have gotten into the piece. There’s an instinct to think about the situation in terms of the energy, to think about how difficult it is for quantum uncertainty to get you over the energy barrier to the next vacuum. There are methods that sort of look like that, if you squint, but that’s not really how you do the calculation, and there end up being a lot of interesting subtleties in the actual story. There were also a few numbers that it was tempting to put on the plots in the article, but turn out to be gauge dependent!

Another thing I learned from those slides was how far you can actually take the uncertainties mentioned above. The higher-energy Higgs vacuum is pretty dang high-energy, to the point where quantum gravity effects could potentially matter. And at that point, all bets are off. The calculation, with all those nice uncertainties, is a calculation within the framework of the Standard Model. All of the things we don’t yet know about high-energy physics, especially quantum gravity, could freely mess with this. The universe as we know it could still be long-lived, but it could be a lot shorter-lived as well. That in turn makes this calculation more of a practice-ground for honing techniques than an actual estimate you can rely on.

Rube Goldberg Reality

Quantum mechanics is famously unintuitive, but the most intuitive way to think about it is probably the path integral. In the path integral formulation, to find the chance a particle goes from point A to point B, you look at every path you can draw from one place to another. For each path you calculate a complex number, a “weight” for that path. Most of these weights cancel out, leaving the path the particle would travel under classical physics with the biggest contribution. They don’t perfectly cancel out, though, so the other paths still matter. In the end, the way the particle behaves depends on all of these possible paths.
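To make that "sum over paths" slightly more concrete, here is a toy numerical sketch (entirely my own illustration, with made-up units, not a real quantum calculation): a free particle traveling from x=0 to x=1, where paths hugging the classical straight line add up coherently while wildly wiggling paths largely cancel out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretize time, sample random paths as deviations from the straight
# (classical) line, and give each path the weight exp(i * S / hbar),
# with the free-particle action S = sum of (m/2) * (dx/dt)^2 * dt.
m, hbar = 1.0, 0.05
n_steps, n_paths = 20, 2000
t = np.linspace(0.0, 1.0, n_steps + 1)
dt = t[1] - t[0]
classical = t.copy()  # the straight-line path from x=0 to x=1

def mean_weight(wiggle_size):
    """Average complex weight over random paths deviating from the
    classical path by roughly wiggle_size (endpoints held fixed)."""
    total = 0.0 + 0.0j
    for _ in range(n_paths):
        dev = wiggle_size * rng.standard_normal(n_steps + 1)
        dev[0] = dev[-1] = 0.0  # endpoints pinned at A and B
        x = classical + dev
        action = np.sum(0.5 * m * (np.diff(x) / dt) ** 2 * dt)
        total += np.exp(1j * action / hbar)
    return abs(total / n_paths)

near = mean_weight(0.01)  # paths hugging the classical one
far = mean_weight(0.5)    # wildly wiggling paths

# Near-classical paths add up coherently (average weight near 1);
# wild paths have scattered phases and mostly cancel.
print(near, far)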

If you’ve heard this story, it might make you feel like you have some intuition for how quantum physics works. With each path getting less likely as it strays from the classical, you might have a picture of a nice orderly set of options, with physicists able to pick out the chance of any given thing happening based on the path.

In a world with just one particle swimming along, this might not be too hard. But our world doesn’t run on the quantum mechanics of individual particles. It runs on quantum field theory. And there, things stop being so intuitive.

First, the paths aren’t “paths”. For particles, you can imagine something in one place, traveling along. But particles are just ripples in quantum fields, which can grow, shrink, or change. For quantum fields instead of quantum particles, the path integral isn’t a sum over paths of a single particle, but a sum over histories of the fields. The fields start out in some configuration (which may look like a particle at point A) and then end up in a different configuration (which may look like a particle at point B). You have to add up weights, not for every path a single particle could travel, but for every different way the fields could behave in between configuration A and configuration B.

More importantly, though, there is more than one field! Maybe you’ve heard about electric and magnetic fields shifting back and forth in a wave of light, one generating the other. Other fields interact like this, including the fields behind things you might think of as particles like electrons. For any two fields that can affect each other, a disturbance in one can lead to a disturbance in the other. An electromagnetic field can disturb the electron field, which can disturb the Higgs field, and so on.

The path integral formulation tells you that all of these paths matter. Not just the path of one particle or one field chugging along by itself, but the path where the electromagnetic field kicks off a Higgs field disturbance down the line, only to become a disturbance in the electromagnetic field again. Reality is all of these paths at once, a Rube Goldberg machine of a universe.

In such a universe, intuition is a fool’s errand. Mathematics fares a bit better, but is still difficult. While physicists sometimes have shortcuts, most of the time these calculations have to be done piece by piece, breaking the paths down into simpler stories that approximate the true answer.

In the path integral formulation of quantum physics, everything happens at once. And “everything” may be quite a bit larger than you expect.

Musing on Application Fees

A loose rule of thumb: PhD candidates in the US are treated like students. In Europe, they’re treated like employees.

This does exaggerate things a bit. In both Europe and the US, PhD candidates get paid a salary (at least in STEM). In both places, PhD candidates count as university employees, if sometimes officially part-time ones, with at least some of the benefits that entails.

On the other hand, PhD candidates in both places take classes (albeit more classes in the US). Both are charged tuition, which is in turn almost always paid by their supervisor’s grants or department, not by them. Both aim for a degree, capped off with a thesis defense.

But there is a difference. And it’s at its most obvious in how applications work.

In Europe, PhD applications are like job applications. You apply to a particular advisor, advertising a particular kind of project. You submit things like a CV, cover letter, and publication list, as well as copies of your previous degrees.

In the US, PhD applications are like applications to a school. You apply to the school, perhaps mentioning an advisor or topic you are interested in. You submit things like essays, test scores, and transcripts. And typically, you have to pay an application fee.

I don’t think I quite appreciated, back when I applied for PhD programs, just how much those fees add up to. With each school charging a fee in the $100 range, and students commonly advised to apply to ten or so schools, applying to PhD programs in the US can quickly get unaffordable for many. Schools do offer fee waivers under certain conditions, but the standards vary from school to school. Most don’t seem to apply to non-Americans, so if you’re considering a US PhD from abroad be aware that just applying can be an expensive thing to do.

Why the fee? I don’t really know. The existence of application fees, by itself, isn’t a US thing. If you want to get a Master’s degree from the University of Copenhagen and you’re coming from outside Europe, you have to pay an application fee of roughly the same size that US schools charge.

Based on that, I’d guess part of the difference is funding. It costs something for a university to process an application, and governments might be willing to cover it for locals (in the case of the Master’s in Copenhagen) or more specifically for locals in need (in the US PhD case). I don’t know whether it makes sense for that cost to be around $100, though.

It’s also presumably an incentive. Schools don’t want too many applicants, so they attach a fee to ensure only the most dedicated people apply.

Jobs don’t typically have an application fee, and I think it would piss a lot of people off if they did. Some jobs get a lot of applicants, enough that bigger and better-known companies in some places use AI to filter applications. I have to wonder if US PhD schools are better off in this respect. Does charging a fee mean they have a reasonable number of applications to deal with? Or do they still have to filter through a huge pile, with nothing besides raw numbers to pare things down? (At least, because of the “school model” with test scores, they have some raw numbers to use.)

Overall, coming at this with a “theoretical physicist mentality”, I have to wonder if any of this is necessary. Surely there’s a way to make it easy for students to apply, and just filter them down to the few you want to accept? But the world is of course rarely that simple.

Clickbait or Koan

Last month, I had a post about a type of theory that is, in a certain sense, “immune to gravity”. These theories don’t allow you to build antigravity machines, and they aren’t totally independent of the overall structure of space-time. But they do ignore the core thing most people think of as gravity, the curvature of space that sends planets around the Sun and apples to the ground. And while that trait isn’t something we can use for new technology, it has led to extremely productive conversations between mathematicians and physicists.

After posting, I had some interesting discussions on twitter. A few people felt that I was over-hyping things. Given all the technical caveats, does it really make sense to say that these theories defy gravity? Isn’t a title like “Gravity-Defying Theories” just clickbait?

Obviously, I don’t think so.

There’s a concept in education called inductive teaching. We remember facts better when they come in context, especially the context of us trying to solve a puzzle. If you try to figure something out, and then find an answer, you’re going to remember that answer better than if you were just told the answer from the beginning. There are some similarities here to the concept of a Zen koan: by asking questions like “what is the sound of one hand clapping?” a Zen master is supposed to get you to think about the world in a different way.

When I post with a counterintuitive title, I’m aiming for that kind of effect. I know that you’ll read the title and think “that can’t be right!” Then you’ll read the post, and hear the explanation. That explanation will stick with you better because you asked that question, because “how can that be right?” is the solution to a puzzle that, in that span of words, you cared about.

Clickbait is bad for two reasons. First, it sucks you in to reading things that aren’t actually interesting. I write my blog posts because I think they’re interesting, so I hope I avoid that. Second, it can spread misunderstandings. I try to be careful about these, and I have some tips for how you can be too:

  1. Correct the misunderstanding early. If I’m worried a post might be misunderstood in a clickbaity way, I make sure that every time I post the link I include a sentence discouraging the misunderstanding. For example, for the post on Gravity-Defying Theories, before the link I wrote “No flying cars, but it is technically possible for something to be immune to gravity”. If I’m especially worried, I’ll also make sure that the first paragraph of the piece corrects the misunderstanding as well.
  2. Know your audience. This means both knowing the normal people who read your work, and how far something might go if it catches on. Your typical readers might be savvy enough to skip the misunderstanding, but if they latch on to the naive explanation immediately then the “koan” effect won’t happen. The wider your reach can be, the more careful you need to be about what you say. If you’re a well-regarded science news outlet, don’t write a title saying that scientists have built a wormhole.
  3. Have enough of a conclusion to be “worth it”. This is obviously a bit subjective. If your post introduces a mystery and the answer is that you just made some poetic word choice, your audience is going to feel betrayed, like the puzzle they were considering didn’t have a puzzly answer after all. Whatever you’re teaching in your post, it needs to have enough “meat” that solving it feels like a real discovery, like the reader did some real work to solve it.

I don’t think I always live up to these, but I do try. And I think trying is better than the conservative option, of never having catchy titles that make counterintuitive claims. One of the most fun aspects of science is that sometimes a counterintuitive fact is actually true, and that’s an experience I want to share.

Amplitudes 2024, Continued

I’ve now had time to look over the rest of the slides from the Amplitudes 2024 conference, so I can say something about Thursday and Friday’s talks.

Thursday was gravity-focused. Zvi Bern’s review talk was actually a review, a tour of the state of the art in using amplitudes techniques to make predictions for gravitational wave physics. Bern emphasized that future experiments will require much more precision: two more orders of magnitude, which in our lingo amounts to two more “loops”. The current state of the art is three loops, but they’ve been hacking away at four, doing things piece by piece in a way that cleverly also yields publications (for example, they can do just the integrals needed for supergravity, which are simpler). Four loops here is the first time that the Feynman diagrams involve Calabi-Yau manifolds, so they will likely need techniques from some of the folks I talked about last week. Once they have four loops, they’ll want to go to five, since that is the level of precision you need to learn something about the material in neutron stars. The talk covered a variety of other developments, some of which were talked about later on Thursday and some of which were only mentioned here.

Of that day’s other speakers, Stefano De Angelis, Lucile Cangemi, Mikhail Ivanov, and Alessandra Buonanno also focused on gravitational waves. De Angelis talked about the subtleties that show up when you try to calculate gravitational waveforms directly with amplitudes methods, showcasing various improvements to the pipeline there. Cangemi talked about a recurring question with its own list of subtleties, namely how the Kerr metric for spinning black holes emerges from the math of amplitudes of spinning particles. Gravitational waves were the focus of only the second half of Ivanov’s talk, where he discussed how amplitudes methods can clear up some of the subtler effects people try to take into account. The first half was about another gravitational application, that of using amplitudes methods to compute the correlations of galaxy structures in the sky, a field where it looks like a lot of progress can be made.

Finally, Buonanno gave the kind of talk she’s given a few times at these conferences, one that puts these methods in context, explaining how amplitudes results are packaged with other types of calculations into the Effective-One-Body framework, which is then used more directly at LIGO. This year’s talk went into more detail about what the predictions are actually used for, which I appreciated. I hadn’t realized that there have been a handful of black hole collisions discovered by other groups from LIGO’s data, a win for open science! Her slides had a nice diagram explaining which data from the gravitational wave is used to infer which black hole properties, quite a bit more organized than the statistical template-matching I was imagining. She also explained the logic behind Bern’s statement that gravitational wave telescopes will need two more orders of magnitude, pointing out that that kind of precision is necessary to be sure that something that might appear to be a deviation from Einstein’s theory of gravity is not actually a subtle effect of known physics. Her method is typically adjusted to fit numerical simulations, but she showed that even without that adjustment the predictions now fit the numerics quite well, thanks in part to contributions from amplitudes calculations.

Of the other talks that day, David Kosower’s was the only one that didn’t explicitly involve gravity. Instead, his talk focused on a more general question, namely how to find a well-defined basis of integrals for Feynman diagrams, which turns out to involve some rather subtle mathematics and geometry. This is a topic that my former boss Jake Bourjaily worked on in a different context for some time, and I’m curious whether there is any connection between the two approaches. Oliver Schlotterer gave the day’s second review talk, once again of the “actually a review” kind, covering a variety of recent developments in string theory amplitudes. These include some new pictures of how string theory amplitudes that correspond to Yang-Mills theories “square” to amplitudes involving gravity at higher loops, as well as progress towards going past two loops, the current state of the art for most string amplitude calculations. (For the experts: this does not involve taking the final integral over the moduli space, which is still a big unsolved problem.) He also talked about progress by Sebastian Mizera and collaborators in understanding how the integrals that show up in string theory make sense in the complex plane. This is a problem that people had mostly managed to avoid dealing with because of certain simplifications in the calculations they typically did (no moduli space integration, expansion in the string length), but taking things seriously means confronting it, and Mizera and collaborators found a novel solution to the problem that has already passed a lot of checks. Finally, Tobias Hansen’s talk also related to string theory, specifically in anti-de Sitter space, where the duality between string theory and N=4 super Yang-Mills lets him and his collaborators do Yang-Mills calculations and see markedly stringy-looking behavior.

Friday began with Kevin Costello, whose not-really-a-review talk dealt with his work with Natalie Paquette showing that one can use an exactly-solvable system to learn something about QCD. This only works for certain rather specific combinations of particles: for example, in order to have three colors of quarks, they need to do the calculation for nine flavors. Still, they managed to do a calculation with this method that had not previously been done with more traditional means, and to me it’s impressive that anything like this works for a theory without supersymmetry. Mina Himwich and Diksha Jain both had talks related to a topic of current interest, “celestial” conformal field theory, a picture that tries to apply ideas from holography, in which a theory on the boundary of a space fully describes the interior, to the “boundary” of flat space, infinitely far away. Himwich talked about a symmetry observed in that research program, and how that symmetry can be seen using more normal methods, which also led to some suggestions of how the idea might be generalized. Jain covered a different approach, one in which one sets artificial boundaries in flat space and sees what happens when those boundaries move.

Yifei He described progress in the modern S-matrix bootstrap approach. Previously, this approach had yielded quite general constraints on amplitudes. She tried to do something more specific: predict the S-matrix for scattering of pions in the real world. By imposing compatibility with knowledge from low energies and high energies, she was able to find a much more restricted space of consistent S-matrices, and these turn out to actually match experimental results pretty well. Mathieu Giroux addressed an important question for a variety of parts of amplitudes research: how to predict the singularities of Feynman diagrams. He explored a recursive approach to solving Landau’s equations for these singularities, one which seems impressively powerful, in one case finding a solution that in text form is approximately the length of Harry Potter. Finally, Juan Maldacena closed the conference by talking about some progress he’s made towards an old idea, that of defining M theory in terms of a theory involving actual matrices. This is a very challenging thing to do, but he was at least able to tackle the simplest possible case, involving correlations between three observations. This had a known answer, so his work serves mostly as a confirmation that the original idea makes sense at least at this level.

Beyond Elliptic Polylogarithms in Oaxaca

Arguably my biggest project over the last two years wasn’t a scientific paper, a journalistic article, or even a grant application. It was a conference.

Most of the time, when scientists organize a conference, they do it “at home”: either they host the conference at their own university, or they rent out a nearby event venue. There is an alternative, though. Scattered around the world, often in out-of-the-way locations, are places dedicated to hosting scientific conferences. These places accept applications each year from scientists arguing that their conference would best serve the place’s scientific mission.

One of these places is the Banff International Research Station in Alberta, Canada. Since 2001, Banff has been hosting gatherings of mathematicians from around the world, letting them focus on their research in an idyllic Canadian ski resort.

If you don’t like skiing, though, Banff still has you covered! They have “affiliate centers”: one elsewhere in Canada, one in China, two on the way in India and Spain…and one, which particularly caught my interest, in Oaxaca, Mexico.

Back around this time of year in 2022, I started putting a proposal together for a conference at the Casa Matemática Oaxaca. The idea was a conference discussing the frontier of the field: how to express the strange mathematical functions that live inside Feynman diagrams. I assembled a big team of co-organizers, five in total. At the time, I wasn’t sure whether I could find a permanent academic job, so I wanted to make sure there were enough people involved that they could run the conference without me.

Followers of the blog know I did end up finding that permanent job…only to give it up. In the end, I wasn’t able to make it to the conference. But my four co-organizers were (modulo some delays in the Houston airport). The conference was this week, with the last few talks happening over the next few hours.

I gave a short speech via Zoom at the beginning of the conference, a mix of welcome and goodbye. Since then I haven’t had the time to tune in to the talks, but they’re good folks and I suspect they’re having good discussions.

I do regret that, near the end, I wasn’t able to give the conference the focus it deserved. There were people we really hoped to have, but who couldn’t afford the travel. I’d hoped to find a source of funding that could support them, but the plan fell through. The week after Amplitudes 2024 was also a rough time to have a conference in this field, with many people who would have attended not able to go to both. (At least they weren’t the same week, thanks to some flexibility on the part of the Amplitudes organizers!)

Still, it’s nice to see something I’ve been working on for two years finally come to pass, to hopefully stir up conversations between different communities and give various researchers a taste of one of Mexico’s most beautiful places. I haven’t been to Oaxaca yet, but I suspect I will eventually. Danish companies give a minimum of five weeks of holiday per year, so I should get a chance at some point.