
Bonus Info on the LHC and Beyond

Three of my science journalism pieces went up last week!

(This is a total coincidence. One piece was a general explainer “held in reserve” for a nice slot in the schedule, one was a piece I drafted in February, while the third I worked on in May. In journalism, things take as long as they take.)

The shortest piece, at Quanta Magazine, was an explainer about the two types of particles in physics: bosons, and fermions.

I don’t have a ton of bonus info here, because of how tidy the topic is, so just two quick observations.

First, I have the vague impression that Bose, bosons’ namesake, is “claimed” by both modern-day Bangladesh and India. I had friends in grad school who were proud of their fellow physicist from Bangladesh, but while he did his most famous work in Dhaka, he was born and died in Calcutta. Since both cities were part of British India for most of his life, these things likely get complicated.

Second, at the end of the piece I mention a “world on a wire” where fermions and bosons are the same. One example of such a “wire” is a string, like in string theory. One thing all young string theorists learn is “bosonization”: the idea that, in a 1+1-dimensional world like a string, you can re-write any theory with fermions as a theory with bosons, as well as vice versa. This has important implications for how string theory is set up.

Next, in Ars Technica, I had a piece about how LHC physicists are using machine learning to untangle the implications of quantum interference.

As a journalist, it’s really easy to fall into a trap where you give the main person you interview too much credit: after all, you’re approaching the story from their perspective. I tried to be cautious about this, only to be stymied when literally everyone else I interviewed praised Aishik Ghosh to the skies and credited him with being the core motivating force behind the project. So I shrugged my shoulders and followed suit. My understanding is that he has been appropriately rewarded and will soon be a professor at Georgia Tech.

I didn’t list the inventors of the NSBI method that Ghosh and co. used, but names like Kyle Cranmer and Johann Brehmer tend to get bandied about. It’s a method that was originally explored for a more general goal, trying to characterize what the Standard Model might be missing, while the work I talk about in the piece takes it in a new direction, closer to the typical things the ATLAS collaboration looks for.

I also did not say nearly as much as I was tempted to about how the ATLAS collaboration publishes papers, which was honestly one of the most intriguing parts of the story for me. There is a huge amount of review that goes on inside ATLAS before one of their papers reaches the outside world, way more than there ever is in a journal’s peer review process. This is especially true for “physics papers”, where ATLAS is announcing a new conclusion about the physical world, as ATLAS’s reputation stands on those conclusions being reliable. That means starting with an “internal note” that’s hundreds of pages long (and sometimes over a thousand), an editorial board that manages the editing process, disseminating the paper to the entire collaboration for comment, and getting specific experts and institute groups within the collaboration to read through the paper in detail. The process is a bit less onerous for “technical papers”, which describe a new method, not a new conclusion about the world. Still, it’s cumbersome enough that for those papers, scientists often don’t publish them “within ATLAS” at all, instead releasing them independently.

The results I reported on are special because they involved a physics paper and a technical paper, both within the ATLAS collaboration process. Instead of just working with partial or simplified data, the team wanted to demonstrate the method on a “full analysis”, with all the computation and human coordination that requires. Normally, ATLAS wouldn’t go through the whole process of publishing a physics paper without basing it on new data, but this was different: the method had the potential to be so powerful that the more precise results would be worth stating as physics results on their own.

(Also, for the people in the comments worried about training a model on old data: that’s not what they did. In physics, they don’t try to train a neural network to predict the results of colliders; such a model wouldn’t tell us anything useful. They run colliders to tell us whether what they see matches the Standard Model. The neural network is trained to predict not what the experiment will say, but what the Standard Model will say, since we can usually only figure that out through time-consuming simulations. So it’s trained on (new) simulations, not on experimental data.)
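To make that distinction concrete, here is a minimal sketch, in Python, of the general “train on simulation” idea behind this kind of neural simulation-based inference. Everything specific below (the one-variable toy events, the Gaussian distributions, logistic regression standing in for a neural network) is my own made-up illustration, not the actual ATLAS analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 50_000

# Simulated events under the Standard Model (hypothesis A)...
sim_a = rng.normal(loc=0.0, scale=1.0, size=(n, 1))
# ...and under an alternative hypothesis B (say, a shifted coupling).
sim_b = rng.normal(loc=0.3, scale=1.0, size=(n, 1))

# Label each simulated event by the hypothesis that generated it, and train a classifier.
X = np.vstack([sim_a, sim_b])
y = np.concatenate([np.zeros(n), np.ones(n)])
clf = LogisticRegression().fit(X, y)

# For an event x, the classifier output s(x) approximates p_B(x) / (p_A(x) + p_B(x)),
# so s / (1 - s) approximates the likelihood ratio p_B(x) / p_A(x): a statement about
# what the theories predict, learned entirely from simulation. Only at the very end
# would it be evaluated on real collider events, to see which hypothesis the data prefer.
s = clf.predict_proba(np.array([[0.15]]))[:, 1]
print(s / (1 - s))
```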

Finally, on Friday I had a piece in Physics Today about the European Strategy for Particle Physics (or ESPP), and in particular, plans for the next big collider.

Before I even started working on this piece, I saw a thread by Patrick Koppenburg on some of the 263 documents submitted for the ESPP update. While my piece ended up mostly focused on the big circular collider plan that most of the field is converging on (the Future Circular Collider, or FCC), Koppenburg’s thread was more wide-ranging, meant to illustrate the breadth of ideas under discussion. Some of that discussion is about the LHC’s current plans, like its “high-luminosity” upgrade that will see it gather data at much higher rates up until 2040. Some of it assesses broader concerns, which, it may surprise some of you to learn, include sustainability: yes, there are more or less sustainable ways to build giant colliders.

The most fun part of the discussion, though, concerns all of the other collider proposals.

Some report progress on new technologies. Muon colliders are the most famous of these, but there are other proposals that would specifically help with a linear collider. I never did end up understanding what Cooled Copper Colliders are all about, beyond that they let you get more energy in a smaller machine without super-cooling. If you know about them, chime in in the comments! Meanwhile, plasma wakefield acceleration could accelerate electrons on a wave of plasma. This has the disadvantage that you want to collide electrons and positrons, and if you try to stick a positron in plasma it will happily annihilate with the first electron it meets. So what do you do? You go half-and-half, with the HALHF project: speed up the electron with a plasma wakefield, accelerate the positron normally, and have them meet in the middle.

Others are backup plans, or “budget options”, where CERN could get somewhat better measurements of some parameters if they can’t stir up the funding to measure the things they really want. They could put electrons and positrons into the LHC tunnel instead of building a new one, for a weaker machine that could still study the Higgs boson to some extent. They could use a similar experiment to produce Z bosons instead, which could serve as a bridge to a different collider project. Or, they could collide the LHC’s proton beam with an electron beam, for an experiment that mixes advantages and disadvantages of some of the other approaches.

While working on the piece, one resource I found invaluable was this colloquium talk by Tristan du Pree, where he goes through the 263 submissions and digs up a lot of interesting numbers and commentary. Read the slides for quotes from the different national inputs and “solo inputs” with comments from particular senior scientists. I used that talk to get a broad impression of what the community was feeling, and it was interesting how well it was reflected in the people I interviewed. The physicist based in Switzerland felt the most urgency for the FCC plan, while the Dutch sources were more cautious, with other Europeans firmly in the middle.

Going over the FCC report itself, one thing I decided to leave out of the discussion was the cost-benefit analysis. There’s the potential for a cute sound-bite there, “see, the collider is net positive!”, but I’m pretty skeptical of that kind of analysis, even if it is standard practice for government projects. Between the biggest benefits listed being industrial benefits to suppliers and early-career researcher training (is a collider unusually good for either of those things, compared to other ways we spend money?) and the fact that about 10% of the benefit is the science itself (where could one possibly get a number like that?), it feels like whatever reasoning is behind this is probably the kind of thing that makes rigor-minded economists wince. I wasn’t able to track down the full calculation, though, so I really don’t know; maybe this makes more sense than it looks.

I think a stronger argument than anything along those lines is a much more basic point, about expertise. Right now, we have a community of people trying to do something that is not merely difficult, but fundamental. This isn’t like sending people to space, where many of the engineering concerns will go away when we can send robots instead. This is fundamental engineering progress in how to manipulate the forces of nature (extremely powerful magnets, high voltages) and process huge streams of data. Pushing those technologies to the limit seems like it’s going to be relevant, almost no matter what we end up doing. That’s still not putting the science first and foremost, but it feels a bit closer to an honest appraisal of what good projects like this do for the world.

Experiments Should Be Surprising, but Not Too Surprising

People are talking about colliders again.

This year, the European particle physics community is updating its shared plan for the future, the European Strategy for Particle Physics. A raft of proposals at the end of March stirred up a tail of public debate, focused on asking what sort of new particle collider should be built, and discussing potential reasons why.

That discussion, in turn, has got me thinking about experiments, and how they’re justified.

The purpose of experiments, and of science in general, is to learn something new. The more sure we are of something, the less reason there is to test it. Scientists don’t check whether the Sun rises every day. Like everyone else, they assume it will rise, and use that knowledge to learn other things.

You want your experiment to surprise you. But when you try to design an experiment to surprise you, you run into a contradiction.

Suppose that every morning, you check whether the Sun rises. If it doesn’t, you will really be surprised! You’ll have made the discovery of the century! That’s a really exciting payoff, one grant agencies should be lining up to pay for…

Well, is that actually likely to happen, though?

The same reasons it would be surprising if the Sun stopped rising are reasons why we shouldn’t expect the Sun to stop rising. A sunrise-checking observatory has incredibly high potential scientific reward…but an absurdly low chance of giving that reward.

Ok, so you can re-frame your experiment. You’re not hoping the Sun won’t rise, you’re observing the sunrise. You expect it to rise, almost guaranteed, so your experiment has an almost guaranteed payoff.

But what a small payoff! You saw exactly what you expected, there’s no science in that!

By either criterion, the “does the Sun rise” observatory is a stupid experiment. Real experiments operate in between the two extremes. They also mix motivations. Together, that leads to some interesting tensions.

What was the purpose of the Large Hadron Collider?

There were a few things physicists were pretty sure of, when they planned the LHC. Previous colliders had measured W bosons and Z bosons, and their properties made it clear that something was missing. If you could collide protons with enough energy, physicists were pretty sure you’d see the missing piece. Physicists had a reasonably plausible story for that missing piece, in the form of the Higgs boson. So physicists could be pretty sure they’d see something, and reasonably sure it would be the Higgs boson.

If physicists expected the Higgs boson, what was the point of the experiment?

First, physicists expected to see the Higgs boson, but they didn’t expect it to have the mass that it did. In fact, they didn’t know anything about the particle’s mass, besides that it should be low enough that the collider could produce it, and high enough that it hadn’t been detected before. The specific number? That was a surprise, and an almost-inevitable one. A rare creature, an almost-guaranteed scientific payoff.

I say almost, because there was a second point. The Higgs boson didn’t have to be there. In fact, it didn’t have to exist at all. There was a much bigger potential payoff, of noticing something very strange, something much more complicated than the straightforward theory most physicists had expected.

(Many people also argued for another almost-guaranteed payoff, and that got a lot more press. People talked about finding the origin of dark matter by discovering supersymmetric particles, which they argued was almost guaranteed due to a principle called naturalness. This is very important for understanding the history…but it’s an argument that many people feel has failed, and that isn’t showing up much anymore. So for this post, I’ll leave it to the side.)

This mix, of a guaranteed small surprise and the potential for a very large surprise, was a big part of what made the LHC make sense. The mix has changed a bit for people considering a new collider, and it’s making for a rougher conversation.

Like the LHC, most of the new collider proposals have a guaranteed payoff. The LHC could measure the mass of the Higgs, these new colliders will measure its “couplings”: how strongly it influences other particles and forces.

Unlike the LHC, though, this guarantee is not a guaranteed surprise. Before building the LHC, we did not know the mass of the Higgs, and we could not predict it. On the other hand, now we absolutely can predict the couplings of the Higgs. We have quite precise numbers for what they should be, based on a theory that so far has proven quite successful.

We aren’t certain, of course, just like physicists weren’t certain before. The Higgs boson might have many surprising properties, things that contradict our current best theory and usher in something new. These surprises could genuinely tell us something about some of the big questions, from the nature of dark matter to the universe’s balance of matter and antimatter to the stability of the laws of physics.

But of course, they also might not. We no longer have that rare creature, a guaranteed mild surprise, to hedge in case the big surprises fail. We have guaranteed observations, and experimenters will happily tell you about them…but no guaranteed surprises.

That’s a strange position to be in. And I’m not sure physicists have figured out what to do about it.

Hot Things Are Less Useful

Did you know that particle colliders have to cool down their particle beams before they collide?

You might have learned in school that temperature is secretly energy. With a number called Boltzmann’s constant, you can convert a temperature of a gas in Kelvin to the average energy of a molecule in the gas. If that’s what you remember about temperature, it might seem weird that someone would cool down the particles in a particle collider. The whole point of a particle collider is to accelerate particles, giving them lots of energy, before colliding them together. Since those particles have a lot of energy, they must be very hot, right?

Well, no. Here’s the thing: temperature is not just the average energy. It’s the average random energy. It’s energy that might be used to make a particle move forward or backward, up or down, a different random motion for each particle. It doesn’t include motion that’s the same for each particle, like the movement of a particle beam.

Cooling down a particle beam, then, doesn’t mean slowing it down. Rather, it means making it more consistent, getting the different particles moving in the same direction rather than randomly spreading apart. You want the particles to go somewhere specific, speeding up and slamming into the other beam. You don’t want them to move randomly, running into the walls and destroying your collider. So you can have something with high energy that is comparatively cool.
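To put rough numbers on the difference (my own toy comparison, using the textbook monatomic-ideal-gas version of the conversion mentioned at the start of this post):

```python
# A rough worked example of the temperature-to-energy conversion,
# assuming the ideal-gas relation <E> = (3/2) k_B T.
k_B = 1.380649e-23    # Boltzmann's constant, in joules per kelvin
eV = 1.602176634e-19  # one electronvolt, in joules

T_room = 300.0              # room temperature, in kelvin
E_avg = 1.5 * k_B * T_room  # average random kinetic energy per molecule
print(E_avg, "J")           # about 6.2e-21 J
print(E_avg / eV, "eV")     # about 0.04 eV

# Each proton in the LHC beam carries several trillion electronvolts,
# roughly fourteen orders of magnitude more, but that energy is shared,
# directed motion rather than random motion, so the beam can still be "cool".
```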

In general, the best way I’ve found to think about temperature and heat is in terms of usefulness and uselessness. Cool things are useful, they do what you expect and not much more. Hot things are less useful, they use energy to do random things you don’t want. Sometimes, by chance, this random energy will still do something useful, and if you have a cold thing to pair with the hot thing, you can take advantage of this in a consistent way. But hot things by themselves are less useful, and that’s why particle colliders try to cool down their beams.

How Small Scales Can Matter for Large Scales

For a certain type of physicist, nothing matters more than finding the ultimate laws of nature for its tiniest building-blocks, the rules that govern quantum gravity and tell us where the other laws of physics come from. But because they know very little about those laws at this point, they can predict almost nothing about observations on the larger distance scales we can actually measure.

“Almost nothing” isn’t nothing, though. Theoretical physicists don’t know nature’s ultimate laws. But some things about them can be reasonably guessed. The ultimate laws should include a theory of quantum gravity. They should explain at least some of what we see in particle physics now, explaining why different particles have different masses in terms of a simpler theory. And they should “make sense”, respecting cause and effect, the laws of probability, and Einstein’s overall picture of space and time.

All of these are assumptions, of course. Further assumptions are needed to derive any testable consequences from them. But a few communities in theoretical physics are willing to take the plunge, and see what consequences their assumptions have.

First, there’s the Swampland. String theorists posit that the world has extra dimensions, which can be curled up in a variety of ways to hide from view, with different observable consequences depending on how the dimensions are curled up. This list of different observable consequences is referred to as the Landscape of possibilities. Based on that, some string theorists coined the term “Swampland” to represent an area outside the Landscape, containing observations that are incompatible with quantum gravity altogether, and tried to figure out what those observations would be.

In principle, the Swampland includes the work of all the other communities on this list, since a theory of quantum gravity ought to be consistent with other principles as well. In practice, people who use the term focus on consequences of gravity in particular. The earliest such ideas argued from thought experiments with black holes, finding results that seemed to demand that gravity be the weakest force for at least one type of particle. Later researchers would more frequently use string theory as an example, looking at what kinds of constructions people had been able to make in the Landscape to guess what might lie outside of it. They’ve used this to argue that dark energy might be temporary, and to try to figure out what traits new particles might have.

Second, I should mention naturalness. When talking about naturalness, people often use the analogy of a pen balanced on its tip. While possible in principle, it must have been set up almost perfectly, since any small imbalance would cause it to topple, and that perfection demands an explanation. Similarly, in particle physics, things like the mass of the Higgs boson and the strength of dark energy seem to be carefully balanced, so that a small change in how they were set up would lead to a much heavier Higgs boson or much stronger dark energy. The need for an explanation for the Higgs’ careful balance is why many physicists expected the Large Hadron Collider to discover additional new particles.

As I’ve argued before, this kind of argument rests on assumptions about the fundamental laws of physics. It assumes that the fundamental laws explain the mass of the Higgs, not merely by giving it an arbitrary number but by showing how that number comes from a non-arbitrary physical process. It also assumes that we understand well how physical processes like that work, and what kinds of numbers they can give. That’s why I think of naturalness as a type of argument, much like the Swampland, that uses the smallest scales to constrain larger ones.

Third is a host of constraints that usually go together: causality, unitarity, and positivity. Causality comes from cause and effect in a relativistic universe. Because two distant events can appear to happen in different orders depending on how fast you’re going, any way to send signals faster than light is also a way to send signals back in time, causing all of the paradoxes familiar from science fiction. Unitarity comes from quantum mechanics. If quantum calculations are supposed to give the probability of things happening, those probabilities should make sense as probabilities: for example, they should never go above one.
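As a quick sketch of the standard faster-than-light argument (my addition, using the usual special-relativity formulas): an observer moving at speed $v$ assigns a signal sent from the origin at time zero and received at position $x$ at time $t$ the new arrival time

$$t' = \gamma\left(t - \frac{vx}{c^2}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.$$

If the signal traveled faster than light, so that $x > ct$, then any observer with $v/c$ bigger than $ct/x$ (which is still less than one) finds $t' < 0$: in that observer’s frame, the signal arrives before it was sent.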

You might guess that almost any theory would satisfy these constraints. But if you extend a theory to the smallest scales, some theories that otherwise seem sensible end up failing this test. Actually linking things up takes other conjectures about the mathematical form theories can have, conjectures that seem more solid than the ones underlying Swampland and naturalness constraints but that still can’t be conclusively proven. If you trust the conjectures, you can derive restrictions, often called positivity constraints when they demand that some set of observations is positive. There has been a renaissance in this kind of research over the last few years, including arguments that certain speculative theories of gravity can’t actually work.

A Tale of Two Experiments

Before I begin, two small announcements:

First: I am now on bluesky! Instead of having a separate link in the top menu for each social media account, I’ve changed the format so now there are social media buttons in the right-hand sidebar, right under the “Follow” button. Currently, they cover tumblr, twitter, and bluesky, but there may be more in future.

Second, I’ve put a bit more technical advice on my “Open Source Grant Proposal” post, so people interested in proposing similar research can have some ideas about how best to pitch it.

Now, on to the post:


Gravitational wave telescopes are possibly the most exciting research program in physics right now. Big, expensive machines with more on the way in the coming decades, gravitational wave telescopes need both precise theoretical predictions and high-quality data analysis. For some, gravitational wave telescopes have the potential to reveal genuinely new physics, to probe deviations from general relativity that might be related to phenomena like dark matter, though so far no such deviations have been conclusively observed. In the meantime, they’re teaching us new consequences of known physics. For example, the unusual population of black holes observed by LIGO has motivated those who model star clusters to consider processes in which the motion of three stars or black holes is related to each other, discovering that these processes are more important than expected.

Particle colliders are probably still exciting to the general public, but for many there is a growing sense of fatigue and disillusionment. Current machines like the LHC are big and expensive, and proposed future colliders would be even costlier and take decades to come online, in addition to requiring a huge amount of effort from the community in terms of precise theoretical predictions and data analysis. Some argue that colliders still might uncover genuinely new physics, deviations from the standard model that might explain phenomena like dark matter, but as no such deviations have yet been conclusively observed people are increasingly skeptical. In the meantime, most people working on collider physics are focused on learning new consequences of known physics. For example, by comparing observed results with theoretical approximations, people have found that certain high-energy processes usually left out of calculations are actually needed to get a good agreement with the data, showing that these processes are more important than expected.

…ok, you see what I did there, right? Was that fair?

There are a few key differences, with implications to keep in mind:

First, collider physics is significantly more expensive than gravitational wave physics. LIGO took about $300 million to build and spends about $50 million a year. The LHC took about $5 billion to build and costs $1 billion a year to run. That cost still puts both well below several other government expenses that you probably consider frivolous (please don’t start arguing about which ones in the comments!), but it does mean collider physics demands a bit of a stronger argument.

Second, the theoretical motivation to expect new fundamental physics out of LIGO is generally considered much weaker than for colliders. A large part of the theoretical physics community thought that they had a good argument why they should see something new at the LHC. In contrast, most theorists have been skeptical of the kinds of modified gravity theories that have dramatic enough effects that one could measure them with gravitational wave telescopes, with many of these theories having other pathologies or inconsistencies that made people wary.

Third, the general public finds astrophysics cooler than particle physics. Somehow, telling people “pairs of black holes collide more often than we thought because sometimes a third star in the neighborhood nudges them together” gets people much more excited than “pairs of quarks collide more often than we thought because we need to re-sum large logarithms differently”, even though I don’t think there’s a real “principled” difference between them. Neither reveals new laws of nature, both are upgrades to our ability to model how real physical objects behave, neither is useful to know for anybody living on Earth in the present day.

With all this in mind, my advice to gravitational wave physicists is to try, as much as possible, not to lean on stories about dark matter and modified gravity. You might learn something, and it’s worth occasionally mentioning that. But if you don’t, you run a serious risk of disappointing people. And you have such a big PR advantage if you just lean on new consequences of bog-standard GR that those results really should get the bulk of the news coverage if you want to keep the public on your side.

The Machine Learning for Physics Recipe

Last week, I went to a conference on machine learning for physics. Machine learning covers a huge variety of methods and ideas, several of which were on full display. But again and again, I noticed a pattern. The people who seemed to be making the best use of machine learning, the ones who were the most confident in their conclusions and getting the most impressive results, the ones who felt like they had a whole assembly line instead of just a prototype, all of them were doing essentially the same thing.

This post is about that thing. If you want to do machine learning in physics, these are the situations where you’re most likely to see a benefit. You can do other things, and they may work too. But this recipe seems to work over and over again.

First, you need simulations, and you need an experiment.

Your experiment gives you data, and that data isn’t easy to interpret. Maybe you’ve embedded a bunch of cameras in the Antarctic ice, and your data tells you when they trigger and how bright the light is. Maybe you’ve surrounded a particle collision with layers of silicon, and your data tells you how much electric charge the different layers absorb. Maybe you’ve got an array of telescopes focused on a black hole far far away, and your data are pixels gathered from each telescope.

You want to infer, from your data, what happened physically. Your cameras in the ice saw signs of a neutrino: you want to know how much energy it had and where it was coming from. Your silicon is absorbing particles: what kind are they, and what processes did they come from? The black hole might have the rings predicted by general relativity, but it might have weirder rings from a variant theory.

In each case, you can’t just calculate the answer you need. The neutrino streams past, interacting with the ice and lighting up the cameras in ways too messy for any simple formula. People can write down clean approximations for particles in the highest-energy part of a collision, but once those particles start cooling down the process becomes so messy that no straightforward formula describes them. Your array of telescopes fuzzes and pixellates, and the pieces have to be assembled together in a complicated way, so there is no one guaranteed answer for what they saw.

In each case, though, you can use simulations. If you specify in advance the energy and path of the neutrino, you can use a computer to predict how much light your cameras should see. If you know what particles you started with, you can run sophisticated particle physics code to see what “showers” of particles you eventually find. If you have the original black hole image, you can fuzz and pixellate and take it apart to match what your array of telescopes will do.

The problem is, for the real experiments, you don’t know the inputs in advance, and you can’t anticipate them. And simulations, while cheaper than experiments, aren’t cheap. You can’t run a simulation for every possible input and then check them against the experiments. You need to fill in the gaps: run some simulations, and then use some theory, statistical method, or human-tweaked guess to figure out how to interpret your experiments.

Or, you can use Machine Learning. You train a machine learning model, one well-suited to the task (anything from the old standby of boosted decision trees to an old fad of normalizing flows to the latest hotness of graph neural networks). You run a bunch of simulations, as many as you can reasonably afford, and you use that data for training, making a program that matches the input data you want to find with its simulated results. This program will be less reliable than your simulations, but it will run much faster. If it’s reliable enough, you can use it instead of the old human-made guesses and tweaks. You now have an efficient, reliable way to go from your raw experiment data to the physical questions you actually care about.
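Here is what that recipe can look like in code. This is a minimal sketch under made-up assumptions (a one-parameter toy “physics” problem and scikit-learn’s boosted decision trees), not any particular experiment’s pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def simulate(energy):
    """Stand-in for an expensive simulation: a true energy becomes ten noisy detector readouts."""
    response = energy * np.linspace(0.5, 1.5, 10)
    return response + rng.normal(scale=0.5, size=10)

# Run as many simulations as you can afford, at known "true" energies...
true_energies = rng.uniform(1.0, 100.0, size=5000)
simulated_readouts = np.array([simulate(e) for e in true_energies])

# ...and train a fast surrogate that maps detector readouts back to the energy.
model = GradientBoostingRegressor()
model.fit(simulated_readouts, true_energies)

# "Experimental" events (here, more simulations standing in for real data)
# can now be interpreted quickly, without a hand-tuned guess or a fresh simulation.
experiment = np.array([simulate(e) for e in (12.0, 47.0, 88.0)])
print(model.predict(experiment))
```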

Crucially, each of the elements in this recipe is essential.

You need a simulation. If you just have an experiment with no simulation, then you don’t have a way to interpret the results, and training a machine to reproduce the experiment won’t tell you anything new.

You need an experiment. If you just have simulations, training a machine to reproduce them also doesn’t tell you anything new. You need some reason to want to predict the results of the simulations, beyond just seeing what happens in between them, which the machine can’t really tell you anyway.

And you need to not have anything better than the simulation. If you have a theory where you can write out formulas for what happens then you don’t need machine learning, you can interpret the experiments more easily without it. This applies if you’ve carefully designed your experiment to measure something easy to interpret, like the ratio of rates of two processes that should be exactly the same.

These aren’t the only things you need. You also need to do the whole thing carefully enough that you understand your uncertainties well: not just what the machine predicts, but how often it gets it wrong, and whether it’s likely to do something strange when you use it on the actual experiment. But if you can do that, you have a reliable recipe, one many people have followed successfully before. You have a good chance of making things work.

This isn’t the only way physicists can use machine learning. There are people looking into something more akin to what’s called unsupervised learning, where you look for strange events in your data as clues for what to investigate further. And there are people like me, trying to use machine learning on the mathematical side, to guess new formulas and new heuristics. There is likely promise in many of these approaches. But for now, they aren’t a recipe.

HAMLET-Physics 2024

Back in January, I announced I was leaving France and leaving academia. Since then, it hasn’t made much sense for me to go to conferences, even the big conference of my sub-field or the conference I organized.

I did go to a conference this week, though. I had two excuses:

  1. The conference was here in Copenhagen, so no travel required.
  2. The conference was about machine learning.

HAMLET-Physics, or How to Apply Machine Learning to Experimental and Theoretical Physics, had the additional advantage of having an amusing acronym. Thanks to generous support by Carlsberg and the Danish Data Science Academy, they could back up their choice by taking everyone on a tour of Kronborg (better known in the English-speaking world as Elsinore).

This conference’s purpose was to bring together physicists who use machine learning, machine learning-ists who might have something useful to say to those physicists, and other physicists who don’t use machine learning yet but have a sneaking suspicion they might have to at some point. As a result, the conference was super-interdisciplinary, with talks by people addressing very different problems with very different methods.

Interdisciplinary conferences are tricky. It’s easy for the different groups of people to just talk past each other: everyone shows up, gives the same talk they always do, socializes with the same friends they always meet, then leaves.

There were a few talks that fit that mold, and were so technical that only a few people understood them. But most were better. The majority of the speakers did really well at presenting their work in a way that would be understandable and even exciting to people outside their field, while still having enough detail that we all learned something. I was particularly impressed by Thea Aarestad’s keynote talk on Tuesday, a really engaging view of how machine learning can be used under the extremely tight time constraints within which LHC experiments need to decide whether to record incoming data.

For the social aspect, the organizers had a cute/gimmicky/machine-learning-themed solution. Based on short descriptions and our public research profiles, they clustered attendees, plotting the connections between them. They then used ChatGPT to write conversation prompts between any two people on the basis of their shared interests. In practice, this turned out to be amusing but totally unnecessary. We were drawn to speak to each other not by conversation prompts, but by a drive to learn from each other. “Why do you do it that way?” was a powerful conversation-starter, as was “what’s the best way to do this?” Despite the different fields, the shared methodologies gave us strong reasons to talk, and meant that people were very rarely motivated to pick one of ChatGPT’s “suggestions”.

Overall, I got a better feeling for how machine learning is useful in physics (and am planning a post on that in future). I also got some fresh ideas for what to do myself, and a bit of a picture of what the future holds in store.

(Not At) Amplitudes 2024 at the IAS

For over a decade, I studied scattering amplitudes, the formulas particle physicists use to find the probability that particles collide, or scatter, in different ways. I went to Amplitudes, the field’s big yearly conference, every year from 2015 to 2023.

This year is different. I’m on the way out of the field, looking for my next steps. Meanwhile, Amplitudes 2024 is going full speed ahead at the Institute for Advanced Study in Princeton.

With poster art that is, as the kids probably don’t say anymore, “on fleek”

The talks aren’t live-streamed this year, but they are posting slides, and they will be posting recordings. Since a few of my readers are interested in new amplitudes developments, I’ve been paging through the posted slides looking for interesting highlights. So far, I’ve only seen slides from the first few days: I will probably write about the later talks in a future post.

Each day of Amplitudes this year has two 45-minute “review talks”, one first thing in the morning and the other first thing after lunch. I put “review talks” in quotes because they vary a lot, between talks that try to introduce a topic for the rest of the conference to talks that mostly focus on the speaker’s own research. Lorenzo Tancredi’s talk was of the former type, an introduction to the many steps that go into making predictions for the LHC, with a focus on those topics where amplitudeologists have made progress. The talk opens with the type of motivation I’d been writing in grant and job applications over the last few years (we don’t know most of the properties of the Higgs yet! To measure them, we’ll need to calculate amplitudes with massive particles to high precision!), before moving into a review of the challenges and approaches in different steps of these calculations. While Tancredi apologizes in advance that the talk may be biased, I found it surprisingly complete: if you want to get an idea of the current state of the “LHC amplitudes pipeline”, his slides are a good place to start.

Tancredi’s talk serves as introduction for a variety of LHC-focused talks, some later that day and some later in the week. Federica Devoto discussed high-energy quarks while Chiara Signorile-Signorile and George Sterman showed advances in handling of low-energy particles. Xiaofeng Xu has a program that helps predict symbol letters, the building-blocks of scattering amplitudes that can be used to reconstruct or build up the whole thing, while Samuel Abreu talked about a tricky state-of-the-art case where Xu’s program misses part of the answer.

Later Monday morning veered away from the LHC to focus on more toy-model theories. Renata Kallosh’s talk in particular caught my attention. This blog is named after a long-standing question in amplitudes: will the four-graviton amplitude in N=8 supergravity diverge at seven loops in four dimensions? This seemingly arcane question is deep down a question about what is actually required for a successful theory of quantum gravity, and in particular whether some of the virtues of string theory can be captured by a simpler theory instead. Answering the question requires a prodigious calculation, and the more “loops” are involved the more difficult it is. Six years ago, the calculation got to five loops, and it hasn’t passed that mark since then. That five-loop calculation gave some reason for pessimism, a nice pattern at lower loops that stopped applying at five.

Kallosh thinks she has an idea of what to expect. She’s noticed a symmetry in supergravity, one that hadn’t previously been taken into account. She thinks that symmetry should keep N=8 supergravity from diverging on schedule…but only in exactly four dimensions. All of the lower-loop calculations in N=8 supergravity diverged in higher dimensions than four, and it seems like with this new symmetry she understands why. Her suggestion is to focus on other four-dimensional calculations. If seven loops is still too hard, then dialing back the amount of supersymmetry from N=8 to something lower should let her confirm her suspicions. Already a while back N=5 supergravity was found to diverge later than expected in four dimensions. She wants to know whether that pattern continues.

(Her backup slides also have a fun historical point: in dimensions greater than four, you can’t get elliptical planetary orbits. So four dimensions is special for our style of life.)

Other talks on Monday included a talk by Zahra Zahraee on progress towards “solving” the field’s favorite toy model, N=4 super Yang-Mills. Christian Copetti talked about the work I mentioned here, while Meta employee François Charton’s “review talk” dealt with his work applying machine learning techniques to “translate” between questions in mathematics and their answers. In particular, he reported progress with my current boss Matthias Wilhelm and frequent collaborator and mentor Lance Dixon on using transformers to guess high-loop formulas in N=4 super Yang-Mills. They have an interesting proof of principle now, but it will probably still be a while until they can use the method to predict something beyond the state of the art.

In the meantime at least they have some hilarious AI-generated images

Tuesday’s review by Ian Moult was genuinely a review, but of a topic not otherwise covered at the conference, that of “detector observables”. The idea is that rather than talking about which individual particles are detected, one can ask questions that make more sense in terms of the experimental setup, like asking about the amounts of energy deposited in different detectors. This type of story has gone from an idle observation by theorists to a full research program, with theorists and experimentalists in active dialogue.

Natalia Toro brought up that, while we say each particle has a definite spin, that may not actually be the case. Particles with so-called “continuous spins” can masquerade as particles with a definite integer spin at lower energies. Toro and Schuster promoted this view of particles ten years ago, but now can make a bit more sense of it, including understanding how continuous-spin particles can interact.

The rest of Tuesday continued to be a bit of a grab-bag. Yael Shadmi talked about applying amplitudes techniques to Effective Field Theory calculations, while Franziska Porkert talked about a Feynman diagram involving two different elliptic curves. Interestingly (well, to me at least), the curves never appear “together”: you can represent the diagram as a sum of terms involving one curve and terms involving the other, which is much simpler than it could have been!

Tuesday afternoon’s review talk by Iain Stewart was one of those “guest from an adjacent field” talks, in this case from an approach called SCET, and at first glance didn’t seem to do much to reach out to the non-SCET people in the audience. Frequent past collaborator of mine Andrew McLeod showed off a new set of relations between singularities of amplitudes, found by digging into the structure of the equations discovered by Landau that control this behavior. He and his collaborators are proposing a new way to keep track of these things involving “minimal cuts”, a clear pun on the “maximal cuts” that have been of great use to other parts of the community. Whether this has more or less staying power than “negative geometries” remains to be seen.

Closing Tuesday, Shruti Paranjape showed there was more to discover about the simplest amplitudes, called “tree amplitudes”. By asking why these amplitudes are sometimes equal to zero, she was able to draw a connection to the “double-copy” structure that links the theory of the strong force and the theory of gravity. Johannes Henn’s talk pointed out an intriguing pattern. A while back, I had looked into under which circumstances amplitudes were positive. Henn found that “positive” is an understatement. In a certain region, the amplitudes we were looking at turn out not just to be positive, but also always decreasing, and with second derivative always positive. In fact, the derivatives appear to alternate, always with one sign or the other as one takes more derivatives. Henn is calling this unusual property “completely monotonous”, and trying to figure out how widely it holds.
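For the mathematically inclined: that pattern matches the textbook notion of a completely monotone function (my gloss, not a quote from the talk), a function $f$ whose derivatives alternate in sign,

$$f(x) \ge 0, \quad f'(x) \le 0, \quad f''(x) \ge 0, \quad \ldots, \qquad \text{i.e.} \quad (-1)^n f^{(n)}(x) \ge 0 \ \text{ for all } n = 0, 1, 2, \ldots$$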

Wednesday had a more mathematical theme. Bernd Sturmfels began with a “review talk” that largely focused on his own work on the space of curves with marked points, including a surprising analogy between amplitudes and the likelihood functions one needs to minimize in machine learning. Lauren Williams was the other “actual mathematician” of the day, and covered her work on various topics related to the amplituhedron.

The remaining talks on Wednesday were not literally by mathematicians, but were “mathematically informed”. Carolina Figueiredo and Hayden Lee talked about work with Nima Arkani-Hamed on different projects. Figueiredo’s talk covered recent developments in the “curve integral formalism”, a recent step in Nima’s quest to geometrize everything in sight, this time in the context of more realistic theories. The talk, which like Nima’s own talks used tablet-written slides, described new insights one can gain from this picture, including new pictures of how more complicated amplitudes can be built up from simpler ones. If you want to understand the curve integral formalism further, I’d actually suggest instead looking at Mark Spradlin’s slides from later that day. The second part of Spradlin’s talk dealt with an area Figueiredo marked for future research, including fermions in the curve integral picture. I confess I’m still not entirely sure what the curve integral formalism is good for, but Spradlin’s talk gave me a better idea of what it’s doing. (The first part of his talk was on a different topic, exploring the space of string-like amplitudes to figure out which ones are actually consistent.)

Hayden Lee’s talk mentions the emergence of time, but the actual story is a bit more technical. Lee and collaborators are looking at cosmological correlators, observables like scattering amplitudes but for cosmology. Evaluating these is challenging with standard techniques, but can be approached with some novel diagram-based rules which let the results be described in terms of the measurable quantities at the end in a kind of “amplituhedron-esque” way.

Aidan Herderschee and Mariana Carrillo González had talks on Wednesday on ways of dealing with curved space. Herderschee talked about how various amplitudes techniques need to be changed to deal with amplitudes in anti-de-Sitter space, with difference equations replacing differential equations and sum-by-parts relations replacing integration-by-parts relations. Carrillo González looked at curved space through the lens of a special kind of toy model theory called a self-dual theory, which allowed her to do cosmology-related calculations using a double-copy technique.

Finally, Stephen Sharpe had the second review talk on Wednesday. This was another “outside guest” talk, a discussion from someone who does Lattice QCD about how they have been using their methods to calculate scattering amplitudes. They seem to count the number of particles a bit differently than we do; I’m curious whether this came up in the question session.

The Hidden Higgs

Peter Higgs, the theoretical physicist whose name graces the Higgs boson, died this week.

Peter Higgs, after the Higgs boson discovery was confirmed

This post isn’t an obituary: you can find plenty of those online, and I don’t have anything special to say that others haven’t. Reading the obituaries, you’ll notice they summarize Higgs’s contribution in different ways. Higgs was one of the people who proposed what today is known as the Higgs mechanism, the principle by which most (perhaps all) elementary particles gain their mass. He wasn’t the only one: Robert Brout and François Englert proposed essentially the same idea in a paper that was published two months earlier, in August 1964. Two other teams came up with the idea slightly later than that: Gerald Guralnik, Carl Richard Hagen, and Tom Kibble were published one month after Higgs, while Alexander Migdal and Alexander Polyakov found the idea independently in 1965 but couldn’t get it published till 1966.

Higgs did, however, do something that Brout and Englert didn’t. His paper doesn’t just propose a mechanism, involving a field which gives particles mass. It also proposes a particle one could discover as a result. Read the more detailed obituaries, and you’ll discover that this particle was not in the original paper: Higgs’s paper was rejected at first, and he added the discussion of the particle to make it more interesting.

At this point, I bet some of you are wondering what the big deal was. You’ve heard me say that particles are ripples in quantum fields. So shouldn’t we expect every field to have a particle?

Tell that to the other three Higgs bosons.

Electromagnetism has one type of charge, with two signs: plus, and minus. There are electrons, with negative charge, and their anti-particles, positrons, with positive charge.

Quarks have three types of charge, called colors: red, green, and blue. Each of these also has two “signs”: red and anti-red, green and anti-green, and blue and anti-blue. So for each type of quark (like an up quark), there are six different versions: red, green, and blue, and anti-quarks with anti-red, anti-green, and anti-blue.

Diagram of the colors of quarks

When we talk about quarks, we say that the force under which they are charged, the strong nuclear force, is an “SU(3)” force. The “S” and “U” there are shorthand for mathematical properties that are a bit too complicated to explain here, but the “(3)” is quite simple: it means there are three colors.

The Higgs boson’s primary role is to make the weak nuclear force weak, by making the particles that carry it from place to place massive. (That way, it takes too much energy for them to go anywhere, a feeling I think we can all relate to.) The weak nuclear force is an “SU(2)” force. So there should be two “colors” of particles that interact with the weak nuclear force…which includes Higgs bosons. For each, there should also be an anti-color, just like the quarks had anti-red, anti-green, and anti-blue. So we need two “colors” of Higgs bosons, and two “anti-colors”, for a total of four!

But the Higgs boson discovered at the LHC was a neutral particle. It didn’t have any electric charge, or any color. There was only one, not four. So what happened to the other three Higgs bosons?

The real answer is subtle, one of those physics things that’s tricky to concisely explain. But a partial answer is that they’re indistinguishable from the W and Z bosons.

Normally, the fundamental forces have transverse waves, with two polarizations. Light can wiggle side to side or up and down, but it can’t wiggle forward and backward along its path. A fundamental force with massive particles is different, because they can have longitudinal waves: they have an extra direction in which they can wiggle. There are two W bosons (plus and minus) and one Z boson, and they all get one more polarization when they become massive due to the Higgs.

That’s three new ways the W and Z bosons can wiggle. That’s the same number as the number of Higgs bosons that went away, and that’s no coincidence. We physicists like to say that the W and Z bosons “ate” the extra Higgs, which is evocative but may sound mysterious. Instead, you can think of it as the two wiggles being secretly the same, mixing together in a way that makes them impossible to tell apart.

The “count”, of how many wiggles exist, stays the same. You start with four Higgs wiggles, and two wiggles each for the precursors of the W+, W-, and Z bosons, giving ten. You end up with one Higgs wiggle, and three wiggles each for the W+, W-, and Z bosons, which still adds up to ten. But which fields match with which wiggles, and thus which particles we can detect, changes. It takes some thought to look at the whole system and figure out, for each field, what kind of particle you might find.
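Written out as a quick tally, that count (just restating the numbers above) is

$$\underbrace{4}_{\text{Higgs field}} + \underbrace{3 \times 2}_{\text{massless } W^+,\, W^-,\, Z} \;=\; 10 \;=\; \underbrace{1}_{\text{Higgs boson}} + \underbrace{3 \times 3}_{\text{massive } W^+,\, W^-,\, Z}.$$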

Higgs did that work. And now, we call it the Higgs boson.

What Are Particles? The Gentle Introduction

On this blog, I write about particle physics for the general public. I try to make things as simple as possible, but I do have to assume some things. In particular, I usually assume you know what particles are!

This time, I won’t do that. I know some people out there don’t know what a particle is, or what particle physicists do. If you’re a person like that, this post is for you! I’m going to give a gentle introduction to what particle physics is all about.

Let’s start with atoms.

Every object and substance around you, everything you can touch or lift or walk on, the water you drink and the air you breathe, all of these are made up of atoms. Some are simple: an iron bar is made of Iron atoms, and aluminum foil is mostly Aluminum atoms. Some are made of combinations of atoms into molecules, like water’s famous H2O: each molecule has two Hydrogen atoms and one Oxygen atom. Some are made of more complicated mixtures: air is mostly pairs of Nitrogen atoms, with a healthy amount of pairs of Oxygen, some Carbon Dioxide (CO2), and many other things, while the concrete sidewalks you walk on have Calcium, Silicon, Aluminum, Iron, and Oxygen, all combined in various ways.

There is a dizzying array of different types of atoms, called chemical elements. Most occur in nature, but some are man-made, created by cutting-edge nuclear physics. They can all be organized in the periodic table of elements, which you’ve probably seen on a classroom wall.

The periodic table

The periodic table is called the periodic table because it repeats, periodically. Each element is different, but their properties resemble each other. Oxygen is a gas, Sulfur a yellow powder, Polonium an extremely radioactive metal…but just as you can find H2O, you can make H2S, and even H2Po. The elements get heavier as you go down the table, and more metal-like, but their chemical properties, the kinds of molecules you can make with them, repeat.

Around 1900, physicists started figuring out why the elements repeat. What they discovered is that each atom is made of smaller building-blocks, called sub-atomic particles. (“Sub-atomic” because they’re smaller than atoms!) Each atom has electrons on the outside, and on the inside has a nucleus made of protons and neutrons. Atoms of different elements have different numbers of protons and electrons, which explains their different properties.

Different atoms with different numbers of protons, neutrons, and electrons

Around the same time, other physicists studied electricity, magnetism, and light. These things aren’t made up of atoms, but it was discovered that they are all aspects of the same force, the electromagnetic force. And starting with Einstein, physicists figured out that this force has particles too. A beam of light is made up of another type of sub-atomic particle, called a photon.

For a little while then, it seemed that the universe was beautifully simple. All of matter was made of electrons, protons, and neutrons, while light was made of photons.

(There’s also gravity, of course. That’s more complicated, in this post I’ll leave it out.)

Soon, though, nuclear physicists started noticing stranger things. In the 1930’s, as they tried to understand the physics behind radioactivity and mapped out rays from outer space, they found particles that didn’t fit the recipe. Over the next forty years, theoretical physicists puzzled over their equations, while experimental physicists built machines to slam protons and electrons together, all trying to figure out how they work.

Finally, in the 1970’s, physicists had a theory they thought they could trust. They called this theory the Standard Model. It organized their discoveries, and gave them equations that could predict what future experiments would see.

In the Standard Model, there are two new forces, the weak nuclear force and the strong nuclear force. Just like photons for the electromagnetic force, each of these new forces has a particle. The general word for these particles is bosons, named after Satyendra Nath Bose, a collaborator of Einstein who figured out the right equations for this type of particle. The weak force has bosons called W and Z, while the strong force has bosons called gluons. A final type of boson, called the Higgs boson after a theorist who suggested it, rounds out the picture.

The Standard Model also has new types of matter particles. Neutrinos interact with the weak nuclear force, and are so light and hard to catch that they pass through nearly everything. Quarks are inside protons and neutrons: a proton contains one down quark and two up quarks, while a neutron contains two down quarks and one up quark. The quarks explained all of the other strange particles found in nuclear physics.

Finally, the Standard Model, like the periodic table, repeats. There are three generations of particles. The first, with electrons, up quarks, down quarks, and one type of neutrino, shows up in ordinary matter. The other generations are heavier, and not usually found in nature except in extreme conditions. The second generation has muons (similar to electrons), strange quarks, charm quarks, and a new type of neutrino called a muon-neutrino. The third generation has tauons, bottom quarks, top quarks, and tau-neutrinos.

(You can call these last quarks “truth quarks” and “beauty quarks” instead, if you like.)

Physicists had the equations, but the equations still had some unknowns. They didn’t know how heavy the new particles were, for example. Finding those unknowns took more experiments, over the next forty years. Finally, in 2012, the last unknown was found when a massive machine called the Large Hadron Collider was used to measure the Higgs boson.

The Standard Model

We think that these particles are all elementary particles. Unlike protons and neutrons, which are both made of up quarks and down quarks, we think that the particles of the Standard Model are not made up of anything else, that they really are elementary building-blocks of the universe.

We have the equations, and we’ve found all the unknowns, but there is still more to discover. We haven’t seen everything the Standard Model can do: to see some properties of the particles and check they match, we’d need a new machine, one even bigger than the Large Hadron Collider. We also know that the Standard Model is incomplete. There is at least one new particle, called dark matter, that can’t be any of the known particles. Mysteries involving the neutrinos imply another type of unknown particle. We’re also missing deeper things. There are patterns in the table, like the generations, that we can’t explain.

We don’t know if any one experiment will work, or if any one theory will prove true. So particle physicists keep working, trying to find new tricks and make new discoveries.