
LHC Black Hole Reassurance: The Professional Version

A while back I wrote a post trying to reassure you that the Large Hadron Collider cannot create a black hole that could destroy the Earth. If you’re the kind of person who worries about this kind of thing, you’ve probably heard a variety of arguments: that it hasn’t happened yet, despite the LHC running for quite some time; that it didn’t happen before the LHC, despite cosmic rays of comparable energy; and that a black hole that small would quickly decay due to Hawking radiation. I thought it would be nice to give a different sort of argument, a back-of-the-envelope calculation you can try out yourself, showing that even if a black hole were produced using all of the LHC’s energy and fell directly into the center of the Earth, and even if Hawking radiation didn’t exist, it would still take longer than the lifetime of the universe to cause any detectable damage. Modeling the black hole as falling through the Earth and just slurping up everything that falls into its event horizon, it wouldn’t even double in size before the stars burn out.
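If you want to try the numbers yourself, here is a minimal sketch of that back-of-the-envelope estimate in Python. The inputs (the LHC’s roughly 14 TeV collision energy, the Earth’s average density, a rough 10 km/s speed for the black hole falling through the Earth) are approximate values I’ve chosen for illustration:

```python
import math

# Rough inputs, chosen for a back-of-the-envelope estimate
G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                 # speed of light, m/s
E_lhc = 14e12 * 1.602e-19   # ~14 TeV collision energy, in joules
rho_earth = 5500            # average density of the Earth, kg/m^3
v = 1e4                     # rough speed while falling through the Earth, m/s

m = E_lhc / c**2                    # black hole mass from E = mc^2
r_s = 2 * G * m / c**2              # Schwarzschild radius of that mass
area = math.pi * r_s**2             # cross-section of the event horizon
growth_rate = rho_earth * area * v  # mass slurped up per second, kg/s

doubling_time = m / growth_rate     # seconds to eat its own mass again
age_of_universe = 4.35e17           # seconds, roughly 13.8 billion years

print(f"mass: {m:.1e} kg, horizon radius: {r_s:.1e} m")
print(f"doubling time: {doubling_time:.1e} s, "
      f"or {doubling_time / age_of_universe:.1e} ages of the universe")
```

With these rough inputs the doubling time comes out around 10^68 seconds, some fifty orders of magnitude longer than the universe has existed so far.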

That calculation was extremely simple by physics standards. As it turns out, it was too simple. A friend of mine started thinking harder about the problem, and dug up this paper from 2008: Astrophysical implications of hypothetical stable TeV-scale black holes.

Before the LHC even turned on, the experts were hard at work studying precisely this question. The paper has two authors, Steve Giddings and Michelangelo Mangano. Giddings is an expert on the problem of quantum gravity, while Mangano is an expert on LHC physics, so the two are exactly the dream team you’d ask for to answer this question. Like me, they pretend that black holes don’t decay due to Hawking radiation, and pretend that one falls straight from the LHC to the center of the Earth, to get the most pessimistic possible scenario.

Unlike me, but like my friend, they point out that the Earth is not actually a uniform sphere of matter. It’s made up of particles: quarks arranged into nucleons arranged into nuclei arranged into atoms. And a black hole that hits a nucleus will probably not just slurp up an event horizon-sized chunk of the nucleus: it will slurp up the whole nucleus.

This in turn means that the black hole starts out growing much faster. Eventually, it slows down again: once it’s bigger than an atom, it starts gobbling up atoms a few at a time, until eventually it is back to slurping up a cylinder of the Earth’s material as it passes through.

But an atom-sized black hole will grow faster than an LHC-energy-sized black hole. How much faster is estimated in the Giddings and Mangano paper, and it depends on the number of dimensions. For eight dimensions, we’re safe. For fewer, they need new arguments.

Wait a minute, you might ask, aren’t there only four dimensions? Is this some string theory nonsense?

Kind of, yes. In order for the LHC to produce black holes, gravity would need to have a much stronger effect than we expect on subatomic particles. That requires something weird, and the most plausible such weirdness people considered at the time was extra dimensions. With extra dimensions of the right size, the LHC might have produced black holes. It’s that kind of scenario that Giddings and Mangano are checking: they don’t know of a plausible way for black holes to be produced at the LHC if there are just four dimensions.

For fewer than eight dimensions, though, they have a problem: the back-of-the-envelope calculation suggests black holes could actually grow fast enough to cause real damage. Here, they fall back on the other type of argument: if this could happen, would it have happened already? They argue that, if the LHC could produce black holes in this way, then cosmic rays could produce black holes when they hit super-dense astronomical objects, such as white dwarfs and neutron stars. Those black holes would eat up the white dwarfs and neutron stars, in the same way one might be worried they could eat up the Earth. But we can observe that white dwarfs and neutron stars do in fact exist, and typically live much longer than they would if they were constantly being eaten by miniature black holes. So we can conclude that dangerous black holes like this aren’t being produced, and we’re safe.

If you’ve got a smattering of physics knowledge, I encourage you to read through the paper. They consider a lot of different scenarios, many more than I can summarize in a post. I don’t know if you’ll find it reassuring, since they may not cover whatever you happen to be worried about. But it’s a lot of fun seeing how the experts handle the problem.

Models, Large Language and Otherwise

In particle physics, our best model goes under the unimaginative name “Standard Model”. The Standard Model models the world in terms of interactions of different particles, or more properly quantum fields. The fields have different masses and interact with different strengths, and each mass and interaction strength is a parameter: a “free” number in the model, one we have to fix based on data. There are nineteen parameters in the Standard Model (not counting the parameters for massive neutrinos, which were discovered later).

In principle, we could propose a model with more parameters that fits the data better. With enough parameters, one can fit almost anything. That’s cheating, though, and it’s a type of cheating we know how to catch. We have statistical tests that let us estimate how impressed we should be when a model matches the data. If a model is just getting ahead on extra parameters without capturing something real, we can spot that, because it gets a worse score on those tests. A model with a bad score might match the data you used to fix its parameters, but it won’t predict future data, so it isn’t actually useful. Right now the Standard Model (plus neutrino masses) gets the best score on those tests, when fitted to all the data we have access to, so we think of it as our best and most useful model. If someone proposed a model that got a better score, we’d switch, but so far no one has managed.
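As a toy illustration of how such a test can penalize extra parameters (a stand-in for the much more careful statistics particle physicists actually use), here’s a sketch in Python comparing polynomial fits of different complexity with the Akaike Information Criterion, which docks a model two points for each parameter it uses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake "data": a straight line plus noise
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

def aic(degree):
    """Fit a polynomial of the given degree and return its AIC score.
    Lower is better; each extra parameter adds a penalty of 2."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    n, k = x.size, degree + 1
    log_likelihood = -0.5 * n * np.log(np.sum(residuals**2) / n)
    return 2 * k - 2 * log_likelihood

for degree in (1, 3, 8):
    print(f"degree {degree}: AIC = {aic(degree):.1f}")
```

The higher-degree polynomials match the noisy points a little more closely, but the penalty for their extra parameters almost always outweighs that improvement, so the simple line ends up with the best (lowest) score.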

Physicists care about this not just because a good model is useful. We think that the best model is, in some sense, how things really work. The fact that the Standard Model fits the data best doesn’t just mean we can use it to predict more data in the future: it means that somehow, deep down, the world is made up of quantum fields the way the Standard Model describes.

If you’ve been following developments in machine learning, or AI, you might have heard the word “model” slung around. For example, GPT is a Large Language Model, or LLM for short.

Large Language Models are more like the Standard Model than you might think. Just as the Standard Model models the world in terms of interacting quantum fields, Large Language Models model the world in terms of a network of connections between artificial “neurons”. Just as particles have different interaction strengths, pairs of neurons have different connection weights. Those connection weights are the parameters of a Large Language Model, in the same way that the masses and interaction strengths of particles are the parameters of the Standard Model. The parameters for a Large Language Model are fixed by a giant corpus of text data, almost the whole internet reduced to a string of bytes that the LLM needs to match, in the same way the Standard Model needs to match data from particle collider experiments. The Standard Model has nineteen parameters; Large Language Models have billions.
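To make the “parameters” analogy concrete, here’s a toy count of the connection weights in a small fully-connected network. The layer sizes are arbitrary numbers chosen for illustration; real LLMs are vastly bigger and organized differently:

```python
def count_parameters(layer_sizes):
    """Count weights and biases in a fully-connected network.
    Each pair of adjacent layers contributes one weight per
    connection, plus one bias per neuron in the later layer."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out
    return total

# A toy network: 1,000 inputs, two hidden layers of 512, 10 outputs
print(count_parameters([1000, 512, 512, 10]))  # prints 780298
```

Even this toy network has close to a million numbers to fix from data; scale the layers up and stack on the machinery of a real transformer, and you quickly reach the billions mentioned above.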

Increasingly, machine learning models seem to capture things better than other types of models. If you want to know how a protein is going to fold, you can try to make a simplified model of how its atoms and molecules interact with each other…but instead, you can make your model a neural network. And that turns out to work better. If you’re a bank and you want to know how many of your clients will default on their loans, you could ask an economist to make a macroeconomic model…or, you can just make your model a neural network too.

In physics, we think that the best model is the model that is closest to reality. Clearly, though, this can’t be what’s going on here. Real proteins don’t fold based on neural networks, and neither do real economies. Both economies and folding proteins are very complicated, so any model we can use right now won’t be what’s “really going on”, unlike the comparatively simple world of particle physics. Still, it seems weird that neural networks can work better than the simplified economic or chemical models, even though they’re very obviously not really what’s going on. Is there another way to think about them?

I used to get annoyed at people using the word “AI” to refer to machine learning models. In my mind, AI was the thing that shows up in science fiction, machines that can think as well or better than humans. (The actual term of art for this is AGI, artificial general intelligence.) Machine learning, and LLMs in particular, felt like a meaningful step towards that kind of AI, but they clearly aren’t there yet.

Since then, I’ve been convinced that the term isn’t quite so annoying. The AI field isn’t called AI because it’s creating a human-equivalent sci-fi intelligence. It’s called AI because the things it builds are inspired by how human intelligence works.

As humans, we model things with mathematics, but we also model them with our own brains. Consciously, we might think about objects and their places in space, about people and their motivations and actions, about canonical texts and their contents. But all of those things cash out in our neurons. Anything we think, anything we believe, any model we can actually apply by ourselves in our own lives, is a model embedded in a neural network. It’s a much more complicated neural network than an LLM, but it’s very much still a kind of neural network.

Because humans are alright at modeling a variety of things, because we can see and navigate the world and persuade and manipulate each other, we know that neural networks can do these things. A human brain may not be the best model for any given phenomenon: an engineer can model the flight of a baseball with math much better than the best baseball player can with their unaided brain. But human brains still tend to be fairly good models for a wide variety of things. Evolution has selected them to be.

So with that in mind, it shouldn’t be too surprising that neural networks can model things like protein folding. Even if proteins don’t fold based on neural networks, even if the success of AlphaFold isn’t capturing the actual details of the real world the way the Standard Model does, the model is capturing something. It’s loosely capturing the way a human would think about the problem, if you gave that human all the data they needed. And humans are, and remain, pretty good at thinking! So we have reason, not rigorous, but at least intuitive reason, to think that neural networks will actually be good models of things.

A Significant Calculation

Particle physicists have a weird relationship to journals. We publish all our results for free on a website called the arXiv, and when we need to read a paper that’s the first place we look. But we still submit our work to journals, because we need some way to vouch that we’re doing good work. Explicit numbers (h-index, impact factor) are falling out of favor, but we still need to demonstrate that we get published in good journals, that we do enough work, and that work has an impact on others. We need it to get jobs, to get grants to fund research at those jobs, and to get future jobs for the students and postdocs we hire with those grants. Our employers need it to justify their own funding, to summarize their progress so governments and administrators can decide who gets what.

This can create weird tensions. When people love a topic, they want to talk about it with each other. They want to say all sorts of things, big and small, to contribute new ideas and correct others and move things forward. But as professional physicists, we also have to publish papers. We can publish some “notes”, little statements on the arXiv that we don’t plan to make into a paper, but we don’t really get “credit” for those. So in practice, we try to force anything we want to say into a paper-sized chunk.

That wouldn’t be a problem if paper-sized chunks were flexible, and you can see when journals historically tried to make them that way. Some journals publish “letters”, short pieces a few pages long, to contrast with their usual papers that can run from twenty to a few hundred pages. These “letters” tend to be viewed as prestigious, though, so they end up being judged on roughly the same standards as the normal papers, if not more strictly.

What standards are those? For each journal, you can find an official list. The Journal of High-Energy Physics, for example, instructs reviewers to look for “high scientific quality, originality and relevance”. That rules out papers that just reproduce old results, but otherwise is frustratingly vague. What constitutes high scientific quality? Relevant to whom?

In practice, reviewers use a much fuzzier criterion: is this “paper-like”? Does this look like other things that get published, or not?

Each field will assess that differently. It’s a criterion of familiarity, of whether a paper looks like what people in the field generally publish. In my field, one rule of thumb is that a paper must contain a significant calculation.

A “significant calculation” is still quite fuzzy, but the idea is to make sure that a paper requires some amount of actual work. Someone has to do something challenging, and the work shouldn’t be half-done: as much as feasible, they should finish, and calculate something new. Ideally, this should be something that nobody had calculated before, but if the perspective is new enough it can be something old. It should “look hard”, though.

That’s a fine way to judge whether someone is working hard, which is something we sometimes want to judge. But since we’re incentivized to make everything into a paper, this means that every time we want to say something, we want to accompany it with some “significant calculation”, some concrete time-consuming work. This can happen even if we want to say something that’s quite direct and simple, a fact that can be quickly justified but nonetheless has been ignored by the field. If we don’t want it to be “just” an un-credited note, we have to find some way to turn it into a “significant calculation”. We do extra work, sometimes pointless work, in order to make something “paper-sized”.

I like to think about what academia would be like without the need to fill out a career. The model I keep imagining is that of a web forum or a blogging platform. There would be the big projects, the in-depth guides and effortposts. But there would also be shorter contributions, people building off each other, comments on longer pieces and quick alerts pinned to the top of the page. We’d have a shared record of knowledge, where everyone contributes what they want to whatever level of detail they want.

I think math is a bit closer to this ideal. Despite their higher standards for review, checking the logic of every paper to make sure it makes sense to publish, math papers can sometimes be very short, or on apparently trivial things. Physics doesn’t quite work this way, and I suspect part of it is our funding sources. If you’re mostly paid to teach, like many mathematicians, your research is more flexible. If you’re paid to research, like many physicists, then people want to make sure your research is productive, and that tends to cram it into measurable boxes.

In today’s world, I don’t think physics can shift cultures that drastically. Even as we build new structures to rival the journals, the career incentives remain. Physics couldn’t become math unless it shed most of the world’s physicists.

In the long run, though…well, we may one day find ourselves in a world where we don’t have to work all our days to keep each other alive. And if we do, hopefully we’ll change how scientists publish.

IPhT-60 Retrospective

Last week, my institute had its 60th anniversary party, which like every party in academia takes the form of a conference.

For unclear reasons, this one also included a physics-themed arcade game machine.

Going in, I knew very little about the history of the Institute of Theoretical Physics, of the CEA it’s part of (the Commissariat for Atomic Energy, now for Atomic and Alternative Energies), or of French physics in general, so I found the first few talks very interesting. I learned that in France in the early 1950’s, theoretical physics was quite neglected. Key developments, like relativity and statistical mechanics, were seen as “too German” due to their origins with Einstein and Boltzmann (never mind that this was precisely why the Nazis thought they were “not German enough”), while de Broglie suppressed investigation of quantum mechanics. It took French people educated abroad to come back and jumpstart progress.

The CEA is, in a sense, the French equivalent of some of the US’s national labs, and like them it got its start as part of a national push towards nuclear weapons and nuclear power.

(Unlike the US’s national labs, the CEA is technically a private company. It’s not even a non-profit: there are for-profit components that sell services and technology to the energy industry. Never fear, my work remains strictly useless.)

My official title is Ingénieur Chercheur, research engineer. In the early days, that title was more literal. Most of the CEA’s first permanent employees didn’t have PhDs, but were hired straight out of undergraduate studies. The director, Claude Bloch, was in his 40’s, but most of the others were in their 20’s. There was apparently quite a bit of imposter syndrome back then, with very young people struggling to catch up to the global state of the art.

They did manage to catch up, though, and even excel. In the 60’s and 70’s, researchers at the institute laid the groundwork for a lot of ideas that are popular in my field at the moment. Stora’s work established a new way to think about symmetry that became the textbook approach we all learn in school, while Froissart figured out a consistency condition for high-energy physics whose consequences we’re still teasing out. Pham was another major figure at the institute in that era. With my rudimentary French I started reading his work back in Copenhagen, looking for new insights. I didn’t go nearly as fast as my partner in the reading group though, whose mastery of French and mathematics has seen him use Pham’s work in surprising new ways.

Hearing about my institute’s past, I felt a bit of pride in the physicists of the era, not just for the science they accomplished but for the tools they built to do it. This was the era of preprints, first as physical papers, orange folders mailed to lists around the world, and later online as the arXiv. Physicists here were early adopters of some aspects, though late adopters of others (they were still mailing orange folders a ways into the 90’s). They also adopted computation, with giant punch-card reading, sheets-of-output-producing computers staffed at all hours of the night. A few physicists dove deep into the new machines, and guided the others as capabilities changed and evolved, while others were mostly just annoyed by the noise!

When the institute began, scientific papers were still typed on actual typewriters, with equations handwritten in or typeset in ingenious ways. A pool of secretaries handled much of the typing, many of whom were able to come to the conference! I wonder what they felt, seeing what the institute has become since.

I also got to learn a bit about the institute’s present, and by implication its future. I saw talks covering different areas, from multiple angles on mathematical physics to simulations of large numbers of particles, quantum computing, and machine learning. I even learned a bit from talks on my own area of high-energy physics, highlighting how much one can learn from talking to new people.

Physics’ Unique Nightmare

Halloween is coming up, so let’s talk about the most prominent monster of the physics canon, the nightmare scenario.

Not to be confused with the D&D Nightmare, which once was a convenient source of infinite consumable items for mid-level characters.

Right now, thousands of physicists search for more information about particle physics beyond our current Standard Model. They comb through data from the Large Hadron Collider for signs of new particles and unexpected behavior, they try to detect a wide range of possible dark matter particles, and they make very precise measurements to try to detect subtle deviations. And in the back of their minds, almost all of those physicists wonder if they’ll find anything at all.

It’s not that we think the Standard Model is right. We know it has problems, deep mathematical issues that make it give nonsense answers and an apparent big mismatch with what we observe about the motion of matter and light in the universe. (You’ve probably heard this mismatch called dark matter and dark energy.)

But none of those problems guarantee an answer soon. The Standard Model will eventually fail, but it may fail only for very difficult and expensive experiments, not a Large Hadron Collider but some sort of galactic-scale Large Earth Collider. It might be that none of the experiments or searches or theories those thousands of physicists are working on will tell them anything they didn’t already know. That’s the nightmare scenario.

I don’t know another field that has a nightmare scenario quite like this. In most fields, one experiment or another might fail, not just not giving the expected evidence but not teaching anything new. But most experiments teach us something new. We don’t have a theory, in almost any field, that has the potential to explain every observation up to the limits of our experiments, but which we still hope to disprove. Only the Standard Model is like that.

And while thousands of physicists are exposed to this nightmare scenario, the majority of physicists aren’t. Physics isn’t just the science of the reductionistic laws of the smallest constituents of matter. It’s also the study of physical systems, from the bubbling chaos of nuclear physics to the formation of planets and galaxies and black holes, to the properties of materials, to the movement of bacteria on a petri dish and bees in a hive. It’s also the development of new methods, from better control of individual atoms and quantum states to powerful new tricks for calculation. For some, it can be the discovery, not of reductionistic laws of the smallest scales, but of general laws of the largest scales, of how systems with many different origins can show echoes of the same behavior.

Over time, more and more of those thousands of physicists break away from the nightmare scenario, “waking up” to new questions of these kinds. For some, motivated by puzzles and skill and the beauty of physics, the change is satisfying, a chance to work on ideas that are moving forward, connected with experiment or grounded in evolving mathematics. But if your motivation is really tied to those smallest scales, to that final reductionistic “why”, then such a shift won’t be satisfying, and this is a nightmare you won’t wake up from.

Me, I’m not sure. I’m a tool-builder, and I used to tell myself that tool-builders are always needed. But I find I do care, in the end, what my tools are used for. And as we approach the nightmare scenario, I’m not at all sure I know how to wake up.

Neutrinos and Guarantees

The Higgs boson, or something like it, was pretty much guaranteed.

When physicists turned on the Large Hadron Collider, we didn’t know exactly what they would find. Instead of the Higgs boson, there might have been many strange new particles with different properties. But we knew they had to find something, because without the Higgs boson or a good substitute, the Standard Model is inconsistent. Try to calculate what would happen at the LHC using the Standard Model without the Higgs boson, and you get literal nonsense: chances of particles scattering that are greater than one, a mathematical impossibility. Without the Higgs boson, the Standard Model had to be wrong, and had to go wrong specifically when that machine was turned on. In effect, the LHC was guaranteed to give a Nobel prize.
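To sketch the argument behind that (a rough scaling estimate, not the careful version): without the Higgs, the probability amplitude for longitudinally-polarized W bosons scattering off each other grows with energy,

$$\mathcal{M}(W_L W_L \to W_L W_L) \sim \frac{E^2}{v^2}, \qquad v \approx 246\ \mathrm{GeV},$$

so the corresponding probabilities climb past one somewhere around a TeV, right in the energy range the LHC was built to explore. Something new had to show up at that scale to cancel the growth, and the Higgs boson is the Standard Model’s way of doing it.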

The LHC also searches for other things, like supersymmetric partner particles. It, and a whole zoo of other experiments, also search for dark matter, narrowing down the possibilities. But unlike the Higgs, none of these searches for dark matter or supersymmetric partners is guaranteed to find something new.

We’re pretty certain that something like dark matter exists, and that it is in some sense “matter”. Galaxies rotate, and masses bend light, in a way that seems only consistent with something new in the universe we didn’t predict. Observations of the whole universe, like the cosmic microwave background, let us estimate the properties of this something new, finding it to behave much more like matter than like radio waves or X-rays. So we call it dark matter.

But none of that guarantees that any of these experiments will find dark matter. The dark matter particles could have many different masses. They might interact faintly with ordinary matter, or with themselves, or almost not at all. They might not technically be particles at all. Each experiment makes some assumption, but no experiment yet can cover the most pessimistic possibility: that dark matter simply doesn’t interact in any usefully detectable way aside from gravity.

Neutrinos also hide something new. The Standard Model predicts that neutrinos shouldn’t have mass, since a mass would screw up the way they mess with the mirror symmetry of the universe. But they do, in fact, have mass. We know because they oscillate: they change from one type to another as they travel, and that means those types must be mixes of different masses.
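For the simplest case of just two types mixing with an angle θ, the standard two-flavor oscillation formula (in natural units, for a neutrino of energy E traveling a distance L) is

$$P(\nu_e \to \nu_\mu) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2\, L}{4E}\right),$$

so seeing any oscillation at all requires Δm² to be nonzero: the types must be mixes of states with different masses. Note, though, that the formula only pins down the difference of squared masses, not the masses themselves, which matters for what follows.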

It’s not hard to edit the Standard Model to give neutrinos masses. But there’s more than one way to do it. Every way adds new particles we haven’t yet seen. And none of them tell us what neutrino masses should be. So there are a number of experiments, another zoo, trying to find out. (Maybe this one’s an aquarium?)

Are those experiments guaranteed to work?

Not so much as the LHC was to find the Higgs, but more than the dark matter experiments.

We particle physicists have a kind of holy book, called the Particle Data Book. It summarizes everything we know about every particle, and explains why we know it. It has many pages with many sections, but if you turn to page 10 of this section, you’ll find a small table about neutrinos. The table gives a limit: the neutrino mass is less than 0.8 eV (a mysterious unit called an electron-volt, equivalent to roughly two times ten-to-the-minus-thirty-three grams). That limit comes from careful experiments, using E=mc^2 to find what the missing mass could be when an electron-neutrino shoots out in radioactive beta decay. The limit is an inequality, “less than” rather than “equal to”, because the experiments haven’t detected any missing mass yet. So far, they can only tell us what they haven’t seen.
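To put that unit in more familiar terms, a quick conversion using the standard value 1 eV/c² ≈ 1.78 × 10⁻³⁶ kg:

$$0.8\ \mathrm{eV}/c^2 \;\approx\; 0.8 \times 1.78\times 10^{-36}\ \mathrm{kg} \;\approx\; 1.4\times 10^{-36}\ \mathrm{kg} \;=\; 1.4\times 10^{-33}\ \mathrm{g}.$$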

As these experiments get more precise, you could imagine them getting close enough to see some missing mass, and find the mass of a neutrino. And this would be great, and a guaranteed discovery, except that the neutrino they’re measuring isn’t guaranteed to have a mass at all.

We know the neutrino types have different masses, because they oscillate as they travel between the types. But one of the types might have zero mass, and it could well be the electron-neutrino. If it does, then careful experiments on electron-neutrinos may never give us a mass.

Still, there’s a better guarantee than for dark matter. That’s because we can do other experiments, to test the other types of neutrino. These experiments are harder to do, and the bounds they get are less precise. But if the electron neutrino really is massless, then we could imagine getting better and better at these different experiments, until one of them measures something, detecting some missing mass.

(Cosmology helps too. Wiggles in the shape of the universe give us an estimate of the total: the sum of the masses of all the neutrino types. Currently, this gives another upper bound, but it could give a lower bound as well, which could be used along with weaker versions of the other experiments to find the answer.)

So neutrinos aren’t quite the guarantee the Higgs was, but they’re close. As the experiments get better, key questions will start to be answerable. And another piece of beyond-the-standard-model physics will be understood.

Stories Backwards and Forwards

You can always start with “once upon a time”…

I come up with tricks to make calculations in particle physics easier. That’s my one-sentence story, or my most common one. If I want to tell a longer story, I have more options.

Here’s one longer story:

I want to figure out what Nature is telling us. I want to take all the data we have access to that has anything to say about fundamental physics, every collider and gravitational wave telescope and ripple in the overall structure of the universe, and squeeze it as hard as I can until something comes out. I want to make sure we understand the implications of our current best theories as well as we can, to as high precision as we can, because I want to know whether they match what we see.

To do that, I am starting with a type of calculation I know how to do best. That’s both because I can make progress with it, and because it will be important for making these inferences, for testing our theories. I am following a hint in a theory that definitely does not describe the real world, one that is both simpler to work with and surprisingly complex, one that has a good track record, both for me and others, for advancing these calculations. And at the end of the day, I’ll make our ability to infer things from Nature that much better.

Here’s another:

Physicists, unknowing, proposed a kind of toy model, one often simpler to work with but not necessarily simpler to describe. Using this model, they pursued increasingly elaborate calculations, and time and time again, those calculations surprised them. The results were not random, not a disorderly mess of everything they could plausibly have gotten. Instead, they had structure, symmetries and patterns and mathematical properties that the physicists can’t seem to explain. If we can explain them, we will advance our knowledge of models and theories and ideas, geometry and combinatorics, learning more about the unexpected consequences of the rules we invent.

We can also help the physicists advance physics, of course. That’s a happy accident, but one that justifies the money and time, showing the rest of the world that understanding consequences of rules is still important and valuable.

These seem like very different stories, but they’re not so different. They change in order, physics then math or math then physics, backwards and forwards. By doing that, they change in emphasis, in where they’re putting glory and how they’re catching your attention. But at the end of the day, I’m investigating mathematical mysteries, and I’m advancing our ability to do precision physics.

(Maybe you think that my motivation must lie with one of these stories and not the other. One is “what I’m really doing”, the other is a lie made up for grant agencies. Increasingly, I don’t think people work like that. If we are at heart stories, we’re retroactive stories. Our motivation day to day doesn’t follow one neat story or another. We move forward, we maybe have deep values underneath, but our accounts of “why” can and will change depending on context. We’re human, and thus as messy as that word should entail.)

I can tell more than two stories if I want to. I won’t here. But this is largely what I’m working on at the moment. In applying for grants, I need to get the details right, to sprinkle the right references and the right scientific arguments, but the broad story is equally important. I keep shuffling that story, a pile of not-quite-literal index cards, finding different orders and seeing how they sound, imagining my audience and thinking about what stories would work for them.

Amplitudes 2023 Retrospective

I’m back from CERN this week, with a bit more time to write, so I thought I’d share some thoughts about last week’s Amplitudes conference.

One thing I got wrong in last week’s post: I’ve now been told only 213 people actually showed up in person, as opposed to the 250-ish estimate I had last week. That may seem like fewer than Amplitudes in Prague had, but it seems likely that a few fewer showed up there than appeared on the website, too. Overall, the field is at least holding steady from year to year, and has definitely grown since before the pandemic (2019’s 175 was already a very big attendance).

It was cool having a conference in CERN proper, surrounded by the history of European particle physics. The lecture hall had an abstract particle collision carved into the wood, and the visitor center would in principle have had Standard Model coffee mugs were they not sold out until next May. (There was still enough other particle physics swag, Swiss chocolate, and Swiss chocolate that was also particle physics swag.) I’d planned to stay on-site at the CERN hostel, but I ended up appreciating not doing that: the folks who did seemed to end up a bit cooped up by the end of the conference, even with the conference dinner as a chance to get out.

Past Amplitudes conferences have had associated public lectures. This time we had a not-supposed-to-be-public lecture, a discussion between Nima Arkani-Hamed and Beate Heinemann about the future of particle physics. Nima, prominent as an amplitudeologist, also has a long track record of reasoning about what might lie beyond the Standard Model. Beate Heinemann is an experimentalist, one who has risen through the ranks of a variety of different particle physics experiments, ending up well-positioned to take a broad view of all of them.

It would have been fun if the discussion erupted into an argument, but despite some attempts at provocative questions from the audience that was not going to happen, as Beate and Nima have been friends for a long time. Instead, they exchanged perspectives: on what’s coming up experimentally, and what theories could explain it. Both argued that it was best to have many different directions, a variety of experiments covering a variety of approaches. (There wasn’t any evangelism for particular experiments, besides a joking sotto voce mention of a muon collider.) Nima in particular advocated that, whether theorist or experimentalist, you have to have some belief that what you’re doing could lead to a huge breakthrough. If you think of yourself as just a “foot soldier”, covering one set of checks among many, then you’ll lose motivation. I think Nima would agree that this optimism is irrational, but necessary, sort of like how one hears (maybe inaccurately) that most new businesses fail, but someone still needs to start businesses.

Michelangelo Mangano’s talk on Thursday covered similar ground, but with different emphasis. He agrees that there are still things out there worth discovering: that our current model of the Higgs, for instance, is in some ways just a guess, a simplest-possible answer that doesn’t explain as much as we’d like. But he also emphasized that Standard Model physics can be “new physics” too. Just because we know the model doesn’t mean we can calculate its consequences, and there is a wealth of results from the LHC that improve our models of protons, nuclei, and the types of physical situations they partake in, without changing the Standard Model.

We saw an impressive example of this in Gregory Korchemsky’s talk on Wednesday. He presented an experimental mystery, an odd behavior in the correlation of energies of jets of particles at the LHC. These jets can include a very large number of particles, enough to make it very hard to understand them from first principles. Instead, Korchemsky tried out our field’s favorite toy model, where such calculations are easier. By modeling the situation in the limit of a very large number of particles, he was able to reproduce the behavior of the experiment. The result was a reminder of what particle physics was like before the Standard Model, and what it might become again: partial models to explain odd observations, a quest to use the tools of physics to understand things we can’t just a priori compute.

On the other hand, the amplitudes field does do a priori computations pretty well too. Fabrizio Caola’s talk opened the conference by reminding us just how much our precise calculations can do. He pointed out that the LHC has only gathered 5% of its planned data, and already it is able to rule out certain types of new physics to fairly high energies (by ruling out indirect effects that would show up in high-precision calculations). One of those precise calculations featured in the next talk, by Giulio Gambuti. (A FORM user, his diagrams were the basis for the header image of my Quanta article last winter.) Tiziano Peraro followed up with a technique meant to speed up these kinds of calculations, a trick to simplify one of the more computationally intensive steps in intersection theory.

The rest of Monday was more mathematical, with talks by Zeno Capatti, Jaroslav Trnka, Chia-Kai Kuo, Anastasia Volovich, Francis Brown, Michael Borinsky, and Anna-Laura Sattelberger. Borinsky’s talk felt the most practical, a refinement of his numerical methods complete with some actual claims about computational efficiency. Francis Brown discussed an impressively powerful result, a set of formulas that manages to unite a variety of invariants of Feynman diagrams under a shared explanation.

Tuesday began with what I might call “visitors”: people from adjacent fields with an interest in amplitudes. Alday described how the duality between string theory in AdS space and super Yang-Mills on the boundary can be used to get quite concrete information about string theory, calculating how the theory’s amplitudes are corrected by the curvature of AdS space using a kind of “bootstrap” method that felt nicely familiar. Tim Cohen talked about a kind of geometric picture of theories that extend the Standard Model, including an interesting discussion of whether it’s really “geometric”. Marko Simonovic explained how the integration techniques we develop in scattering amplitudes can also be relevant in cosmology, especially for the next generation of “sky mappers” like the Euclid telescope. This talk was especially interesting to me since this sort of cosmology has a significant presence at CEA Paris-Saclay. Along those lines an interesting paper, “Cosmology meets cohomology”, showed up during the conference. I haven’t had a chance to read it yet!

Just before lunch, we had David Broadhurst give one of his inimitable talks, complete with number theory, extremely precise numerics, and literary and historical references (apparently, Källén died flying his own plane). He also remedied a gap in our whimsically biological diagram naming conventions, renaming the pedestrian “double-box” as a (in this context, Orwellian) lobster. Karol Kampf described unusual structures in a particular Effective Field Theory, while Henriette Elvang’s talk addressed what would become a meaningful subtheme of the conference, where methods from the mathematical field of optimization help amplitudes researchers constrain the space of possible theories. Giulia Isabella covered another topic on this theme later in the day, though one of her group’s selling points is managing to avoid quite so heavy-duty computations.

The other three talks on Tuesday dealt with amplitudes techniques for gravitational wave calculations, as did the first talk on Wednesday. Several of the calculations only dealt with scattering black holes, instead of colliding ones. While some of the results can be used indirectly to understand the colliding case too, a method to directly calculate behavior of colliding black holes came up again and again as an important missing piece.

The talks on Wednesday had to start late, owing to a rather bizarre power outage (the lights in the room worked fine, but not the projector). Since Wednesday was the free afternoon (home of quickly sold-out CERN tours), this meant there were only three talks: Veneziano’s talk on gravitational scattering, Korchemsky’s talk, and Nima’s talk. Nima famously never finishes on time, and this time attempted to control his timing via the surprising method of presenting, rather than one topic, five “abstracts” on recent work that he had not yet published. Even more surprisingly, this almost worked, and he didn’t run too ridiculously over time, while still managing to hint at a variety of ways that the combinatorial lessons behind the amplituhedron are gradually yielding useful perspectives on more general realistic theories.

Thursday, Andrea Puhm began with a survey of celestial amplitudes, a topic that tries to build the same sort of powerful duality used in AdS/CFT but for flat space instead. They’re gradually tackling the weird, sort-of-theory they find on the boundary of flat space. The two next talks, by Lorenz Eberhardt and Hofie Hannesdottir, shared a collaborator in common, namely Sebastian Mizera. They also shared a common theme, taking a problem most people would have assumed was solved and showing that approaching it carefully reveals extensive structure and new insights.

Cristian Vergu, in turn, delved deep into the literature to build up a novel and unusual integration method. We’ve chatted quite a bit about it at the Niels Bohr Institute, so it was nice to see it get some attention on the big stage. We then had an afternoon of trips beyond polylogarithms, with talks by Anne Spiering, Christoph Nega, and Martijn Hidding, each pushing the boundaries of what we can do with our hardest-to-understand integrals. Einan Gardi and Ruth Britto finished the day, with a deeper understanding of the behavior of high-energy particles and a new more mathematically compatible way of thinking about “cut” diagrams, respectively.

On Friday, João Penedones gave us an update on a technique with some links to the effective field theory-optimization ideas that came up earlier, one that “bootstraps” whole non-perturbative amplitudes. Shota Komatsu talked about an intriguing variant of the “planar” limit, one involving large numbers of particles and a slick re-writing of infinite sums of Feynman diagrams. Grant Remmen and Cliff Cheung gave a two-parter on a bewildering variety of things that are both surprisingly like, and surprisingly unlike, string theory: important progress towards answering the question “is string theory unique?”

Friday afternoon brought the last three talks of the conference. James Drummond had more progress trying to understand the symbol letters of supersymmetric Yang-Mills, while Callum Jones showed how Feynman diagrams can apply to yet another unfamiliar field, the study of vortices and their dynamics. Lance Dixon closed the conference without any Greta Thunberg references, but with a result that explains last year’s mystery of antipodal duality. The explanation involves an even more mysterious property called antipodal self-duality, so we’re not out of work yet!

At Amplitudes 2023 at CERN

I’m at the big yearly conference of my sub-field this week, called Amplitudes. This year, surprisingly for the first time, it’s at the very appropriate location of CERN.

Somewhat overshadowed by the very picturesque Alps

Amplitudes keeps on growing. In 2019, we had 175 participants. We were on Zoom in 2020 and 2021, with many more participants, but that probably shouldn’t count. In Prague last year we had 222. This year, I’ve been told we have even more, something like 250 participants (the list online is bigger, but includes people joining on Zoom). We’ve grown due to new students, but also new collaborations: people from adjacent fields who find the work interesting enough to join along. This year we have mathematicians talking about D-modules, bootstrappers finding new ways to get at amplitudes in string theory, beyond-the-standard-model theorists talking about effective field theories, and cosmologists talking about the large-scale structure of the universe.

The talks have been great, from clear discussions of earlier results to fresh-off-the-presses developments, plenty of work in progress, and even one talk where the speaker’s opinion changed during the coffee break. As we’re at CERN, there’s also a through-line about the future of particle physics, with a chat between Nima Arkani-Hamed and the experimentalist Beate Heinemann on Tuesday and a talk by Michelangelo Mangano about the meaning of “new physics” on Thursday.

I haven’t had a ton of time to write, I keep getting distracted by good discussions! As such, I’ll do my usual thing, and say a bit more about specific talks in next week’s post.

It’s Only a Model

Last week, I said that the current best estimate for the age of the universe, 13.8 billion years old, is based on a mathematical model. In order to get that number, astronomers had to assume the universe evolved in a particular way, according to a model where the universe is composed of ordinary matter, dark matter, and dark energy. In other words, the age of the universe is a model-dependent statement.

Reading that, you might ask whether we can do better. What about a model-independent measurement of the age of the universe?

As intuitive as it might seem, we can’t actually do that. In fact, if we’re really strict about it, we can’t get a model-independent measurement of anything at all. Everything is based on a model.

Imagine stepping on your bathroom scale, getting a mass in kilograms. The number it gives you seems as objective as anything. But to get that number, you have to trust that a number of models are true. You have to model gravity, to assume that the scale’s measurement of your weight gives you the right mass based on the Earth’s surface gravity being approximately constant. You have to model the circuits and sensors in the scale, and be confident that you understand how they’re supposed to work. You have to model people: to assume that the company that made the scale tested it accurately, and that the people who sold it to you didn’t lie about where it came from. And finally, you have to model error: you know that the scale can’t possibly give you your exact weight, so you need a rough idea of just how far off it can reasonably be.

Everything we know is like this. Every measurement in science builds on past science, on our understanding of our measuring equipment and our trust in others. Everything in our daily lives comes through a network of assumptions about the world around us. Everything we perceive is filtered through instincts, our understanding of our own senses and knowledge of when they do and don’t trick us.

Ok, but when I say that the age of the universe is model-dependent, I don’t really mean it like that, right?

Everything we know is model-dependent, but only some model-dependence is worth worrying about. Your knowledge of your bathroom scale comes from centuries-old physics of gravity, widely-applied principles of electronics, and a trust in the function of basic products that serves you well in every other aspect of your life. The models that knowledge depends on aren’t really in question, especially not when you just want to measure your weight.

Some measurements we make in physics are like this too. When the experimental collaborations at the LHC measured the Higgs mass, they were doing something far from routine. But the models they based that measurement on, models of particle physics and particle detector electronics and their own computer code, are still so well-tested that it mostly doesn’t make sense to think of this as a model-dependent measurement. If we’re questioning the Higgs mass, it’s only because we’re questioning something much bigger.

The age of the universe, though, is trickier. Our most precise measurements are based on a specific model: we estimate what the universe is made of and how fast it’s expanding, plug it into our model of how the universe changes over time, and get an estimate for the age. You might suggest that we should just look out into the universe and find the oldest star, but that’s model-dependent too. Stars don’t have rings like trees. Instead, to estimate the age of a star we have to have some model for what kind of light it emits, and for how that light has changed over the history of the universe before it reached us.
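As a sketch of what that model-dependence looks like in practice, here’s a toy version of the age calculation in Python. The inputs (the expansion rate today and the fractions of matter and dark energy, with a flat universe assumed and radiation neglected) are approximate values chosen for illustration; plug them into the standard expansion history and an age near 13.8 billion years comes out:

```python
from scipy.integrate import quad

# Approximate present-day inputs, chosen for illustration
H0 = 67.7            # expansion rate today, km/s/Mpc
omega_m = 0.31       # fraction of matter (ordinary plus dark)
omega_lambda = 0.69  # fraction of dark energy

# Hubble time 1/H0, converted to billions of years
MPC_IN_KM = 3.0857e19
SECONDS_PER_GYR = 3.156e16
hubble_time = MPC_IN_KM / H0 / SECONDS_PER_GYR

def age_integrand(z):
    """Time spent per unit redshift, in units of the Hubble time,
    for a flat universe of matter plus a cosmological constant."""
    E = (omega_m * (1 + z)**3 + omega_lambda)**0.5
    return 1.0 / ((1 + z) * E)

# z = 1000 is effectively "the beginning" for this integrand
integral, _ = quad(age_integrand, 0, 1000)
print(f"Age of the universe: {hubble_time * integral:.1f} billion years")
```

Swap in a different recipe for the universe’s contents, or a different expansion history, and this number moves, which is exactly what it means for the age to be model-dependent.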

These models are not quite as well-established as the models behind particle physics, let alone those behind your bathroom scale. Our models of stars are pretty good, applied to many types of stars in many different galaxies, but they do describe big, complicated systems with many kinds of extreme and difficult-to-estimate physics. Star models get revised all the time, usually in minor ways but occasionally in more dramatic ones. Meanwhile, our model of the whole universe is powerful, but by its very nature much less-tested. We can test it on observations of the whole universe today, or on observations of the whole universe in the past (like the cosmic microwave background). And it works well for these, better than any other model. But it’s not inconceivable, not unrealistic, and above all not out of the question, that another model could take its place. And if it did, many of the model-dependent measurements we’ve based on it would have to change.

So that’s why, while everything we know is model-dependent, some things are model-dependent in a more important way. Some things, even if we feel they have solid backing, may well turn out to be wrong, in a way that we have reason to take seriously. The age of the universe is pretty well-established as these things go, but it’s still one of those things, where there is enough doubt in our model that we can’t just take the measurement at face value.