Monthly Archives: August 2024

The Machine Learning for Physics Recipe

Last week, I went to a conference on machine learning for physics. Machine learning covers a huge variety of methods and ideas, several of which were on full display. But again and again, I noticed a pattern. The people who seemed to be making the best use of machine learning, the ones who were the most confident in their conclusions and getting the most impressive results, the ones who felt like they had a whole assembly line instead of just a prototype, all of them were doing essentially the same thing.

This post is about that thing. If you want to do machine learning in physics, these are the situations where you’re most likely to see a benefit. You can do other things, and they may work too. But this recipe seems to work over and over again.

First, you need simulations, and you need an experiment.

Your experiment gives you data, and that data isn’t easy to interpret. Maybe you’ve embedded a bunch of cameras in the Antarctic ice, and your data tells you when they trigger and how bright the light is. Maybe you’ve surrounded a particle collision with layers of silicon, and your data tells you how much electric charge the different layers absorb. Maybe you’ve got an array of telescopes focused on a black hole far, far away, and your data are the pixels gathered from each telescope.

You want to infer, from your data, what happened physically. Your cameras in the ice saw signs of a neutrino: you want to know how much energy it had and where it came from. Your silicon is absorbing particles: what kind are they, and what processes did they come from? The black hole might have the rings predicted by general relativity, but it might instead have weirder rings from a variant theory.

In each case, you can’t just calculate the answer you need. The neutrino streams past, interacting with the ice and your cameras in ways you can’t cleanly predict. People can write down clean approximations for particles in the highest-energy part of a collision, but once those particles start cooling down, the process becomes so messy that no straightforward formula describes it. Your array of telescopes fuzzes and pixellates what it sees, and its pieces have to be assembled in a complicated way, so there is no single guaranteed answer for what it saw.

In each case, though, you can use simulations. If you specify in advance the energy and path of the neutrino, you can use a computer to predict how much light your cameras should see. If you know what particles you started with, you can run sophisticated particle physics code to see what “showers” of particles you eventually find. If you have the original black hole image, you can fuzz and pixellate it and take it apart to match what your array of telescopes would do.

The problem is, for the experiments, you don’t know the answer in advance, so you can’t anticipate which simulations you’ll need. And simulations, while cheaper than experiments, aren’t cheap. You can’t run a simulation for every possible input and then check them all against the experiments. You need to fill in the gaps: run some simulations, then use some theory, some statistical method, or some human-tweaked guess to figure out how to interpret your experiments.

Or, you can use machine learning. You train a machine learning model, one well-suited to the task (anything from the old standby of boosted decision trees, to an old fad of normalizing flows, to the latest hotness of graph neural networks). You run a bunch of simulations, as many as you can reasonably afford, and you use that data for training, building a program that matches the physical inputs you want to infer with their simulated results. This program will be less reliable than your simulations, but it will run much faster. If it’s reliable enough, you can use it instead of the old human-made guesses and tweaks. You now have an efficient, reliable way to go from your raw experimental data to the physical questions you actually care about.
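To make this concrete, here is a minimal sketch in Python of what such a pipeline can look like. The “simulator”, the parameter being inferred, and the choice of a boosted-decision-tree regressor from scikit-learn are all stand-ins invented for illustration, not the setup of any real experiment:

import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def toy_simulator(energy):
    # Stand-in for an expensive physics simulation: maps a made-up
    # "neutrino energy" to five noisy detector readings.
    response = np.array([1.0, 0.5, 2.0, 0.1, 1.5]) * np.log1p(energy)
    return response + rng.normal(0.0, 0.1, size=5)

# The expensive step: run the simulator for many assumed energies.
energies = rng.uniform(1.0, 100.0, size=5000)
readings = np.array([toy_simulator(e) for e in energies])

# Train a fast surrogate that inverts the simulation: readings -> energy.
model = GradientBoostingRegressor(n_estimators=200)
model.fit(readings, energies)

# Applying the surrogate is cheap compared to re-running the simulator,
# so it can be run on every event the real detector records.
mystery_event = toy_simulator(42.0).reshape(1, -1)
print("inferred energy:", model.predict(mystery_event)[0])

In a real analysis the simulator would be a full detector simulation and the model might be a neural network or one of the fancier architectures above, but the shape of the pipeline is the same: simulate, train on the simulations, then apply the trained model to the experiment.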

Crucially, each of the elements in this recipe is essential.

You need a simulation. If you just have an experiment with no simulation, then you don’t have a way to interpret the results, and training a machine to reproduce the experiment won’t tell you anything new.

You need an experiment. If you just have simulations, training a machine to reproduce them also doesn’t tell you anything new. You need some reason to want to predict the results of the simulations, beyond just seeing what happens in between them, which the machine can’t tell you anyway.

And you need to not have anything better than the simulation. If you have a theory where you can write out formulas for what happens, then you don’t need machine learning: you can interpret the experiments more easily without it. This also applies if you’ve carefully designed your experiment to measure something easy to interpret, like the ratio of rates of two processes that should be exactly the same.

These aren’t the only things you need. You also need to do the whole thing carefully enough that you understand your uncertainties well: not just what the machine predicts, but how often it gets it wrong, and whether it’s likely to do something strange when you use it on the actual experiment. But if you can do that, you have a reliable recipe, one many people have followed successfully before. You have a good chance of making things work.
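One simple way to put numbers on that, again only as an illustrative sketch: hold some simulations out of training, look at the distribution of the model’s errors on them, and check whether the real data even resembles the simulations the model was trained on. The arrays below are made up, and the range check is a crude stand-in for the more sophisticated out-of-distribution tests real analyses use:

import numpy as np

rng = np.random.default_rng(1)

# Pretend these came from simulations held out of training.
true_energy = rng.uniform(1.0, 100.0, size=1000)
predicted_energy = true_energy + rng.normal(0.0, 2.0, size=1000)

errors = np.abs(predicted_energy - true_energy)
print("68% of predictions within:", np.quantile(errors, 0.68))
print("95% of predictions within:", np.quantile(errors, 0.95))

# A crude sanity check on real data: flag detector readings outside the
# range the model saw in training, where its behavior is untested.
train_min, train_max = 0.0, 10.0  # range of each simulated reading
experiment_readings = rng.normal(5.0, 3.0, size=(200, 5))
out_of_range = ((experiment_readings < train_min) |
                (experiment_readings > train_max)).any(axis=1)
print("fraction of events outside the trained range:", out_of_range.mean())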

This isn’t the only way physicists can use machine learning. There are people looking into something more akin to what’s called unsupervised learning, where you look for strange events in your data as clues for what to investigate further. And there are people like me, trying to use machine learning on the mathematical side, to guess new formulas and new heuristics. There is likely promise in many of these approaches. But for now, they aren’t a recipe.

HAMLET-Physics 2024

Back in January, I announced I was leaving France and leaving academia. Since then, it hasn’t made much sense for me to go to conferences, even the big conference of my sub-field or the conference I organized.

I did go to a conference this week, though. I had two excuses:

  1. The conference was here in Copenhagen, so no travel required.
  2. The conference was about machine learning.

HAMLET-Physics, or How to Apply Machine Learning to Experimental and Theoretical Physics, had the additional advantage of having an amusing acronym. Thanks to generous support by Carlsberg and the Danish Data Science Academy, they could back up their choice by taking everyone on a tour of Kronborg (better known in the English-speaking world as Elsinore).

This conference’s purpose was to bring together physicists who use machine learning, machine learning-ists who might have something useful to say to those physicists, and other physicists who don’t use machine learning yet but have a sneaking suspicion they might have to at some point. As a result, the conference was super-interdisciplinary, with talks by people addressing very different problems with very different methods.

Interdisciplinary conferences are tricky. It’s easy for the different groups of people to just talk past each other: everyone shows up, gives the same talk they always do, socializes with the same friends they always meet, then leaves.

There were a few talks that fit that mold, so technical that only a few people understood them. But most were better. The majority of the speakers did really well at presenting their work in a way that would be understandable and even exciting to people outside their field, while still having enough detail that we all learned something. I was particularly impressed by Thea Aarestad’s keynote talk on Tuesday, a really engaging view of how machine learning can be used under the extremely tight time constraints LHC experiments face when deciding whether to record incoming data.

For the social aspect, the organizers had a cute/gimmicky/machine-learning-themed solution. Based on short descriptions and our public research profiles, they clustered attendees, plotting the connections between them. They then used ChatGPT to write conversation prompts between any two people on the basis of their shared interests. In practice, this turned out to be amusing but totally unnecessary. We were drawn to speak to each other not by conversation prompts, but by a drive to learn from each other. “Why do you do it that way?” was a powerful conversation-starter, as was “what’s the best way to do this?” Despite the different fields, the shared methodologies gave us strong reasons to talk, and meant that people were very rarely motivated to pick one of ChatGPT’s “suggestions”.

Overall, I got a better feeling for how machine learning is useful in physics (and am planning a post on that in future). I also got some fresh ideas for what to do myself, and a bit of a picture of what the future holds in store.

Why Quantum Gravity Is Controversial

Merging quantum mechanics and gravity is a famously hard physics problem. Explaining why merging quantum mechanics and gravity is hard is, in turn, a very hard science communication problem. The more popular descriptions tend to lead to misunderstandings, and I’ve posted many times over the years to chip away at those misunderstandings.

Merging quantum mechanics and gravity is hard…but despite that, there are proposed solutions. String Theory is supposed to be a theory of quantum gravity. Loop Quantum Gravity is supposed to be a theory of quantum gravity. Asymptotic Safety is supposed to be a theory of quantum gravity.

One of the great virtues of science and math is that we are, eventually, supposed to agree. Philosophers and theologians might argue to the end of time, but in math we can write down a proof, and in science we can do an experiment. If we don’t yet have the proof or the experiment, then we should reserve judgement. Either way, there’s no reason to get into an unproductive argument.

Despite that, string theorists and loop quantum gravity theorists and asymptotic safety theorists, famously, like to argue! There have been bitter, vicious, public arguments about the merits of these different theories, and decades of research doesn’t seem to have resolved them. To an outside observer, this makes quantum gravity seem much more like philosophy or theology than like science or math.

Why is there still controversy in quantum gravity? We can’t do quantum gravity experiments, sure, but if that were the problem physicists could just write down the possibilities and leave it at that. Why argue?

Some of the arguments are for silly aesthetic reasons, or motivated by academic politics. Some are arguments about which approaches are likely to succeed in future, which as always is something we can’t actually reliably judge. But the more justified arguments, the strongest and most durable ones, are about a technical challenge. They’re about something called non-perturbative physics.

Most of the time, when physicists use a theory, they’re working with an approximation. Instead of the full theory, they’re making an assumption that makes the theory easier to use. For example, if you assume that the velocity of an object is small, you can use Newtonian physics instead of special relativity. Often, physicists can systematically relax these assumptions, including more and more of the behavior of the full theory and getting a better and better approximation to the truth. This process is called perturbation theory.
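A standard example from outside quantum gravity: expanding the energy of a relativistic particle in the small quantity v^2/c^2 gives

E = \frac{mc^2}{\sqrt{1 - v^2/c^2}} = mc^2 + \frac{1}{2} m v^2 + \frac{3}{8} \frac{m v^4}{c^2} + \cdots

The first term is the rest energy, the second is the familiar Newtonian kinetic energy, and each further term is a smaller correction you can include when you need more accuracy.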

Other times, this doesn’t work well. The full theory has some trait that isn’t captured by the approximations, something that hides away from these systematic tools. The theory has some important aspect that is non-perturbative.

Every proposed quantum gravity theory uses approximations like this. The theory’s proponents try to avoid these approximations when they can, but often they have to approximate and hope they don’t miss too much. The opponents, in turn, argue that the theory’s proponents are missing something important, some non-perturbative fact that would doom the theory altogether.

Asymptotic Safety is built on top of an approximation, one different from what other quantum gravity theorists typically use. To its proponents, work using their approximation suggests that gravity works without any special modifications, that the theory of quantum gravity is easier to find than it seems. Its opponents aren’t convinced, and think that the approximation is missing something important which shows that gravity needs to be modified.

In Loop Quantum Gravity, critics think the theory’s approximations miss space-time itself. Proponents of Loop Quantum Gravity have been unable to prove that their theory, if you take all the non-perturbative corrections into account, doesn’t just roll up all of space and time into a tiny spiky ball. They expect that their theory should allow for a smooth space-time like the one we experience, but the critics aren’t convinced, and without being able to calculate the non-perturbative physics, neither side can convince the other.

String Theory was founded and originally motivated by perturbative approximations. Later, String Theorists figured out how to calculate some things non-perturbatively, often using other simplifications like supersymmetry. But core questions, like whether or not the theory allows a positive cosmological constant, seem to depend on non-perturbative calculations that the theory gives no instructions for how to do. Some critics don’t think there is a consistent non-perturbative theory at all, that the approximations String Theorists use don’t actually approximate to anything. Even within String Theory, there are worries that the theory might try to resist approximation in odd ways, becoming more complicated whenever a parameter is small enough that you could use it to approximate something.

All of this would be less of a problem with real-world evidence. Many fields of science are happy to use approximations that aren’t completely rigorous, as long as those approximations have a good track record in the real world. In general though, we don’t expect evidence relevant to quantum gravity any time soon. Maybe we’ll get lucky, and studies of cosmology will reveal something, or an experiment on Earth will have a particularly strange result. But nature has no obligation to help us out.

Without evidence, though, we can still make mathematical progress. You could imagine someone proving that the various perturbative approaches to String Theory become inconsistent when stitched together into a full non-perturbative theory. Alternatively, you could imagine someone proving that a theory like String Theory is unique, that no other theory can do some key thing that it does. Either of these seems unlikely to come any time soon, and most researchers in these fields aren’t pursuing questions like that. But the fact the debate could be resolved means that it isn’t just about philosophy or theology. There’s a real scientific, mathematical controversy, one rooted in our inability to understand these theories beyond the perturbative methods their proponents use. And while I don’t expect it to be resolved any time soon, one can always hold out hope for a surprise.

Toy Models

In academia, scientists don’t always work with what they actually care about. A lot of the time, they use what academics call toy models. A toy model can be a theory with simpler mathematics than the theories that describe the real world, but it can also be something that is itself real, just simpler or easier to work with, like nematodes, fruit flies, or college students.

Some people in industry seem to think this is all academics do. I’ve seen a few job ads that emphasize experience dealing with “real-world data”, and a few people skeptical that someone used to academia would be able to deal with the messy challenges of the business world.

There’s a grain of truth to this, but I don’t think industry has a monopoly on mess. To see why, let’s think about how academics write computer code.

There are a lot of things that one is, in principle, supposed to do to code well, and most academics do none of them. Good code has test suites, so that if you change something you can check whether it still works by testing it against all the things that could go wrong. Good code is modular, with functions doing specific things and re-used wherever appropriate. Good code follows shared conventions, so that others can pick up your code and understand how you did it.
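As a toy illustration (the function and the expected answers here are my own inventions, not anything from a real project), a test suite in Python might be a handful of small checks that a tool like pytest runs automatically:

# test_energy.py -- run with `pytest test_energy.py`
import math

def kinetic_energy(mass, velocity):
    # The code under test; in a real project this would be imported
    # from its own module.
    return 0.5 * mass * velocity ** 2

def test_zero_velocity_gives_zero_energy():
    assert kinetic_energy(5.0, 0.0) == 0.0

def test_known_value():
    # Only writable because the expected answer is already known.
    assert math.isclose(kinetic_energy(2.0, 3.0), 9.0)

Even this toy suite only works because the right answers are known in advance.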

Some academics do these things, for example those who build numerical simulations on supercomputers. But for most academics, coding best practices range from impractical to outright counterproductive. Testing is perhaps the clearest example. To design a test suite, you have to have some idea of what your code will run into: what kind of input you expect and what the output is supposed to be. Many academic projects, though, are the first of their kind. Academics code up something to do a calculation nobody has done before, not knowing the result, or they write code to analyze a dataset nobody has worked with before. By the time they understand the problem well enough to write a test suite, they’ve already solved it, and they’re on to the next project, which may need something totally different.

From the perspective of these academics, if you have a problem well-defined enough that you can build a test suite, well enough that you can have stable conventions and reusable functions…then you have a toy model, not a real problem from the real world.

…and of course, that’s not quite fair either, right?

The truth is, academics and businesspeople want to work with toy models. Toy models are well-behaved, and easy, and you can do a lot with them. The real world isn’t a toy model…but it can be, if you make it one.

This means planning your experiments, whether in business or in science. It means making sure the data you gather is labeled and organized before you begin. It means coming up with processes, and procedures, and making as much of the work as possible a standardized, replicable thing. That’s desirable regardless, whether you’re making a consistent product instead of artisanal one-offs or a well-documented scientific study that another team can replicate.

Academia and industry both must handle mess. They handle different kinds of mess in different circumstances, and manage it in different ways, and this can be a real challenge for someone trying to go from one world to another. But neither world is intrinsically messier or cleaner. Nobody has a monopoly on toy models.

The “Who” of Fixing Academic Publishing

I was on the debate team in high school. There’s a type of debate, called Policy, where one team proposes a government policy and the other team argues the policy is bad. The rules of Policy debate don’t say who the debaters are pretending to be: they could be congresspeople, cabinet members, or staff at a think tank. This creates ambiguity, and nerds are great at exploiting ambiguity. A popular strategy was to argue that the opponents had a perfectly good policy, but were wrong about who should implement it. This had reasonable forms (no, congress does not have the power to do X) but could also get very silly (the crux of one debate was whether the supreme court or the undersecretary of the TSA was the best authority to usher in a Malthusian dictatorship). When debating policy, “who” could be much more important than “what”.

Occasionally, when I see people argue that something needs to be done, I ask myself this question. Who, precisely, should do it?

Recently, I saw a tweet complaining about scientific publishing. Physicists put their work out for free on arXiv.org, then submit that work to journals, which charge huge fees either to the scientists themselves or to libraries that want access to the work. It’s a problem academics complain about frequently, but usually we act like it’s something we should fix ourselves, a kind of grassroots movement to change our publication and hiring culture.

This tweet, surprisingly, didn’t do that. Instead, it seemed to have a different “who” in mind. The tweet argued that the stranglehold of publishers like Elsevier on academic publishing is a waste of taxpayer money. The implication, maybe intended maybe not, is that the problem should be fixed by the taxpayers: that is, by the government.

Which in turn got me thinking, what could that look like?

I could imagine a few different options, from the kinds of things normal governments do to radical things that would probably never happen.

First, the most plausible strategy: collective negotiation. We particle physicists don’t pay from our own grants to publish papers, and we don’t pay to read them. Instead, we have a collective agreement, called SCOAP3, where the big institutions pay together each year to guarantee open access. The University of California system tried to negotiate a similar agreement a few years back, not just for physicists but for all fields. You could imagine governments leaning on this, with the university systems of whole countries negotiating a fixed payment. The journals would still get paid, but less.

Second, less likely but not impossible: governments could use the same strategies against the big publishers that they use against other big companies. This could be antitrust action (if you have to publish in Nature to get hired, are they really competing with anybody?), or even some kind of price controls. The impression I get is that when governments do try to change scientific publishing they usually do it via restrictions on the scientists (such as requiring them to publish open-access), while this would involve restrictions on the publishers.

Third, governments could fund alternative institutions to journals. They could put more money into websites like arXiv.org and its equivalents in other fields, or fund an alternative review process to vet papers the way journal referees do. There are existing institutions they could build on, or they could create their own.

Fourth, you could imagine addressing the problem on the job market side, with universities told not to weigh the prestige of journals when considering candidates. This seems unlikely to happen, and that’s probably a good thing, because it’s very micromanagey. Still, I do think that both grants and jobs could do with less time and effort spent attempting to vet candidates and more explicit randomness.

Fifth, you could imagine governments essentially opting out of the game altogether. They could disallow spending any money from publicly funded grants or universities on open-access fees or subscription fees, pricing most scientists out of the journal system. Journals would either have to radically lower their prices so that scientists could pay for them out of pocket, or more likely go extinct. This does have the problem that if only some countries did it, their scientists would have a harder time in other countries’ job markets. And of course, many critics of journals just want the journals to make less obscene profits, and not actually go extinct.

Most academics I know agree that something is deeply wrong with how academic journals work. While the situation might be solved at the grassroots level, it’s worth imagining what governments might do. Realistically, I don’t expect them to do all that much. But stranger things have gotten political momentum before.