Congratulations to John Hopfield and Geoffrey Hinton!

The 2024 Physics Nobel Prize was announced this week, awarded to John Hopfield and Geoffrey Hinton for using physics to propose foundational ideas in the artificial neural networks used for machine learning.

If the picture above looks off-center, it’s because this is the first time since 2015 that the Physics Nobel has been given to two, rather than three, people. Since several past prizes bundled together disparate ideas in order to make a full group of three, it’s noteworthy that this year the committee decided that each of these people deserved 1/2 the prize amount, without trying to find one more person to water it down further.

Hopfield was trained as a physicist, working in the broad area known as “condensed matter physics”. Condensed matter physicists use physics to describe materials, from semiconductors to crystals to glass. Over the years, Hopfield started using this training less for the traditional subject matter of the field and more to study the properties of living systems. He moved from a position in the physics department of Princeton to chemistry and biology at Caltech. While at Caltech he started studying neuroscience and proposed what are now known as Hopfield networks as a model for how neurons store memory. Hopfield networks have very similar properties to a more traditional condensed matter system called a “spin glass”, and from what he knew about those systems Hopfield could make predictions for how his networks would behave. Those networks would go on to be a major inspiration for the artificial neural networks used for machine learning today.
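For the curious, the core idea fits in a few lines of code. Here’s a minimal sketch in Python of the textbook construction (my own toy illustration, not Hopfield’s original presentation): store patterns as ±1 vectors in a Hebbian weight matrix, then recover a memory by repeatedly flipping neurons to agree with their local field, sliding downhill in the network’s energy.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hopfield(patterns):
    """Hebbian rule: the weight matrix is the (zero-diagonal) sum of outer products of the stored +/-1 patterns."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, sweeps=10):
    """Repeatedly flip each 'neuron' to agree with its local field, lowering the network's energy."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store two random patterns, then recover one of them from a corrupted copy.
patterns = rng.choice([-1, 1], size=(2, 100))
W = train_hopfield(patterns)
noisy = patterns[0].copy()
noisy[:20] *= -1  # flip 20% of the neurons
print(np.array_equal(recall(W, noisy), patterns[0]))  # usually True: the memory is recovered
```

That energy landscape, with stored memories sitting at its minima, is exactly the kind of structure spin-glass physics knows how to analyze.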

Hinton was not trained as a physicist, and in fact has said that he didn’t pursue physics in school because the math was too hard! Instead, he got a bachelor’s degree in psychology, and a PhD in the then-nascent field of artificial intelligence. In the 1980s, shortly after Hopfield published his network, Hinton proposed a network inspired by a closely related area of physics, one that describes temperature in terms of the statistics of moving particles. His network, called a Boltzmann machine, would be modified and made more efficient over the years, eventually becoming a key part of how artificial neural networks are “trained”.

These people obviously did something impressive. Was it physics?

In 2014, the Nobel prize in physics was awarded to the people who developed blue LEDs. Some of these people were trained as physicists, some weren’t: Wikipedia describes them as engineers. At the time, I argued that this was fine, because these people were doing “something physicists are good at”, studying the properties of a physical system. Ultimately, the thing that ties together different areas of physics is training: physicists are the people who study under other physicists, and go on to collaborate with other physicists. That can evolve in unexpected directions, from more mathematical research to touching on biology and social science…but as long as the work benefits from being linked to physics departments and physics degrees, it makes sense to say it “counts as physics”.

By that logic, we can probably call Hopfield’s work physics. Hinton’s case is more uncertain: his work was inspired by a physical system, but so are other ideas in computer science, like simulated annealing. Other ideas, like genetic algorithms, are inspired by biological systems: does that mean they count as biology?

Then there’s the question of the Nobel itself. If you want to get a Nobel in physics, it usually isn’t enough to transform the field. Your idea has to actually be tested against nature. Theoretical physics is its own discipline, with several ideas that have had an enormous influence on how people investigate new theories, ideas which have never gotten Nobels because the ideas were not intended, by themselves, to describe the real world. Hopfield networks and Boltzmann machines, similarly, do not exist as physical systems in the real world. They exist as computer simulations, and it is those computer simulations that are useful. But one can simulate many ideas in physics, and that doesn’t tend to be enough by itself to get a Nobel.

Ultimately, though, I don’t think this way of thinking about things is helpful. The Nobel isn’t capable of being “fair”, there’s no objective standard for Nobel-worthiness, and not much reason for there to be. The Nobel doesn’t determine which new research gets funded, nor does it incentivize anyone (except maybe Brian Keating). Instead, I think the best way of thinking about the Nobel these days is a bit like Disney.

When Disney was young, its movies had to stand or fall on their own merits. Now, with so many iconic movies in its history, Disney movies are received in the context of that history. Movies like Frozen or Moana aren’t just trying to be good movies by themselves, they’re trying to be Disney movies, with all that entails.

Similarly, when the Nobel was young, it was just another award, trying to reward things that Alfred Nobel might have thought deserved rewarding. Now, though, each Nobel prize is expected to be “Nobel-like”, an analogy between each laureate and the laureates of the past. When new people are given Nobels the committee is on some level consciously telling a story, saying that these people fit into the prize’s history.

This year, the Nobel committee clearly wanted to say something about AI. There is no Nobel prize for computer science, or even a Nobel prize for mathematics. (Hinton already has the Turing award, the most prestigious award in computer science.) So to say something about AI, the Nobel committee gave awards in other fields. In addition to physics, this year’s chemistry award went in part to the people behind AlphaFold2, a machine learning tool to predict what shapes proteins fold into. For both prizes, the committee had a reasonable justification. AlphaFold2 genuinely is an amazing advance in the chemistry of proteins, a research tool like nothing that came before. And the work of Hopfield and Hinton did lead ideas in physics to have an enormous impact on the world, an impact that is worth recognizing. Ultimately, though, whether or not these people should have gotten the Nobel doesn’t depend on that justification. It’s an aesthetic decision, one that (unlike Disney’s baffling decision to make live-action remakes of their most famous movies) doesn’t even need to impress customers. It’s a question of whether the choice is “Nobel-ish” enough, according to the tastes of the Nobel committee. The Nobel is essentially expensive fanfiction of itself.

And honestly? That’s fine. I don’t think there’s anything else they could be doing at this point.

At Quanta This Week, With a Piece on Multiple Imputation

I’ve got another piece in Quanta Magazine this week.

While my past articles in Quanta have been about physics, this time I’m stretching my science journalism muscles in a new direction. I was chatting with a friend who works for a pharmaceutical company, and he told me about a statistical technique that sounded ridiculous. Luckily, he’s a patient person, and after annoying him and a statistician family member for a while I understood that the technique actually made sense. Since I love sharing counterintuitive facts, I thought this would be a great story to share with Quanta’s readers. I then tracked down more statisticians, and annoyed them in a more professional way, finally resulting in the Quanta piece.

The technique is called multiple imputation, and is a way to deal with missing data. By filling in (“imputing”) missing information with good enough guesses, you can treat a dataset with missing data as if it was complete. If you do this imputation multiple times with the help of a source of randomness, you can also model how uncertain those guesses are, so your final statistical estimates are as uncertain as they ought to be. That, in a nutshell, is multiple imputation.
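To make that concrete, here’s a minimal sketch of the procedure in Python (my own toy illustration, not any particular statistics package; a full implementation would also fold in the uncertainty of the imputation model itself): fill in the missing values several times with a regression prediction plus random noise, run the analysis on each completed dataset, then pool the results, with the spread between imputations adding to the final uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y depends on x, but about 30% of the y values are missing.
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
y_obs = np.where(rng.random(n) < 0.3, np.nan, y)

def impute_once(x, y_obs):
    """Fill in missing y values with a regression prediction plus random noise, so each imputation differs."""
    observed = ~np.isnan(y_obs)
    slope, intercept = np.polyfit(x[observed], y_obs[observed], 1)
    resid_sd = np.std(y_obs[observed] - (slope * x[observed] + intercept))
    y_filled = y_obs.copy()
    missing = np.isnan(y_obs)
    y_filled[missing] = slope * x[missing] + intercept + rng.normal(scale=resid_sd, size=missing.sum())
    return y_filled

# Impute several times, analyze each completed dataset, then pool the results.
m = 20
estimates, variances = [], []
for _ in range(m):
    y_filled = impute_once(x, y_obs)
    estimates.append(y_filled.mean())            # the "analysis": estimate the mean of y
    variances.append(y_filled.var(ddof=1) / n)   # and its usual within-dataset variance
within = np.mean(variances)
between = np.var(estimates, ddof=1)
total_variance = within + (1 + 1 / m) * between  # the extra term reflects imputation uncertainty
print(np.mean(estimates), np.sqrt(total_variance))
```

The last few lines are the pooling step usually credited to Rubin: the total variance is the average within-dataset variance plus a between-imputation term, which is how the method keeps your final estimates as uncertain as they ought to be.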

In the piece, I try to cover the key points: how the technique came to be, how it spread, and why people use it. To complement that, in this post I wanted to get a little bit closer to the technical details, and say a bit about why some of the workarounds a naive physicist would come up with don’t actually work.

If you’re anything like me, multiple imputation sounds like a very weird way to deal with missing data. In order to fill in missing data, you have to use statistical techniques to find good guesses. Why can’t you just use the same techniques to analyze the data in the first place? And why do you have to use a random number generator to model your uncertainty, instead of just doing propagation of errors?

It turns out, you can sort of do both of these things. Full Information Maximum Likelihood is a method where you use all the data you have, and only the data you have, without imputing anything or throwing anything out. The catch is that you need a model, one with parameters you can try to find the most likely values for. Physicists usually do have a model like this (for example, the Standard Model), so I assumed everyone would. But for many things you want to measure in social science and medicine, you don’t have any such model, so multiple imputation ends up being more versatile in practice.
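For contrast, here’s a rough sketch of the full information maximum likelihood idea (again a toy of my own, which assumes the data really are bivariate normal, exactly the kind of model assumption discussed above): each row contributes the likelihood of whatever was actually observed, so complete rows use the joint distribution, incomplete rows use the marginal, and you maximize the total.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(2)

# Toy bivariate-normal data with some y values missing.
n = 300
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n)
y_obs = np.where(rng.random(n) < 0.3, np.nan, y)
complete = ~np.isnan(y_obs)

def neg_log_likelihood(params):
    """Complete rows use the joint (bivariate) density; rows missing y use only the marginal density of x."""
    mx, my, log_sx, log_sy, atanh_rho = params
    sx, sy, rho = np.exp(log_sx), np.exp(log_sy), np.tanh(atanh_rho)
    cov = [[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]]
    joint = multivariate_normal([mx, my], cov).logpdf(np.column_stack([x[complete], y_obs[complete]]))
    marginal = norm(mx, sx).logpdf(x[~complete])
    return -(joint.sum() + marginal.sum())

fit = minimize(neg_log_likelihood, x0=np.zeros(5))
print(fit.x[:2])  # estimated means of x and y, using every observed value and imputing nothing
```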

(If you want more detail on this, you need to read something written by actual statisticians. The aforementioned statistician family member has a website here that compares and contrasts multiple imputation with full information maximum likelihood.)

What about the randomness? It turns out there is yet another technique, called Fractional Imputation. While multiple imputation randomly chooses different values to impute, fractional imputation gives each value a weight based on the chance for it to come up. This gives the same result…if you can compute the weights, and store all the results. The impression I’ve gotten is that people are working on this, but it isn’t very well-developed.

“Just do propagation of errors”, the thing I wanted to suggest as a physicist, is much less of an option. In many of these datasets, you don’t attribute errors to the base data points to begin with. And on the other hand, if you do want to be more sophisticated, then something like propagation of errors is too naive. You have a variety of different variables, correlated with each other in different ways, giving a complicated multivariate distribution. Propagation of errors is already pretty fraught when you go beyond linear relationships (something they don’t tend to tell baby physicists); using it for this would be pushing it rather too far.
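For readers who haven’t seen it, propagation of errors is a linear approximation: Taylor-expand your function around the measured values, keep only the first-order terms, and the variance of the result is

```latex
\sigma_f^2 \;\approx\; \sum_i \left(\frac{\partial f}{\partial x_i}\right)^{\!2} \sigma_{x_i}^2
\;+\; 2 \sum_{i<j} \frac{\partial f}{\partial x_i}\,\frac{\partial f}{\partial x_j}\,\mathrm{Cov}(x_i, x_j)
```

which already assumes the function is close to linear over the range of the uncertainties, and that you know all the covariances. With many variables correlated in complicated ways, neither assumption holds up well.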

The thing I next wanted to suggest, “just carry the distribution through the calculation”, turns out to relate to something I’ve called the “one philosophical problem of my sub-field”. In the area of physics I’ve worked in, a key question is what it means to have “done” an integral. Here, one can ask what it means to do a calculation on a distribution. In both cases, the end goal is to get numbers out: physics predictions on the one hand, statistical estimates on the other. You can get those numbers by “just” doing numerics, using randomness and approximations to estimate the number you’re interested in. And in a way, that’s all you can do. Any time you “just do the integral” or “just carry around the distribution”, the thing you get in the end is some function: it could be a well-understood function like a sine or log, or it could be an exotic function someone defined for that purpose. But whatever function you get, you get numbers out of it the same way. A sine or a log, on a computer, is just an approximation scheme, a program that outputs numbers.

(But we do still care about analytic results, we don’t “just” do numerics. That’s because understanding the analytics helps us do numerics better, we can get more precise numbers faster and more stably. If you’re just carrying around some arbitrarily wiggly distribution, it’s not clear you can do that.)

So at this point, I get it. I’m still curious to see how Fractional Imputation develops, and when I do have an actual model I’d lean toward using Full Information Maximum Likelihood instead. (And there are probably some other caveats I may need to learn at some point!) But I’m comfortable with the idea that Multiple Imputation makes sense for the people using it.

The Mistakes Are the Intelligence

There’s a lot of hype around large language models, the foundational technology behind services like ChatGPT. Representatives of OpenAI have stated that, in a few years, these models might have “PhD-level intelligence”. On the other hand, at the time, ChatGPT couldn’t count the number of letter “r”s in the word “strawberry”. The model and the setup around it have improved, and OpenAI’s newer o1 model apparently now gets the correct three “r”s…but I’m sure it makes other silly mistakes, mistakes an intelligent human would never make.

The mistakes made by large language models are important, due to the way those models are used. If people are going to use them for customer service, writing transcripts, or editing grammar, they don’t want to introduce obvious screwups. (Maybe this means they shouldn’t use the models this way!)

But the temptation is to go further, to say that these mistakes are proof that these models are, and will always be, dumb, not intelligent. And that’s not the right way to think about intelligence.

When we talk about intelligent people, when we think about measuring things like IQ, we’re looking at a collection of different traits. These traits typically go together in humans: a human who is good at one will usually be good at the others. But from the perspective of computer science, these traits are very different.

Intelligent people tend to be good at following complex instructions. They can remember more, and reason faster. They can hold a lot in their head at once, from positions of objects to vocabulary.

These are all things that computers, inherently, are very good at. When Turing wrote down his abstract description of a computer, he imagined a machine with infinite memory, able to follow any instructions with perfect fidelity. Nothing could live up to that ideal, but modern computers are much closer to it than humans. “Computer” used to be a job, with rooms full of people (often women) hired to do calculations for scientific projects. We don’t do that any more, machines have made that work superfluous.

What’s more, the kind of processing a Turing machine does is probably the only way to reliably answer questions. If you want to make sure you get the correct answer every time, then it seems that you can’t do better than to use a sufficiently powerful computer.

But while computer-the-machine replaced computer-the-job, mathematician-the-job still exists. And that’s because not all intelligence is about answering questions reliably.

Alexander Grothendieck was a famous mathematician, known for his deep insights and powerful ideas. According to legend, when he was giving a talk that referred to prime numbers, someone in the audience asked him to name a specific prime. He named 57.

With a bit of work, any high-school student can figure out that 57, which equals 3 times 19, isn’t a prime number. A computer can easily figure out that 57 is not a prime number. Even ChatGPT knows that 57 is not a prime number.
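Easily, as in a couple of lines of Python:

```python
# Reliable rules: check every possible divisor. No insight required, and no mistakes either.
print([d for d in range(2, 57) if 57 % d == 0])  # [3, 19] -- so 57 is not prime
```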

But this doesn’t mean that Grothendieck was dumber than a high school student, or dumber than ChatGPT. Grothendieck was using a different kind of intelligence, the heuristic kind.

Heuristics are unreliable reasoning. They’re processes that get the right answer some of the time, but not all of the time. Because of that, though, they don’t have the same limits as reliable computer programs. Pick the right situation and the right conditions, and a heuristic can give you an answer faster than you could possibly get by following reliable rules.

Intelligent humans follow instructions well, but they also have good heuristics. They solve problems creatively, sometimes problems that are very hard for computers to address. People like Grothendieck make leaps of mathematical reasoning, guessing at the right argument before they have completely fleshed out a proof. This kind of intelligence is error-prone: rely on it, and you might claim 57 is prime. But at the moment, it’s our only intellectual advantage over machines.

Ultimately, ChatGPT is an advance in language processing, and language is a great example. Sentences don’t have definite meaning, we interpret what we read and hear in context, and sometimes our interpretation is wrong. Sometimes we hear words no-one actually said! It’s impossible, both for current technology and for the human brain, to process general text in a 100% reliable way. So large language models like GPT don’t do it reliably. They use an approximate model, a big complicated pile of rules tweaked over and over again until, enough of the time, they get the next word right in a text.

The kind of heuristic reasoning done by large language models is more effective than many people expected. Being able to predict the next word in a text unreliably also means you can write code unreliably, or count things unreliably, or do math unreliably. The models can’t do any of these things as well as an appropriately-chosen human, at least not with current resources.

But in the longer run, heuristic intelligence is precisely the type of intelligence we should aspire to…or fear. Right now, we hire humans to do intellectual work because they have good heuristics. If we could build a machine with equivalent or better heuristics for those tasks, then people would hire a lot fewer humans. And if you’re worried about AI taking over the world, you’re worried about AI coming up with shortcuts to our civilization, tricks we couldn’t anticipate or plan against that destroy everything we care about. Those tricks can’t come from following rules: if they did, we could discover them just as easily. They would have to come from heuristics, sideways solutions that don’t work all the time but happen to work the one time that matters.

So yes, until the latest release, ChatGPT couldn’t tell you how many “r”s are in “strawberry”. Counting “r”s is something computers could already do, because it’s something that can be done by following reliable rules. It’s also something you can do easily, if you follow reliable rules. ChatGPT impresses people because it can do some of the things you do, that can’t be done with reliable rules. If technology like it has any chance of changing the world, those are the kinds of things it will have to be able to do.

The Bystander Effect for Reviewers

I probably came off last week as a bit of an extreme “journal abolitionist”. This week, I wanted to give a couple caveats.

First, as a commenter pointed out, the main journals we use in my field are run by nonprofits. Physical Review Letters, the journal where we publish five-page papers about flashy results, is run by the American Physical Society. The Journal of High-Energy Physics, where we publish almost everything else, is run by SISSA, the International School for Advanced Studies in Trieste. (SISSA does use Springer, a regular for-profit publisher, to do the actual publishing.)

The journals are also funded collectively, something I pointed out here before but might not have been obvious to readers of last week’s post. There is an agreement, SCOAP3, where research institutions band together to pay the journals. Authors don’t have to pay to publish, and individual libraries don’t have to pay for subscriptions.

And this is a lot better than the situation in other fields, yeah! Though I’d love to quantify how much. I haven’t been able to find a detailed breakdown, but SCOAP3 pays around 1200 EUR per article published. What I’d like to do (but not this week) is to compare this to what other fields pay, as well as to publishing that doesn’t have the same sort of trapped audience, and to online-only free journals like SciPost. (For example, publishing actual physical copies of journals at this point is sort of a vanity thing, so maybe we should compare costs to vanity publishers?)

Second, there’s reviewing itself. Even without traditional journals, one might still want to keep peer review.

What I wanted to understand last week was what peer review does right now, in my field. We read papers fresh off the arXiv, before they’ve gone through peer review. Authors aren’t forced to update the arXiv with the journal version of their paper: if they prefer to keep another version, even a version that was rejected by the reviewers, they’re free to do so, and most of us wouldn’t notice. And the sort of in-depth review that happens in peer review also happens without it. When we have journal clubs and nominate someone to present a recent paper, or when we try to build on a result or figure out why it contradicts something we thought we knew, we go through the same kind of in-depth reading that (in the best cases) reviewers do.

But I think I’ve hit upon something review does that those kinds of informal things don’t. It gets us to speak up about what we find.

I presented at a journal club recently. I read through a bombastic new paper, figured out what I thought was wrong with it, and explained it to my colleagues.

But did I reach out to the author? No, of course not, that would be weird.

Psychologists talk about the bystander effect. If someone collapses on the street, and you’re the only person nearby, you’ll help. If you’re one of many, you’ll wait and see if someone else helps instead.

I think there’s a bystander effect for correcting people. If someone makes a mistake and publishes something wrong, we’ll gripe about it to each other. But typically, we won’t feel like it’s our place to tell the author. We might get into a frustrating argument, there wouldn’t be much in it for us, and it might hurt our reputation if the author is well-liked.

(People do speak up when they have something to gain, of course. That’s why when you write a paper, most of the people emailing you won’t be criticizing the science: they’ll be telling you you need to cite them.)

Peer review changes the expectations. Suddenly, you’re expected to criticize, it’s your social role. And you’re typically anonymous, you don’t have to worry about the consequences. It becomes a lot easier to say what you really think.

(It also becomes quite easy to say lazy stupid things, of course. This is why I like setups like SciPost, where reviews are made public even when the reviewers are anonymous. It encourages people to put some effort in, and it means that others can see that a paper was rejected for bad reasons and put less stock in the rejection.)

I think any new structure we put in place should keep this feature. We need to preserve some way to designate someone a critic, to give someone a social role that lets them let loose and explain why someone else is wrong. And having these designated critics around does help my field. The good criticisms get implemented in the papers, the authors put the new versions up on arXiv. Reviewing papers for journals does make our science better…even if none of us read the journal itself.

Why Journals Are Sticky

An older professor in my field has a quirk: every time he organizes a conference, he publishes all the talks in a conference proceeding.

In some fields, this would be quite normal. In computer science, where progress flows like a torrent, new developments are announced at conferences long before they have the time to be written up carefully as a published paper. Conference proceedings are summaries of what was presented at the conference, published so that anyone can catch up on the new developments.

In my field, this is rarer. A few results at each conference will be genuinely new, never-before-published discoveries. Most, though, are talks on older results, results already available online. Writing them up again in summarized form as a conference proceeding seems like a massive waste of time.

The cynical explanation is that this professor is doing this for the citations. Each conference proceeding one of his students publishes is another publication on their CV, another work that they can demand people cite whenever someone uses their ideas or software, something that puts them above others’ students without actually doing any extra scientific work.

I don’t think that’s how this professor thinks about it, though. He certainly cares about his students’ careers, and will fight for them to get cited as much as possible. But he asks everyone at the conference to publish a proceeding, not just his students. I think he’d argue that proceedings are helpful, that they can summarize papers in new ways and make them more accessible. And if they give everyone involved a bit more glory, if they let them add new entries to their CV and get fancy books on their shelves, so much the better for everyone.

My guess is, he really believes something like that. And I’m fairly sure he’s wrong.

The occasional conference proceeding helps, but only because it makes us more flexible. Sometimes, it’s important to let others know about a new result that hasn’t been published yet, and we let conference proceedings go into less detail than a full published paper, so this can speed things up. Sometimes, an old result can benefit from a new, clearer explanation, which normally couldn’t be published on its own unless it counted as a new result (or as lecture notes). It’s good to have the option of a conference proceeding.

But there is absolutely no reason to have one for every single talk at a conference.

Between the cynical reason and the explicit reason, there’s the banal one. This guy insists on conference proceedings because they were more useful in the past, because they’re useful in other fields, and because he’s been doing them himself for years. He insists on them because to him, they’re a part of what it means to be a responsible scientist.

And people go along with it. Because they don’t want to get into a fight with this guy, certainly. But also because it’s a bit of extra work that could give a bit of a career boost, so what’s the harm?

I think something similar to this is why academic journals still work the way they do.

In the past, journals were the way physicists heard about new discoveries. They would get each edition in the mail, and read up on new developments. The journal needed to pay professional copyeditors and printers, so they needed money, and they got that money from investors by being part of for-profit companies that sold shares.

Now, though, physicists in my field don’t read journals. We publish our new discoveries online on a non-profit website, formatting them ourselves with software that uses the same programming skills we use in the rest of our professional lives. We then discuss the papers in email threads and journal club meetings. When a paper is wrong, or missing something important, we tell the author, and they fix it.

Oh, and then after that we submit the papers to the same for-profit journals and the same review process that we used to use before we did all this, listing the journals that finally accept the papers on our CVs.

Why do we still do that?

Again, you can be cynical. You can accuse the journals of mafia-ish behavior, you can tie things back to the desperate need to publish in high-ranked journals to get hired. But I think the real answer is a bit more innocent, and human, than that.

Imagine that you’re a senior person in the field. You may remember the time before we had all of these nice web-based publishing options, when journals were the best way to hear about new developments. More importantly than that, though, you’ve worked with these journals. You’ve certainly reviewed papers for them, everyone in the field does that, but you may have also served as an editor, tracking down reviewers and handling communication between the authors and the journal. You’ve seen plenty of cases where the journal mattered, where tracking down the right reviewers caught a mistake or shot down a crackpot’s ambitions, where the editing cleaned something up or made a work appear more professional. You think of the journals as having high standards, standards you have helped to uphold: when choosing between candidates for a job, you notice that one has several papers in Physical Review Letters, and remember papers you’ve rejected for not meeting what you intuited were that journal’s standards. To you, journals are a key part of being a responsible scientist.

Does any of that make journals worth it, though?

Well, that depends on costs. It depends on alternatives. It depends not merely on what the journals catch, but on how often they do it, and how much would have been caught on its own. It depends on whether the high standards you want to apply to job applicants are already being applied by the people who write their recommendation letters and establish their reputations.

And you’re not in a position to evaluate any of that, of course. Few people are, unless they spend a ton of time thinking about scientific publishing.

And thus, for the non-senior people, there’s not much reason to push back. One hears a few lofty speeches about Elsevier’s profits, and dreams about the end of the big for-profit journals. But most people aren’t cut out to be crusaders or reformers, especially when they signed up to be scientists. Most people are content not to annoy the most respected people in their field by telling them that something they’ve spent an enormous amount of time on is now pointless. Most people want to be seen as helpful by these people, to not slack off on work like reviewing that they argue needs doing.

And most of us have no reason to think we know that much better, anyway. Again, we’re scientists, not scientific publishing experts.

I don’t think it’s good practice to accuse people of cognitive biases. Everyone thinks they have good reasons to believe what they believe, and the only way to convince them is to address those reasons.

But the way we use journals in physics these days is genuinely baffling. It’s hard to explain, it’s the kind of thing people have been looking quizzically at for years. And this kind of explanation is the only one I’ve found that matches what I’ve seen. Between the cynical explanation and the literal arguments, there’s the basic human desire to do what seems like the responsible thing. That tends to explain a lot.

Grad Students Don’t Have Majors

A pet peeve of mine:

Suppose you’re writing a story, and one of your characters is studying for a PhD in linguistics. You could call them a grad student or a PhD student, a linguistics student or even just a linguist. But one thing you absolutely shouldn’t call them is a linguistics major.

Graduate degrees, from the PhD to medical doctors to master’s degrees, don’t have majors. Majors are a very specific concept, from a very specific system: one that only applies to undergraduate degrees, and even for undergraduates it is uncommon to unheard of in most of the world.

You can think of “major” as short for “major area of study”. In many universities in the US, bachelor’s degree students enter not as students of a particular topic, but as “undecided” students. They then have some amount of time to choose a major. Majors define some of your courses, but not all of them. You can also have “minors”, minor areas of study where you take a few courses from another department, and you typically have to take some number of general courses from other departments as well. Overall, the US system for bachelor’s students is quite flexible. The idea is that students can choose from a wide range of courses offered by different departments at a university, focusing on one department’s program but sampling from many. The major is your major focus, but not your only focus.

Basically no other degree works this way.

In Europe, bachelor’s degree students sign up as students of a specific department. By default, all of their courses will be from that department. If you have to learn more math, or writing skills, then normally your department will have its own math or writing course, focused on the needs of their degree. It can be possible to take courses from other departments, but it’s not common and it’s often not easy, sometimes requiring special permission. You’re supposed to have done your general education as a high school student, and be ready to focus on a particular area.

Graduate degrees in the US also don’t work this way. A student in medical school or law school isn’t a medicine major or a law major, they’re a med student or a law student. They typically don’t take courses from the rest of the university at that point, just from the med school or the law school. A student studying for an MBA (Master’s in Business Administration) is similarly a business student, not the business major they might have been during their bachelor’s studies. And a student studying for a PhD is a PhD student, a student of a specific department. They might still have the option of taking classes outside of that department (for example, I took classes in science communication). But these are special exceptions. A linguistics PhD student will take almost all of their classes from the linguistics department, a physics PhD student will take almost all of their classes from the physics department. They don’t have majors.

So the next time you write a story with people with advanced degrees, keep this in mind. Majors are a thing for US bachelor’s degrees, and a few similar systems. Anything else, don’t call it a major!

The Machine Learning for Physics Recipe

Last week, I went to a conference on machine learning for physics. Machine learning covers a huge variety of methods and ideas, several of which were on full display. But again and again, I noticed a pattern. The people who seemed to be making the best use of machine learning, the ones who were the most confident in their conclusions and getting the most impressive results, the ones who felt like they had a whole assembly line instead of just a prototype, all of them were doing essentially the same thing.

This post is about that thing. If you want to do machine learning in physics, these are the situations where you’re most likely to see a benefit. You can do other things, and they may work too. But this recipe seems to work over and over again.

First, you need simulations, and you need an experiment.

Your experiment gives you data, and that data isn’t easy to interpret. Maybe you’ve embedded a bunch of cameras in the Antarctic ice, and your data tells you when they trigger and how bright the light is. Maybe you’ve surrounded a particle collision with layers of silicon, and your data tells you how much electric charge the different layers absorb. Maybe you’ve got an array of telescopes focused on a black hole far far away, and your data are pixels gathered from each telescope.

You want to infer, from your data, what happened physically. Your cameras in the ice saw signs of a neutrino: you want to know how much energy it had and where it was coming from. Your silicon is absorbing particles: what kind are they, and what processes did they come from? The black hole might have the rings predicted by general relativity, but it might have weirder rings from a variant theory.

In each case, you can’t just calculate the answer you need. The neutrino streams past, interacting with the ice and the cameras in unpredictable ways. People can write down clean approximations for particles in the highest-energy part of a collision, but once those particles start cooling down the process becomes so messy that no straightforward formula describes them. Your telescopes fuzz and pixellate what they see, and their outputs have to be assembled together in a complicated way, so that there is no one guaranteed answer that establishes what they saw.

In each case, though, you can use simulations. If you specify in advance the energy and path of the neutrino, you can use a computer to predict how much light your cameras should see. If you know what particles you started with, you can run sophisticated particle physics code to see what “showers” of particles you eventually find. If you have the original black hole image, you can fuzz and pixellate and take it apart to match what your array of telescopes will do.

The problem is that, for the experiments, you can’t anticipate what you’ll see, and you don’t know the physical inputs in advance. And simulations, while cheaper than experiments, aren’t cheap. You can’t run a simulation for every possible input and then check them against the experiments. You need to fill in the gaps: run some simulations, and then use some theory, some statistical method or human-tweaked guess, to figure out how to interpret your experiments.

Or, you can use machine learning. You train a machine learning model, one well-suited to the task (anything from the old standby of boosted decision trees to an old fad of normalizing flows to the latest hotness of graph neural networks). You run a bunch of simulations, as many as you can reasonably afford, and you use that data for training, making a program that matches the simulated results to the physical inputs you want to find. This program will be less reliable than your simulations, but it will run much faster. If it’s reliable enough, you can use it instead of the old human-made guesses and tweaks. You now have an efficient, reliable way to go from your raw experiment data to the physical questions you actually care about.
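To make the recipe concrete, here’s a minimal sketch in Python (an invented toy problem, with a gradient-boosted regressor standing in for whatever model actually suits your data): simulate detector signals for known physical inputs, train the model to map signals back to those inputs, then apply it to data where the true input is unknown.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)

def simulate(energy):
    """Stand-in for an expensive detector simulation: smeared, nonlinear 'sensor' readouts for each energy."""
    response = np.sqrt(energy)[:, None] * np.linspace(0.5, 1.5, 8)
    return response + rng.normal(scale=0.3, size=response.shape)

# Training set: run the simulation for inputs we choose ourselves.
true_energy = rng.uniform(1, 100, size=5000)
signals = simulate(true_energy)

# Train a model to invert the simulation: detector signals in, physical quantity out.
model = GradientBoostingRegressor().fit(signals, true_energy)

# "Experiment": signals whose true energy we pretend not to know (here, cheekily, more simulation).
test_energy = rng.uniform(1, 100, size=1000)
reconstructed = model.predict(simulate(test_energy))
print(np.mean(np.abs(reconstructed - test_energy)))  # how far off the reconstruction typically is
```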

Crucially, each of the elements in this recipe is essential.

You need a simulation. If you just have an experiment with no simulation, then you don’t have a way to interpret the results, and training a machine to reproduce the experiment won’t tell you anything new.

You need an experiment. If you just have simulations, training a machine to reproduce them also doesn’t tell you anything new. You need some reason to want to predict the results of the simulations, beyond just seeing what happens in between the cases you simulated, which the machine can’t reliably tell you anyway.

And you need to not have anything better than the simulation. If you have a theory where you can write out formulas for what happens then you don’t need machine learning, you can interpret the experiments more easily without it. This applies if you’ve carefully designed your experiment to measure something easy to interpret, like the ratio of rates of two processes that should be exactly the same.

These aren’t the only things you need. You also need to do the whole thing carefully enough that you understand your uncertainties well: not just what the machine predicts, but how often it gets things wrong, and whether it’s likely to do something strange when you use it on the actual experiment. But if you can do that, you have a reliable recipe, one many people have followed successfully before. You have a good chance of making things work.

This isn’t the only way physicists can use machine learning. There are people looking into something more akin to what’s called unsupervised learning, where you look for strange events in your data as clues for what to investigate further. And there are people like me, trying to use machine learning on the mathematical side, to guess new formulas and new heuristics. There is likely promise in many of these approaches. But for now, they aren’t a recipe.

HAMLET-Physics 2024

Back in January, I announced I was leaving France and leaving academia. Since then, it hasn’t made much sense for me to go to conferences, even the big conference of my sub-field or the conference I organized.

I did go to a conference this week, though. I had two excuses:

  1. The conference was here in Copenhagen, so no travel required.
  2. The conference was about machine learning.

HAMLET-Physics, or How to Apply Machine Learning to Experimental and Theoretical Physics, had the additional advantage of having an amusing acronym. Thanks to generous support by Carlsberg and the Danish Data Science Academy, they could back up their choice by taking everyone on a tour of Kronborg (better known in the English-speaking world as Elsinore).

This conference’s purpose was to bring together physicists who use machine learning, machine learning-ists who might have something useful to say to those physicists, and other physicists who don’t use machine learning yet but have a sneaking suspicion they might have to at some point. As a result, the conference was super-interdisciplinary, with talks by people addressing very different problems with very different methods.

Interdisciplinary conferences are tricky. It’s easy for the different groups of people to just talk past each other: everyone shows up, gives the same talk they always do, socializes with the same friends they always meet, then leaves.

There were a few talks that fit that mold, and were so technical that only a few people understood them. But most were better. The majority of the speakers did really well at presenting their work in a way that would be understandable and even exciting to people outside their field, while still having enough detail that we all learned something. I was particularly impressed by Thea Aarestad’s keynote talk on Tuesday, a really engaging view of how machine learning can be used under the extremely tight time constraints within which LHC experiments need to decide whether to record incoming data.

For the social aspect, the organizers had a cute/gimmicky/machine-learning-themed solution. Based on short descriptions and our public research profiles, they clustered attendees, plotting the connections between them. They then used ChatGPT to write conversation prompts between any two people on the basis of their shared interests. In practice, this turned out to be amusing but totally unnecessary. We were drawn to speak to each other not by conversation prompts, but by a drive to learn from each other. “Why do you do it that way?” was a powerful conversation-starter, as was “what’s the best way to do this?” Despite the different fields, the shared methodologies gave us strong reasons to talk, and meant that people were very rarely motivated to pick one of ChatGPT’s “suggestions”.

Overall, I got a better feeling for how machine learning is useful in physics (and am planning a post on that in future). I also got some fresh ideas for what to do myself, and a bit of a picture of what the future holds in store.

Why Quantum Gravity Is Controversial

Merging quantum mechanics and gravity is a famously hard physics problem. Explaining why merging quantum mechanics and gravity is hard is, in turn, a very hard science communication problem. The more popular descriptions tend to lead to misunderstandings, and I’ve posted many times over the years to chip away at those misunderstandings.

Merging quantum mechanics and gravity is hard…but despite that, there are proposed solutions. String Theory is supposed to be a theory of quantum gravity. Loop Quantum Gravity is supposed to be a theory of quantum gravity. Asymptotic Safety is supposed to be a theory of quantum gravity.

One of the great virtues of science and math is that we are, eventually, supposed to agree. Philosophers and theologians might argue to the end of time, but in math we can write down a proof, and in science we can do an experiment. If we don’t yet have the proof or the experiment, then we should reserve judgement. Either way, there’s no reason to get into an unproductive argument.

Despite that, string theorists and loop quantum gravity theorists and asymptotic safety theorists, famously, like to argue! There have been bitter, vicious, public arguments about the merits of these different theories, and decades of research doesn’t seem to have resolved them. To an outside observer, this makes quantum gravity seem much more like philosophy or theology than like science or math.

Why is there still controversy in quantum gravity? We can’t do quantum gravity experiments, sure, but if that were the problem physicists could just write down the possibilities and leave it at that. Why argue?

Some of the arguments are for silly aesthetic reasons, or motivated by academic politics. Some are arguments about which approaches are likely to succeed in future, which as always is something we can’t actually reliably judge. But the more justified arguments, the strongest and most durable ones, are about a technical challenge. They’re about something called non-perturbative physics.

Most of the time, when physicists use a theory, they’re working with an approximation. Instead of the full theory, they’re making an assumption that makes the theory easier to use. For example, if you assume that the velocity of an object is small, you can use Newtonian physics instead of special relativity. Often, physicists can systematically relax these assumptions, including more and more of the behavior of the full theory and getting a better and better approximation to the truth. This process is called perturbation theory.
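The velocity example is the textbook case: expand the relativistic energy in powers of the small ratio v/c, and the first terms give you Newtonian physics back, plus ever-smaller corrections,

```latex
E \;=\; \frac{m c^2}{\sqrt{1 - v^2/c^2}}
\;=\; m c^2 \;+\; \tfrac{1}{2} m v^2 \;+\; \tfrac{3}{8}\,\frac{m v^4}{c^2} \;+\; \cdots
```

Each extra term sharpens the approximation, as long as v/c really is small. Non-perturbative effects are precisely the things a series like this can never see, no matter how many terms you keep.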

Other times, this doesn’t work well. The full theory has some trait that isn’t captured by the approximations, something that hides away from these systematic tools. The theory has some important aspect that is non-perturbative.

Every proposed quantum gravity theory uses approximations like this. The theory’s proponents try to avoid these approximations when they can, but often they have to approximate and hope they don’t miss too much. The opponents, in turn, argue that the theory’s proponents are missing something important, some non-perturbative fact that would doom the theory altogether.

Asymptotic Safety is built on top of an approximation, one different from what other quantum gravity theorists typically use. To its proponents, work using their approximation suggests that gravity works without any special modifications, that the theory of quantum gravity is easier to find than it seems. Its opponents aren’t convinced, and think that the approximation is missing something important which shows that gravity needs to be modified.

In Loop Quantum Gravity, the critics think their approximation misses space-time itself. Proponents of Loop Quantum Gravity have been unable to prove that their theory, if you take all the non-perturbative corrections into account, doesn’t just roll up all of space and time into a tiny spiky ball. They expect that their theory should allow for a smooth space-time like we experience, but the critics aren’t convinced, and without being able to calculate the non-perturbative physics neither side can convince the other.

String Theory was founded and originally motivated by perturbative approximations. Later, String Theorists figured out how to calculate some things non-perturbatively, often using other simplifications like supersymmetry. But core questions, like whether or not the theory allows a positive cosmological constant, seem to depend on non-perturbative calculations that the theory gives no instructions for how to do. Some critics don’t think there is a consistent non-perturbative theory at all, that the approximations String Theorists use don’t actually approximate to anything. Even within String Theory, there are worries that the theory might try to resist approximation in odd ways, becoming more complicated whenever a parameter is small enough that you could use it to approximate something.

All of this would be less of a problem with real-world evidence. Many fields of science are happy to use approximations that aren’t completely rigorous, as long as those approximations have a good track record in the real world. In general though, we don’t expect evidence relevant to quantum gravity any time soon. Maybe we’ll get lucky, and studies of cosmology will reveal something, or an experiment on Earth will have a particularly strange result. But nature has no obligation to help us out.

Without evidence, though, we can still make mathematical progress. You could imagine someone proving that the various perturbative approaches to String Theory become inconsistent when stitched together into a full non-perturbative theory. Alternatively, you could imagine someone proving that a theory like String Theory is unique, that no other theory can do some key thing that it does. Either of these seems unlikely to come any time soon, and most researchers in these fields aren’t pursuing questions like that. But the fact the debate could be resolved means that it isn’t just about philosophy or theology. There’s a real scientific, mathematical controversy, one rooted in our inability to understand these theories beyond the perturbative methods their proponents use. And while I don’t expect it to be resolved any time soon, one can always hold out hope for a surprise.

Toy Models

In academia, scientists don’t always work with what they actually care about. A lot of the time, they use what academics call toy models. A toy model can be a theory with simpler mathematics than the theories that describe the real world, but it can also be something that is itself real, just simpler or easier to work with, like nematodes, fruit flies, or college students.

Some people in industry seem to think this is all academics do. I’ve seen a few job ads that emphasize experience dealing with “real-world data”, and a few people skeptical that someone used to academia would be able to deal with the messy challenges of the business world.

There’s a grain of truth to this, but I don’t think industry has a monopoly on mess. To see why, let’s think about how academics write computer code.

There are a lot of things that one is in-principle supposed to do to code well, and most academics do none of them. Good code has test suites, so that if you change something you can check whether it still works by testing it on all the things that could go wrong. Good code is modular, with functions doing specific things and re-used whenever appropriate. Good code follows shared conventions, so that others can pick up your code and understand how you did it.
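(For readers outside the coding world: a test suite is just a file full of checks like the following, a minimal sketch in Python’s pytest style, with a made-up function standing in for the code being tested.)

```python
# test_analysis.py -- run with `pytest`; each function checks one thing the code must get right.
from analysis import fit_peak  # hypothetical function from an equally hypothetical project

def test_recovers_known_peak():
    # A signal constructed by hand, so the right answer is known in advance: the peak is at index 3.
    assert fit_peak([0, 1, 4, 9, 4, 1, 0]) == 3

def test_handles_empty_input():
    assert fit_peak([]) is None
```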

Some academics do these things, for example those who build numerical simulations on supercomputers. But for most academics, coding best-practices range from impractical to outright counterproductive. Testing is perhaps the clearest example. To design a test suite, you have to have some idea of what kinds of things your code will run into: what kind of input you expect, and what the output is supposed to be. Many academic projects, though, are the first of their kind. Academics code up something to do a calculation nobody has done before, not knowing the result, or they make code to analyze a dataset nobody has worked with before. By the time they understand the problem well enough to write a test suite, they’ve already solved the problem, and they’re on to the next project, which may need something totally different.

From the perspective of these academics, if you have a problem well-defined enough that you can build a test suite, well enough that you can have stable conventions and reusable functions…then you have a toy model, not a real problem from the real world.

…and of course, that’s not quite fair either, right?

The truth is, academics and businesspeople want to work with toy models. Toy models are well-behaved, and easy, and you can do a lot with them. The real world isn’t a toy model…but it can be, if you make it one.

This means planning your experiments, whether in business or in science. It means making sure the data you gather is labeled and organized before you begin. It means coming up with processes, and procedures, and making as much of the work as possible a standardized, replicable thing. That’s desirable regardless, whether you’re making a consistent product instead of artisanal one-offs or a well-documented scientific study that another team can replicate.

Academia and industry both must handle mess. They handle different kinds of mess in different circumstances, and manage it in different ways, and this can be a real challenge for someone trying to go from one world to another. But neither world is intrinsically messier or cleaner. Nobody has a monopoly on toy models.