Monthly Archives: December 2023

Models, Large Language and Otherwise

In particle physics, our best model goes under the unimaginative name “Standard Model”. The Standard Model models the world in terms of interactions of different particles, or more properly quantum fields. The fields have different masses and interact with different strengths, and each mass and interaction strength is a parameter: a “free” number in the model, one we have to fix based on data. There are nineteen parameters in the Standard Model (not counting the parameters for massive neutrinos, which were discovered later).
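If you’re curious where that nineteen comes from, here’s the usual bookkeeping (one common convention; exactly which two Higgs-sector numbers you pick varies between textbooks):

```latex
% A standard counting of the Standard Model's free parameters:
\underbrace{9}_{\text{fermion masses}}
+ \underbrace{4}_{\text{CKM quark mixing}}
+ \underbrace{3}_{\text{gauge couplings}}
+ \underbrace{2}_{\text{Higgs sector}}
+ \underbrace{1}_{\theta_{\text{QCD}}}
= 19
```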

In principle, we could propose a model with more parameters that fits the data better. With enough parameters, one can fit almost anything. That’s cheating, though, and it’s a type of cheating we know how to catch. We have statistical tests that let us estimate how impressed we should be when a model matches the data. If a model is just getting ahead on extra parameters without capturing something real, we can spot that, because it gets a worse score on those tests. A model with a bad score might match the data you used to fix its parameters, but it won’t predict future data, so it isn’t actually useful. Right now the Standard Model (plus neutrino masses) gets the best score on those tests, when fitted to all the data we have access to, so we think of it as our best and most useful model. If someone proposed a model that got a better score, we’d switch; but so far, no-one has managed.
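To see how such a test works in practice, here’s a toy sketch in Python (nothing to do with real particle-physics fits, just an illustration of the principle): fit the same noisy data with a two-parameter line and a ten-parameter polynomial, then score both with the Akaike Information Criterion, one standard test that charges a penalty for every free parameter.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "experiment": data secretly generated from a line, plus noise.
x = np.linspace(0, 1, 30)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)

def aic(y, y_fit, n_params):
    """Akaike Information Criterion for a least-squares fit (up to a
    constant): rewards a close fit, but charges 2 points per free
    parameter. Lower is better."""
    n = y.size
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * n_params

for degree in (1, 9):
    coeffs = np.polyfit(x, y, degree)
    y_fit = np.polyval(coeffs, x)
    print(f"degree {degree}: {degree + 1} parameters,"
          f" AIC = {aic(y, y_fit, degree + 1):.1f}")
```

The ten-parameter polynomial hugs the data more closely, but its extra parameters mostly chase noise, and the penalty term should leave the line with the better (lower) score.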

Physicists care about this not just because a good model is useful. We think that the best model is, in some sense, how things really work. The fact that the Standard Model fits the data best doesn’t just mean we can use it to predict more data in the future: it means that somehow, deep down, the world is made up of quantum fields the way the Standard Model describes.

If you’ve been following developments in machine learning, or AI, you might have heard the word “model” slung around. For example, GPT is a Large Language Model, or LLM for short.

Large Language Models are more like the Standard Model than you might think. Just as the Standard Model models the world in terms of interacting quantum fields, Large Language Models model the world in terms of a network of connections between artificial “neurons”. Just as particles have different interaction strengths, pairs of neurons have different connection weights. Those connection weights are the parameters of a Large Language Model, in the same way that the masses and interaction strengths of particles are the parameters of the Standard Model. The parameters for a Large Language Model are fixed by a giant corpus of text data, almost the whole internet reduced to a string of bytes that the LLM needs to match, in the same way the Standard Model needs to match data from particle collider experiments. The Standard Model has nineteen parameters; Large Language Models have billions.
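If you want to see the moving parts, here’s a deliberately tiny sketch in Python (a toy curve-fitting network, nowhere near a real LLM): every entry of the weight matrices below is one of the model’s parameters, and training just nudges each of them until the output matches the data, the same role the Standard Model’s nineteen numbers play.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy network: 1 input -> 8 hidden "neurons" -> 1 output.
# Every entry of W1, b1, W2, b2 is a connection weight: a free
# parameter that the data will fix.
W1, b1 = rng.normal(size=(8, 1)), np.zeros((8, 1))
W2, b2 = rng.normal(size=(1, 8)), np.zeros((1, 1))

x = np.linspace(-1, 1, 50).reshape(1, -1)
y = np.sin(3 * x)  # the "corpus" this model has to match

lr = 0.1
for step in range(5000):
    # Forward pass: tanh hidden layer, linear output.
    h = np.tanh(W1 @ x + b1)
    err = (W2 @ h + b2) - y
    # Backward pass: move every parameter a little downhill
    # on the mean squared error.
    dW2 = err @ h.T / x.size
    db2 = err.mean(axis=1, keepdims=True)
    dh = (W2.T @ err) * (1 - h**2)
    dW1 = dh @ x.T / x.size
    db1 = dh.mean(axis=1, keepdims=True)
    W1, b1 = W1 - lr * dW1, b1 - lr * db1
    W2, b2 = W2 - lr * dW2, b2 - lr * db2

print("free parameters:", W1.size + b1.size + W2.size + b2.size)  # 25
print("mean squared error:", float(np.mean(err**2)))
```

This one has twenty-five parameters fixed against a sine curve; GPT-scale models have billions, fixed against a corpus of text, but the logic is the same.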

Increasingly, machine learning models seem to capture things better than other types of models. If you want to know how a protein is going to fold, you can try to make a simplified model of how its atoms and molecules interact with each other…but instead, you can make your model a neural network. And that turns out to work better. If you’re a bank and you want to know how many of your clients will default on their loans, you could ask an economist to make a macroeconomic model…or, you can just make your model a neural network too.

In physics, we think that the best model is the model that is closest to reality. Clearly, though, this can’t be what’s going on here. Real proteins don’t fold based on neural networks, and neither do real economies. Both economies and folding proteins are very complicated, so any model we can use right now won’t be what’s “really going on”, unlike the comparatively simple world of particle physics. Still, it seems weird that neural networks can work better than the simplified economic or chemical models, even though they’re very obviously not really what’s going on. Is there another way to think about them?

I used to get annoyed at people using the word “AI” to refer to machine learning models. In my mind, AI was the thing that shows up in science fiction: machines that can think as well as or better than humans. (The actual term of art for this is AGI, artificial general intelligence.) Machine learning, and LLMs in particular, felt like a meaningful step towards that kind of AI, but they clearly aren’t there yet.

Since then, I’ve been convinced that the term isn’t quite so annoying. The field isn’t called AI because its researchers are creating a human-equivalent sci-fi intelligence. It’s called AI because the things they build are inspired by how human intelligence works.

As humans, we model things with mathematics, but we also model them with our own brains. Consciously, we might think about objects and their places in space, about people and their motivations and actions, about canonical texts and their contents. But all of those things cash out in our neurons. Anything we think, anything we believe, any model we can actually apply by ourselves in our own lives, is a model embedded in a neural network. The brain is quite a bit more complicated than an LLM, but it’s very much still a kind of neural network.

Because humans are alright at modeling a variety of things, because we can see and navigate the world and persuade and manipulate each other, we know that neural networks can do these things. A human brain may not be the best model for any given phenomenon: an engineer can model the flight of a baseball with math much better than the best baseball player can with their unaided brain. But human brains still tend to be fairly good models for a wide variety of things. Evolution has selected them to be.

So with that in mind, it shouldn’t be too surprising that neural networks can model things like protein folding. Even if proteins don’t fold based on neural networks, even if the success of AlphaFold isn’t capturing the actual details of the real world the way the Standard Model does, the model is capturing something. It’s loosely capturing the way a human would think about the problem, if you gave that human all the data they needed. And humans are, and remain, pretty good at thinking! So we have reason, not rigorous, but at least intuitive reason, to think that neural networks will actually be good models of things.

Newtonmas Pageants

Newtonmas: because if you’re going to celebrate someone supposedly born on December 25, you might as well pick someone whose actual birthday was within two weeks of that.

My past Newtonmas posts have tended to be about gifts, which is a pretty easy theme. But Christmas, for some, isn’t just about Santa Claus delivering gifts, but about someone’s birth. Children put on plays acting out different characters. In Mexico, they include little devils, who try to tempt the shepherds away from visiting Jesus.

Could we do this kind of thing for Newtonmas? A Newtonmas Pageant?

The miraculous child

Historians do know a bit about Newton’s birth. His father (also named Isaac Newton) died two months before he was born. Newton was born prematurely; his mother apparently claimed he could fit inside a quart mug.

The mug may be surprising (it comes in quarts?), but there isn’t really enough material for a proper story here. That said, it would be kind of beside the point if there were. If we’re celebrating science, maybe the story of one particular child is not the story we should be telling.

Instead, we can tell stories about scientific ideas, which often have quite dramatic histories. Instead of running from inn to inn looking for rooms, scientists run from journal to journal trying to publish. Instead of frankincense, myrrh, and gold, there are Nobel prizes. Instead of devils tempting the shepherds away, you have tempting but unproductive ideas. For example, Newton battled ideas from Descartes and Leibniz that suggested gravity could be caused by a vortex of fluid. The idea was popular because it was mechanical-sounding: no invisible force of gravity needed. But it didn’t work, and Newton spent half of the Principia, the book where he wrote down his new science, building a theory of fluids just so he could show it didn’t work.

So for this Newtonmas, tell the story of a scientific idea: one that had a difficult birth but that eventually brought pilgrims and gifts from miles around.

Merry Newtonmas, everyone!

If That Measures the Quantum Vacuum, Anything Does

Sabine Hossenfelder has gradually transitioned from critical written content about physics to YouTube videos, mostly short science news clips with the occasional longer piece. Luckily for us in the unable-to-listen-to-podcasts demographic, the transcripts of these videos are occasionally published on her organization’s Substack.

Unluckily, it feels like the short news format is leading to some lazy metaphors. There are stories science journalists sometimes tell because they’re easy and familiar, even if they don’t really make sense. Scientists often tell them too, for the same reason. But the more careful voices avoid them.

Hossenfelder has been that careful before, but one of her recent pieces falls short. The piece is titled “This Experiment Will Measure Nothing, But Very Precisely”.

The “nothing” in the title is the oft-mythologized quantum vacuum. The story goes that in quantum theory, empty space isn’t really empty. It’s full of “virtual” particles that pop in and out of existence, jostling things around.

This…is not a good way to think about it. Really, it’s not. If you want to understand what’s going on physically, it’s best to think about measurements, and measurements involve particles: you can’t measure anything in pure empty space, you don’t have anything to measure with. Instead, every story you can tell about the “quantum vacuum” and virtual particles, you can tell about interactions between particles that actually exist.

(That post I link above, by the way, was partially inspired by a more careful post by Hossenfelder. She does know this stuff. She just doesn’t always use it.)

Let me tell the story Hossenfelder’s piece is telling, in a less silly way:

In the earliest physics classes, you learn that light does not affect other light. Shine two flashlight beams across each other, and they’ll pass right through. You can trace the rays of each source, independently, keeping track of how they travel and bounce around the room.

In quantum theory, that’s not quite true. Light can interact with light, through subtle quantum effects. This effect is tiny, so tiny it hasn’t been measured before. But with ingenious tricks involving tuning three different lasers in exactly the right way, a team of physicists in Dresden has figured out how it could be done.
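For those who want to see it, the standard way to package this effect at low energies is the Euler-Heisenberg effective Lagrangian. Its leading interaction term (in natural units, with m_e the electron’s mass and alpha the fine-structure constant) looks like this:

```latex
% Leading low-energy effective interaction of light with light
% (Euler-Heisenberg), in natural units, m_e the electron mass:
\mathcal{L}_{\text{EH}} = \frac{2\alpha^2}{45\, m_e^4}
  \left[ \left(\vec{E}^2 - \vec{B}^2\right)^2
       + 7 \left(\vec{E}\cdot\vec{B}\right)^2 \right]
```

The four powers of the electron mass in the denominator are a big part of why the effect is so tiny, and why those three lasers have to be tuned so carefully.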

And see, that’s already cool, right? It’s cool when people figure out how to see things that have never been seen before, full stop.

But the way Hossenfelder presents it, the cool thing about this is that they are “measuring nothing”. That they’re measuring “the quantum vacuum”, really precisely.

And I mean, you can say that, I guess. But it’s equally true of every subtle quantum effect.

In classical physics, an electron should respond to a magnetic field in a very specific way, set by a number called its magnetic moment. Quantum theory changes this: electrons have a slightly different magnetic moment, an anomalous magnetic moment. And people have measured this subtle effect: it’s famously the most precisely confirmed prediction in all of science.
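The leading quantum correction here is famous enough to quote. Schwinger computed it in 1948, and it’s engraved on his tombstone:

```latex
% Leading quantum correction to the electron's magnetic moment
% (Schwinger, 1948), relative to the uncorrected value g = 2:
a_e \equiv \frac{g-2}{2}
  = \frac{\alpha}{2\pi} + \mathcal{O}(\alpha^2)
  \approx 0.00116
```

A shift of about a tenth of a percent, now measured (and predicted) to better than a part in a billion.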

That effect can equally well be described as an effect of the quantum vacuum. You can draw the same pictures, if you really want to, with virtual particles popping in and out of the vacuum. One effect (light bouncing off light) doesn’t exist at all in classical physics, while the other (electrons moving in a magnetic field) exists, but is subtly different. But both, in exactly the same sense, are “measurements of nothing”.

So if you really want to stick on the idea that, whenever you measure any subtle quantum effect, you measure “the quantum vacuum”…then we’re already doing that, all the time. Using it to popularize some stuff (say, this experiment) and not other stuff (the LHC is also measuring the quantum vacuum) is just inconsistent.

Better, in my view, to skip the silly talk about nothing. Talk about what we actually measure. It’s cool enough that way.

What’s in a Subfield?

A while back, someone asked me what my subfield, amplitudeology, is really about. I wrote an answer to that here: a short-term and a long-term perspective that line up with the stories we often tell about the field. I talked about how we try to figure out ways to calculate probabilities faster, first for understanding the output of particle colliders like the LHC, then more recently for gravitational wave telescopes. I talked about how the philosophy we use for that carries us farther, how focusing on the minimal information we need to make a prediction gives us hope that we can generalize and even propose totally new theories.

The world doesn’t follow stories, though, not quite so neatly. Try to define something as simple as the word “game” and you run into trouble. Some games have a winner and a loser, in some games everyone is on one team, and some games don’t have winners or losers at all. Games can involve physical exercise, computers, boards and dice, or just people telling stories. They can be played for fun, or for money, silly or deadly serious. Most have rules, but some don’t even have that. Instead, games are linked by history: a series of resemblances, people saying that “this” is a game because it’s kind of like “that”.

A subfield isn’t just a word, it’s a group of people. So subfields aren’t defined just by resemblance. Instead, they’re defined by practicality.

To ask what amplitudeology is really about, think about why you might want to call yourself an amplitudeologist. It could be a question of goals, certainly: you might care a lot about making better predictions for the LHC, or you could have some other grand story in mind about how amplitudes will save the world. Instead, though, it could be a matter of training: you learned certain methods, certain mathematics, a certain perspective, and now you apply it to your research, even if it goes further afield from what was considered “amplitudeology” before. It could even be a matter of community, joining with others who you think do cool stuff, even if you don’t share exactly the same goals or the same methods.

Calling yourself an amplitudeologist means you go to their conferences and listen to their talks, means you look to them to collaborate and pay attention to their papers. Those kinds of things define a subfield: not some grand mission statement, but practical questions of interest, what people work on and know and where they’re going with that. Like every other word, amplitudeology doesn’t have just one story: it has a practical meaning that shifts and changes with time. That’s the way subfields should be: useful to the people who practice them.

What Referees Are For

This week, we had a colloquium talk by the managing editor of the Open Journal of Astrophysics.

The Open Journal of Astrophysics is an example of an arXiv overlay journal. In the old days, journals shouldered the difficult task of compiling scientists’ work into a readable format and sending it to university libraries all over the world, so people could stay up to date with the work of distant colleagues. They used to charge libraries for the journals; now some instead charge authors per paper they want to publish.

Now, online resources make most of that unnecessary: in my field, the arXiv. We prepare our papers using free tools like LaTeX, then upload them to arXiv.org, a website that makes the papers freely accessible for everybody. I don’t think I’ve ever read a paper in a physical journal in my field, and I only check journal websites if I think there’s a mistake in the arXiv version. The rest of the time, I just use the arXiv.

Still, journals do one thing the arXiv doesn’t do, and that’s refereeing. Each paper a journal receives is sent out to a few expert referees. The referees read the paper, and either reject it, accept it as-is, or demand changes before they can accept it. The journal then publishes only the accepted papers.

The goal of arXiv overlay journals is to provide this last feature without the rest of the journal apparatus. If every paper is already on the arXiv, an overlay journal doesn’t need to host papers, or print them, or typeset them. It just needs to find suitable referees, and announce which papers passed.

The Open Journal of Astrophysics is a relatively small arXiv overlay journal. It operates quite cheaply, in part because the people running it can handle most of the work as a minor distraction from their day jobs. SciPost is much bigger, and has to spend more per paper to operate. Still, it spends a lot less than traditional journals charge authors.

We had a spirited discussion after the talk, and someone brought up an interesting point: why do we need to announce which papers passed? Can’t we just publish everything?

What, in the end, are the referees actually for? Why do we need them?

One function of referees is to check for mistakes. This is most important in mathematics, where referees might spend years making sure every step in a proof works as intended. Other fields vary, from theoretical physics (where we can check some things sometimes, but often have to make do with spotting poorly explained parts of a calculation), to fields that do experiments in the real world (where referees can look for warning signs and shady statistics, but won’t actually reproduce the experiment). A mistake found by a referee can be a boon to not just the wider scientific community, but to the author as well. Most scientists would prefer their papers to be correct, so we’re often happy to hear about a genuine mistake.

If this were all referees were for, though, then you wouldn’t actually need to reject any papers. As a colleague of mine suggested, you would just need the referees to publish their reports. Then the papers could be published along with comments from the referees, and possibly also responses from the author. Readers could see any mistakes the referees found, and judge for themselves what they show about the result.

SciPost already publishes referee reports much of the time, though the Open Journal of Astrophysics currently doesn’t. Both journals still reject some papers, though. In part, that’s because referees serve another function: they’re supposed to tell us which papers are “good”.

Some journals are more prestigious and fancy than others. Nature and Science are the most famous, though people in my field almost never bother to publish in either. Still, we have a hierarchy in mind, with Physical Review Letters on the high end and JHEP on the lower one. Publishing in a fancier and more prestigious journal is supposed to say something about you as a scientist, to say that your work is fancier and more prestigious. If you can’t publish in any journal at all, then your work wasn’t interesting enough to merit getting credit for it, and maybe you should have worked harder.

What does that credit buy you? Ostensibly, everything. Employers are more likely to hire you if you’ve published in more prestigious places, and grant agencies are more likely to give you money.

In practice, though, this depends a lot on who’s making the decisions. Some people will weigh these kinds of things highly, especially if they aren’t familiar with a candidate’s work. Others will be able to rely on other things, from numbers of papers and citations to informal assessments of a scientist’s impact. I genuinely don’t know whether the journals I published in made any impact at all when I was hired, and I’m a bit afraid to ask. I haven’t yet sat on the kind of committee that makes these decisions, so I don’t know what things look like from the other side either.

But I do know that, on a certain level, journals and publications can’t matter quite as much as we think. As I mentioned, my field doesn’t use Nature or Science, while others do. A grant agency or hiring committee comparing two scientists would have to take that into account, just as they have to take into account the thousands of authors on every single paper by the ATLAS and CMS experiments. If a field started publishing every paper regardless of quality, they’d have to adapt there too, and find a new way to judge people compatible with that.

Can we just publish everything: papers and referee reports and responses and reviews? Maybe. I think there are fields where this could really work well, and fields where it would collapse into the invective of a YouTube comments section. I’m not sure where my own field sits. Theoretical particle physics is relatively small and close-knit, but it’s also cool and popular, with many strong and dumb opinions floating around. I’d like to believe we could handle it, that we could prune back the professional cruft and turn our field into a real conversation between scholars. But I don’t know.