In particle physics, our best model goes under the unimaginative name “Standard Model”. The Standard Model models the world in terms of interactions of different particles, or more properly quantum fields. The fields have different masses and interact with different strengths, and each mass and interaction strength is a parameter: a “free” number in the model, one we have to fix based on data. There are nineteen parameters in the Standard Model (not counting the parameters for massive neutrinos, which were discovered later).
In principle, we could propose a model with more parameters that fits the data better. With enough parameters, one can fit almost anything. That’s cheating, though, and it’s a type of cheating we know how to catch. We have statistical tests that let us estimate how impressed we should be when a model matches the data. If a model is just getting ahead on extra parameters without capturing something real, we can spot that, because it gets a worse score on those tests. A model with a bad score might match the data you used to fix its parameters, but it won’t predict future data, so it isn’t actually useful. Right now the Standard Model (plus neutrino masses) gets the best score on those tests, when fitted to all the data we have access to, so we think of it as our best and most useful model. If someone proposed a model that got a better score, we’d switch: but so far, no one has managed.
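To see the kind of thing those tests catch, here’s a toy illustration (nothing to do with actual particle-physics fits): we generate noisy data from a simple law, then compare a two-parameter model against a ten-parameter one. The bigger model matches the data it was fitted to almost perfectly, but the real scorecard is how well each model predicts data it never saw.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "experiment": data generated by a simple linear law plus noise.
x = np.linspace(0, 1, 20)
y = 2.0 * x + rng.normal(0.0, 0.1, size=x.size)

# Hold half the data back: a model only earns a good score if it
# predicts data it was not fitted to.
x_fit, y_fit = x[::2], y[::2]
x_new, y_new = x[1::2], y[1::2]

results = {}
for n_params in (2, 10):  # straight line vs. a 9th-degree polynomial
    coeffs = np.polyfit(x_fit, y_fit, deg=n_params - 1)
    fit_err = np.mean((np.polyval(coeffs, x_fit) - y_fit) ** 2)
    new_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    results[n_params] = (fit_err, new_err)
    print(f"{n_params:2d} parameters: error on fitted data {fit_err:.4f}, "
          f"error on new data {new_err:.4f}")
```

The ten-parameter model wins on the data used to fix its parameters (with ten parameters and ten data points it can match them exactly), but it is just fitting the noise, and its predictions for the held-out points suffer for it. That’s the sense in which extra parameters get caught.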
Physicists care about this not just because a good model is useful. We think that the best model is, in some sense, how things really work. The fact that the Standard Model fits the data best doesn’t just mean we can use it to predict more data in the future: it means that somehow, deep down, the world is made up of quantum fields the way the Standard Model describes.
If you’ve been following developments in machine learning, or AI, you might have heard the word “model” slung around. For example, GPT is a Large Language Model, or LLM for short.
Large Language Models are more like the Standard Model than you might think. Just as the Standard Model models the world in terms of interacting quantum fields, Large Language Models model the world in terms of a network of connections between artificial “neurons”. Just as particles have different interaction strengths, pairs of neurons have different connection weights. Those connection weights are the parameters of a Large Language Model, in the same way that the masses and interaction strengths of particles are the parameters of the Standard Model. The parameters for a Large Language Model are fixed by a giant corpus of text data, almost the whole internet reduced to a string of bytes that the LLM needs to match, in the same way the Standard Model needs to match data from particle collider experiments. The Standard Model has nineteen parameters; Large Language Models have billions.
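Where do those billions come from? The counting is simple: every pair of connected neurons contributes one weight, so parameter counts grow with the product of layer sizes. Here’s a sketch for a deliberately tiny, made-up fully connected network (real LLMs have a more elaborate architecture, but the bookkeeping works the same way):

```python
import numpy as np

# A toy fully connected network: 4 inputs -> 8 hidden neurons -> 2 outputs.
# These sizes are invented for illustration; LLMs use layers with
# thousands of neurons, stacked many times over.
sizes = [4, 8, 2]

n_params = 0
for n_in, n_out in zip(sizes[:-1], sizes[1:]):
    W = np.zeros((n_out, n_in))  # one connection weight per pair of neurons
    b = np.zeros(n_out)          # plus one "bias" parameter per neuron
    n_params += W.size + b.size

print(n_params)  # 4*8 + 8 + 8*2 + 2 = 58 free parameters
```

Even this four-neuron-wide toy already has 58 free parameters, three times the Standard Model’s nineteen. Scale the layers up to LLM sizes and the products of layer widths take you into the billions.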
Increasingly, machine learning models seem to capture things better than other types of models. If you want to know how a protein is going to fold, you can try to make a simplified model of how its atoms and molecules interact with each other…but instead, you can make your model a neural network. And that turns out to work better. If you’re a bank and you want to know how many of your clients will default on their loans, you could ask an economist to make a macroeconomic model…or, you can just make your model a neural network too.
In physics, we think that the best model is the model that is closest to reality. Clearly, though, this can’t be what’s going on here. Real proteins don’t fold based on neural networks, and neither do real economies. Both economies and folding proteins are very complicated, so any model we can use right now won’t be what’s “really going on”, unlike the comparatively simple world of particle physics. Still, it seems weird that neural networks can work better than the simplified economic or chemical models, even though they’re very obviously not what’s really going on. Is there another way to think about them?
I used to get annoyed at people using the word “AI” to refer to machine learning models. In my mind, AI was the thing that shows up in science fiction, machines that can think as well or better than humans. (The actual term of art for this is AGI, artificial general intelligence.) Machine learning, and LLMs in particular, felt like a meaningful step towards that kind of AI, but they clearly aren’t there yet.
Since then, I’ve been convinced that the term isn’t quite so annoying. The field isn’t called AI because its researchers are creating a human-equivalent sci-fi intelligence. It’s called AI because the things they build are inspired by how human intelligence works.
As humans, we model things with mathematics, but we also model them with our own brains. Consciously, we might think about objects and their places in space, about people and their motivations and actions, about canonical texts and their contents. But all of those things cash out in our neurons. Anything we think, anything we believe, any model we can actually apply by ourselves in our own lives, is a model embedded in a neural network. It’s a far more complicated neural network than an LLM, but it’s very much still a kind of neural network.
Because humans are alright at modeling a variety of things, because we can see and navigate the world and persuade and manipulate each other, we know that neural networks can do these things. A human brain may not be the best model for any given phenomenon: an engineer can model the flight of a baseball with math much better than the best baseball player can with their unaided brain. But human brains still tend to be fairly good models for a wide variety of things. Evolution has selected them to be.
So with that in mind, it shouldn’t be too surprising that neural networks can model things like protein folding. Even if proteins don’t fold based on neural networks, even if the success of AlphaFold isn’t capturing the actual details of the real world the way the Standard Model does, the model is capturing something. It’s loosely capturing the way a human would think about the problem, if you gave that human all the data they needed. And humans are, and remain, pretty good at thinking! So we have reason, not rigorous, but at least intuitive reason, to think that neural networks will actually be good models of things.

