Two big physics experiments consistently make the news: the Large Hadron Collider (LHC) and the Laser Interferometer Gravitational-Wave Observatory (LIGO). One collides protons; the other watches gravitational waves from colliding black holes and neutron stars. But while this may make the experiments sound quite similar, their goals couldn’t be more different.
The goal of the LHC, put simply, is to discover the rules that govern reality. Should the LHC find a new fundamental particle, it will tell us something we didn’t know about the laws of physics, a newly discovered fact that holds true everywhere in the universe. So far, it has discovered the Higgs boson, and while that particular rule was expected, we didn’t know the details until they were tested. Now physicists hope to find something more, a deviation from the Standard Model that hints at a new law of nature altogether.
LIGO, in contrast, isn’t really for discovering the rules of the universe. Instead, it discovers the consequences of those rules, on a grand scale. Even if we knew the laws of physics completely, we couldn’t calculate everything from those first principles. We can simulate some things, and approximate others, but we need experiments to tweak those simulations and test those approximations. LIGO fills that role. We can try to estimate how common black holes are, and how large, but LIGO’s results were still a surprise, suggesting medium-sized black holes are more common than researchers expected. In the future, gravitational wave telescopes might discover more of these kinds of consequences, from the shape of neutron stars to the aftermath of cosmic inflation.
There are a few exceptions for both experiments. The LHC can also discover the consequences of the laws of physics, especially when those consequences are very difficult to calculate, finding complicated arrangements of known particles, like pentaquarks and glueballs. And it’s possible, though perhaps not likely, that LIGO could discover something about quantum gravity. Quantum gravity’s effects are expected to be so small that these experiments won’t see them, but some have speculated that an unusually large effect could be detected by a gravitational wave telescope.
As scientists, we want to know everything we can about everything we find. We want to know the basic laws that govern the universe, but we also want to know the consequences of those laws, the story of how our particular universe came to be the way it is today. And luckily, we have experiments for both.
For Halloween, this blog has a tradition of covering “the spooky side” of physics. This year, I’m bringing in a concept from biology to ask a spooky physics “what if?”
In the 1950s, biologists discovered that birds were susceptible to a worryingly effective trick. Given artificial eggs larger and brighter than their own, the birds focused on the new eggs to the exclusion of their real ones. They couldn’t help trying to hatch the fake eggs, even ones so large that the birds would fall off when they tried to sit on them. The effect, since observed in other species, became known as a supernormal stimulus, or superstimulus.
Can this happen to humans? Some think so. They worry about junk food we crave more than actual nutrients, or social media that eclipses our real relationships. Naturally, this idea inspires horror writers, who write about haunting music you can’t stop listening to, or holes in a wall that “fit” so well you’re compelled to climb in.
(And yes, it shows up in porn as well.)
But this is a physics blog, not a biology blog. What kind of superstimulus would work on physicists?
Well, for one, this sounds a lot like some criticisms of string theory. Instead of a theory that just unifies some forces, why not unify all the forces? Instead of just learning some advanced mathematics, why not learn more, and more? And if you can’t be falsified by any experiment, well, all that would do is spoil the fun, right?
Do I actually think that string theory is a superstimulus, that astrophysics or particle physics is a superstimulus? In a word, no. Much as it might look that way from the news coverage, most physicists don’t work on these big, flashy questions. Far from being lured in by irresistible super-scale problems, most physicists work with tabletop experiments and useful materials. For those of us who do look up at the sky or down at the roots of the world, we do it not just because it’s compelling but because it has a good track record: physics wouldn’t exist if Newton hadn’t cared about the orbits of the planets. We study extremes because they advance our understanding of everything else, because they give us steam engines and transistors and change everyone’s lives for the better.
Then again, if I had fallen victim to a superstimulus, I’d say that anyway, right?
The thing is, this is a metaphor. What’s more, it’s a metaphor for an approximation. As physicists, when we draw diagrams with more and more virtual particles, we’re trying to use something we know how to calculate with (particles) to understand something tougher to handle (interacting quantum fields). Virtual particles, at least as you’re probably picturing them, don’t really exist.
I don’t really blame physicists for talking like that, though. Virtual particles are a metaphor, sure, a way to talk about a particular calculation. But so is basically anything we can say about quantum field theory. In quantum field theory, it’s pretty tough to say which things “really exist”.
You might have heard that there are three types of neutrinos, corresponding to the three “generations” of the Standard Model: electron-neutrinos, muon-neutrinos, and tau-neutrinos. Each is produced in particular kinds of reactions: electron-neutrinos, for example, get produced by beta-plus decay, when a proton turns into a neutron, an anti-electron, and an electron-neutrino.
Leave these neutrinos alone though, and something strange happens. Detect what you expect to be an electron-neutrino, and it might have changed into a muon-neutrino or a tau-neutrino. The neutrino oscillated.
Why does this happen?
One way to explain it is to say that electron-neutrinos, muon-neutrinos, and tau-neutrinos don’t “really exist”. Instead, what really exists are neutrinos with specific masses. These don’t have catchy names, so let’s just call them neutrino-one, neutrino-two, and neutrino-three. What we think of as electron-neutrinos, muon-neutrinos, and tau-neutrinos are each some mix (a quantum superposition) of these “really existing” neutrinos, specifically the mixes that interact nicely with electrons, muons, and tau leptons respectively. When you let them travel, it’s these neutrinos that do the traveling, and due to quantum effects that I’m not explaining here you end up with a different mix than you started with.
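Those “quantum effects that I’m not explaining here” can still be put into a formula. Here’s a minimal sketch of the standard two-flavor approximation; the mixing angle and mass-squared difference below are illustrative round values, not a precision fit:

```python
import math

def oscillation_probability(theta, delta_m2_ev2, L_km, E_gev):
    """Two-flavor oscillation probability, a common textbook approximation.

    theta:        mixing angle between the two mass states (radians)
    delta_m2_ev2: mass-squared difference of the mass states, in eV^2
    L_km:         distance traveled, in kilometers
    E_gev:        neutrino energy, in GeV
    The factor 1.27 absorbs the unit conversions into the oscillation phase.
    """
    phase = 1.27 * delta_m2_ev2 * L_km / E_gev
    return math.sin(2 * theta) ** 2 * math.sin(phase) ** 2

# At zero distance, the neutrino is still exactly the mix it was produced as:
p0 = oscillation_probability(0.58, 7.5e-5, 0.0, 1.0)

# Farther away, the mix has shifted, and detecting the "wrong" flavor
# becomes likely:
p_far = oscillation_probability(0.58, 7.5e-5, 15000.0, 1.0)
```

The key structural point is visible in the formula: if the mass states were identical (delta_m2_ev2 = 0) or the mix were trivial (theta = 0), nothing would ever oscillate.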
This probably seems like a perfectly reasonable explanation. But it shouldn’t. Because if you take one of these mass-neutrinos and let it interact with an electron, a muon, or a tau, it suddenly behaves like a mix of the old electron-neutrinos, muon-neutrinos, and tau-neutrinos.
That’s because both explanations are trying to chop the world up in a way that can’t be done consistently. There aren’t electron-neutrinos, muon-neutrinos, and tau-neutrinos, and there aren’t neutrino-ones, neutrino-twos, and neutrino-threes. There’s a mathematical object (a vector space) that can look like either.
Whether you’re comfortable with that depends on whether you think of mathematical objects as “things that exist”. If you aren’t, you’re going to have trouble thinking about the quantum world. Maybe you want to take a step back, and say that at least “fields” should exist. But that still won’t do: we can redefine fields, add them together or even use more complicated functions, and still get the same physics. The kinds of things that exist can’t be like this. Instead you end up invoking another kind of mathematical object, equivalence classes.
If you want to be totally rigorous, you have to go a step further. You end up thinking of physics in a very bare-bones way, as the set of all observations you could perform. Instead of describing the world in terms of “these things” or “those things”, the world is a black box, and all you’re doing is finding patterns in that black box.
Is there a way around this? Maybe. But it requires thought, and serious philosophy. It’s not intuitive, it’s not easy, and it doesn’t lend itself well to 3d animations in documentaries. So in practice, whenever anyone tells you about something in physics, you can be pretty sure it’s a metaphor. Nice, describable, non-mathematical things typically don’t exist.
Listen to a certain flavor of crackpot, or a certain kind of science fiction, and you’ll hear about zero-point energy. Limitless free energy drawn from quantum space-time itself, zero-point energy probably sounds like bullshit. Often it is. But lurking behind the pseudoscience and the fiction is a real physics concept, albeit one that doesn’t really work like those people imagine.
In quantum mechanics, the zero-point energy is the lowest energy a particular system can have. That number doesn’t actually have to be zero, even for empty space. People sometimes describe this in terms of so-called virtual particles, popping up from nothing in particle-antiparticle pairs only to annihilate each other again, contributing energy in the absence of any “real particles”. There’s a real force, the Casimir effect, that gets attributed to this, a force that pulls two metal plates together even with no charge or extra electromagnetic field. The same bubbling of pairs of virtual particles also gets used to explain the Hawking radiation of black holes.
I’d like to try explaining all of these things in a different way, one that might clear up some common misconceptions. To start, let’s talk about, not zero-point energy, but zero-point diagrams.
Feynman diagrams are a tool we use to study particle physics. We start with a question: if some specific particles come together and interact, what’s the chance that some (perhaps different) particles emerge? We start by drawing lines representing the particles going in and out, then connect them in every way allowed by our theory. Finally we translate the diagrams to numbers, to get an estimate for the probability. In particle physics slang, the number of “points” is the total number of particles: particles in, plus particles out. For example, let’s say we want to know the chance that two electrons go in and two electrons come out. That gives us a “four-point” diagram: two in, plus two out. A zero-point diagram, then, means zero particles in, zero particles out.
(Note that this isn’t why zero-point energy is called zero-point energy, as far as I can tell. Zero-point energy is an older term from before Feynman diagrams.)
Remember, each Feynman diagram answers a specific question, about the chance of particles behaving in a certain way. You might wonder, what question does a zero-point diagram answer? The chance that nothing goes to nothing? Why would you want to know that?
To answer, I’d like to bring up some friends of mine, who do something that might sound equally strange: they calculate one-point diagrams, one particle goes to none. This isn’t strange for them because they study theories with defects.
Normally in particle physics, we think about our particles in an empty, featureless space. We don’t have to, though. One thing we can do is introduce features in this space, like walls and mirrors, and try to see what effect they have. We call these features “defects”.
If there’s a defect like that, then it makes sense to calculate a one-point diagram, because your one particle can interact with something that’s not a particle: it can interact with the defect.
You might see where this is going: let’s say you think there’s a force between two walls, that comes from quantum mechanics, and you want to calculate it. You could imagine it involves a diagram like this:
Roughly speaking, this is the kind of thing you could use to calculate the Casimir effect, that mysterious quantum force between metal plates. And indeed, it involves a zero-point diagram.
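For a sense of scale, the idealized result of that calculation is famous: between two perfectly conducting parallel plates a distance d apart, the attractive pressure is π²ħc/(240 d⁴). A quick sketch:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d):
    """Attractive pressure between two ideal parallel plates a distance d
    (in meters) apart: pi^2 * hbar * c / (240 * d^4).

    This is the textbook idealized formula; real plates (finite
    conductivity, roughness, temperature) deviate from it.
    """
    return math.pi ** 2 * hbar * c / (240 * d ** 4)

# At a one-micron separation, the pressure is on the order of a millipascal:
p = casimir_pressure(1e-6)
```

Note how steeply the effect dies off: the d⁴ in the denominator means doubling the separation cuts the force by a factor of sixteen, which is part of why this is a laboratory curiosity and not a power source.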
Here’s the thing, though: metal plates aren’t just “defects”. They’re real physical objects, made of real physical particles. So while you can think of the Casimir effect with a “zero-point diagram” like that, you can also think of it with a normal diagram, more like the four-point diagram I showed you earlier: one that computes, not a force between defects, but a force between the actual electrons and protons that make up the two plates.
A lot of the time when physicists talk about pairs of virtual particles popping up out of the vacuum, they have in mind a picture like this. And often, you can do the same trick, and think about it instead as interactions between physical particles. There’s a story of roughly this kind for Hawking radiation: you can think of a black hole event horizon as “cutting in half” a zero-point diagram, and see pairs of particles going out from the black hole…but you can also do a calculation that looks more like particles interacting with a gravitational field.
This also might help you understand why, contra the crackpots and science fiction writers, zero-point energy isn’t a source of unlimited free energy. Yes, a force like the Casimir effect comes “from the vacuum” in some sense. But really, it’s a force between two particles. And just like the gravitational force between two particles, this doesn’t give you unlimited free power. You have to do the work to move the particles back over and over again, using the same amount of power you gained from the force to begin with. And unlike the forces you’re used to, these are typically very small effects, as usual for something that depends on quantum mechanics. So it’s even less useful than more everyday forces for this.
Why do so many crackpots and authors expect zero-point energy to be a massive source of power? In part, this is due to mistakes physicists made early on.
Sometimes, when calculating a zero-point diagram (or any other diagram), we don’t get a sensible number. Instead, we get infinity. Physicists used to be baffled by this. Later, they understood the situation a bit better, and realized that those infinities were probably just due to our ignorance. We don’t know the ultimate high-energy theory, so it’s possible something happens at high energies to cancel those pesky infinities. Without knowing exactly what happened, physicists would estimate by using a “cutoff” energy where they expected things to change.
That kind of calculation led to an estimate you might have heard of, that the zero-point energy inside a single light bulb could boil all the world’s oceans. That estimate gives a pretty impressive mental image…but it’s also wrong.
This kind of estimate led to “the worst theoretical prediction in the history of physics”: a value for the cosmological constant, the quantity that drives the accelerating expansion of the universe, 120 orders of magnitude higher than its actual value (if that value isn’t just zero). If there really were enough energy inside each light bulb to boil the world’s oceans, the expansion of the universe would be quite different than what we observe.
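If you’re curious where the “120 orders of magnitude” comes from, a dimensional-analysis sketch gets you most of the way. Cutting off the zero-point energy at the Planck scale gives an energy density of order c⁷/(ħG²); the observed dark-energy density below is a round illustrative number:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # Newton's constant, m^3 / (kg s^2)
c = 2.99792458e8        # speed of light, m/s

# Naive estimate: cut off the zero-point energy at the Planck scale.
# Dimensional analysis gives an energy density of order c^7 / (hbar * G^2),
# roughly 10^113 J/m^3.
rho_cutoff = c ** 7 / (hbar * G ** 2)

# Observed dark-energy density, a round illustrative value:
rho_observed = 6e-10  # J/m^3

# How badly does the naive estimate overshoot?
orders_of_magnitude = math.log10(rho_cutoff / rho_observed)
```

The exact figure depends on conventions (factors of 2π, where precisely you put the cutoff), which is why you’ll see “120” in some places and numbers a bit above or below it in others; the mismatch is enormous either way.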
At this point, it’s pretty clear there is something wrong with these kinds of “cutoff” estimates. The only unclear part is whether that’s due to something subtle or something obvious. But either way, this particular estimate is just wrong, and you shouldn’t take it seriously. Zero-point energy exists, but it isn’t the magical untapped free energy you hear about in stories. It’s tiny quantum corrections to the forces between particles.
On my “Who Am I?” page, I open with my background, calling myself a string theorist, then clarify: “in practice I’m more of a Particle Theorist, describing the world not in terms of short lengths of string but rather with particles that each occupy a single point in space”.
When I wrote that I didn’t think it would confuse people. Now that I’m older and wiser, I know people can be confused in a variety of ways. And since I recently saw someone confused about this particular phrase (yes I’m vagueblogging, but I suspect you’re reading this and know who you are 😉 ), I figured I’d explain it.
If you’ve learned a few things about quantum mechanics, maybe you have this slogan in mind:
“What we used to think of as particles are really waves. They spread out over an area, with peaks and troughs that interfere, and you never know exactly where you will measure them.”
With that in mind, my talk of “particles that each occupy a single point” doesn’t make sense. Doesn’t the slogan mean that particles don’t exist?
Here’s the thing: that’s the wrong slogan. The right slogan is just a bit different:
“What we used to think of as particles are ALSO waves. They spread out over an area, with peaks and troughs that interfere, and you never know exactly where you will measure them.”
The principle you were remembering is often called “wave-particle duality”. That doesn’t mean “particles don’t exist”. It means “waves and particles are the same thing”.
This matters, because just as wave-like properties are important, particle-like properties are important. And while it’s true that you can never know exactly where you will measure a particle, it’s also true that it’s useful, and even necessary, to think of it as occupying a single point.
That’s because particles can only affect each other when they’re at the same point. Physicists call this the principle of locality, the idea that there is no real “action at a distance”, everything happens because of something traveling from point A to point B. Wave-particle duality doesn’t change that, it just makes the specific point uncertain. It means you have to add up over every specific point where the particles could have interacted, but each term in your sum has to still involve a specific point: quantum mechanics doesn’t let particles affect each other non-locally.
Strings, in turn, are a little bit different. Strings have length, particles don’t. Particles interact at a point, strings can interact anywhere along the string. Strings introduce a teeny bit of non-locality.
When you compare particles and waves, you’re thinking pre-quantum mechanics, two classical things neither of which is the full picture. When you compare particles and strings, both are quantum, both are also waves. But in a meaningful sense one occupies a single point, and the other doesn’t.
Recently, a commenter asked me what physicists mean when they say two forces unify. While typing up a response, I came across this passage, in a science fiction short story by Ted Chiang.
“Physics admits of a lovely unification, not just at the level of fundamental forces, but when considering its extent and implications. Classifications like ‘optics’ or ‘thermodynamics’ are just straitjackets, preventing physicists from seeing countless intersections.”
This passage sounds nice enough, but I feel like there’s a misunderstanding behind it. When physicists seek after unification, we’re talking about something quite specific. It’s not merely a matter of two topics intersecting, or describing them with the same math. We already plumb intersections between fields, including optics and thermodynamics. When we hope to find a unified theory, we do so because it does something. A real unified theory doesn’t just aid our calculations, it gives us new ways to alter the world.
To show you what I mean, let me start with something physicists already know: electroweak unification.
You might have heard of four fundamental forces: gravity, electromagnetism, the strong nuclear force, and the weak nuclear force. You might have also heard that two of these forces are unified: the electromagnetic force and the weak nuclear force form something called the electroweak force.
What does it mean that these forces are unified? How does it work?
Zoom in far enough, and you don’t see the electromagnetic force and the weak force anymore. Instead you see two different forces, I’ll call them “W” and “B”. You’ll also see the Higgs field. And crucially, you’ll see the “W” and “B” forces interact with the Higgs.
The Higgs field is special because it has what’s called a “vacuum” value. Even in otherwise empty space, there’s some amount of “Higgs-ness” in the background, like the color of a piece of construction paper. This background Higgs-ness is in some sense an accident, just one stable way the universe happens to sit. In particular, it picks out an arbitrary kind of direction: parts of the “W” and “B” forces happen to interact with it, and parts don’t.
Now let’s zoom back out. We could, if we wanted, keep our eyes on the “W” and “B” forces. But that gets increasingly silly. As we zoom out we won’t be able to see the Higgs field anymore. Instead, we’ll just see different parts of the “W” and “B” behaving in drastically different ways, depending on whether or not they interact with the Higgs. It will make more sense to talk about mixes of the “W” and “B” fields, to distinguish the parts that are “lined up” with the background Higgs and the parts that aren’t. It’s like using “aft” and “starboard” on a boat. You could use “north” and “south”, but that would get confusing pretty fast.
What are those “mixes” of the “W” and “B” forces? Why, they’re the weak nuclear force and the electromagnetic force!
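That “mix” is nothing more exotic than a rotation. Here’s a minimal sketch, using an approximate measured value for the weak mixing (Weinberg) angle, the parameter that sets how much of each underlying force goes into each mix:

```python
import math

# sin^2 of the weak mixing angle, approximately 0.23 as measured.
sin2_theta_w = 0.23
theta_w = math.asin(math.sqrt(sin2_theta_w))

def photon_and_Z(B, W3):
    """Rotate the neutral "B" field and the neutral part of the "W" field
    (conventionally called W3) into the photon field A and the Z field."""
    A = math.cos(theta_w) * B + math.sin(theta_w) * W3
    Z = -math.sin(theta_w) * B + math.cos(theta_w) * W3
    return A, Z

# A rotation preserves "length": both descriptions carry the same content,
# just organized along different directions.
A, Z = photon_and_Z(1.0, 0.0)
```

This is the “aft and starboard versus north and south” point in miniature: the rotation doesn’t change the physics, it just lines the description up with the direction the background Higgs happens to pick out.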
This, broadly speaking, is the kind of unification physicists look for. It doesn’t have to be a “mix” of two different forces: most of the models physicists imagine start with a single force. But the basic ideas are the same: that if you “zoom in” enough you see a simpler model, but that model is interacting with something that “by accident” picks a particular direction, so that as we zoom out different parts of the model behave in different ways. In that way, you could get from a single force to all the different forces we observe.
That “by accident” is important here, because that accident can be changed. That’s why I said earlier that real unification lets us alter the world.
To be clear, we can’t change the background Higgs field with current technology. The biggest collider we have can just make a tiny, temporary fluctuation (that’s what the Higgs boson is). But one implication of electroweak unification is that, with enough technology, we could. Because those two forces are unified, and because that unification is physical, with a physical cause, it’s possible to alter that cause, to change the mix and change the balance. This is why this kind of unification is such a big deal, why it’s not the sort of thing you can just chalk up to “interpretation” and ignore: when two forces are unified in this way, it lets us do new things.
Mathematical unification is valuable. It’s great when we can look at different things and describe them in the same language, or use ideas from one to understand the other. But it’s not the same thing as physical unification. When two forces really unify, it’s an undeniable physical fact about the world. When two forces unify, it does something.
There are two kinds of theoretical physicists. Some, called phenomenologists, make predictions about the real world. Others, the so-called “formal theorists”, don’t. They work with the same kinds of theories as the phenomenologists, quantum field theories of the sort that have been so successful in understanding the subatomic world. But the specific theories they use are different: usually, toy models that aren’t intended to describe reality.
Most people get that this is valuable. It’s useful to study toy models, because they help us tackle the real world. But they stumble on another point. Sure, they say, you can study toy models…but then you should call yourself a mathematician, not a physicist.
I’m a “formal theorist”. And I’m very much not a mathematician, I’m definitely a physicist. Let me explain why, with an analogy.
As an undergrad, I spent some time working in a particle physics lab. The lab had developed a new particle detector chip, designed for a future experiment: the International Linear Collider. It was my job to test this chip.
Naturally, I couldn’t test the chip by flinging particles at it. For one, the collider it was designed for hadn’t been built yet! Instead, I had to use simulated input: send in electrical signals that mimicked the expected particles, and see what happens. In effect, I was using a kind of toy model, as a way to understand better how the chip worked.
I hope you agree that this kind of work counts as physics. It isn’t “just engineering” to feed simulated input into a chip. Not when the whole point of that chip is to go into a physics experiment. This kind of work is a large chunk of what an experimental physicist does.
As a formal theorist, my work with toy models is an important part of what a theoretical physicist does. I test out the “devices” of theoretical physics, the quantum-field-theoretic machinery that we use to investigate the world. Without that kind of careful testing on toy models, we’d have fewer tools to work with when we want to understand reality.
Ok, but you might object: an experimental physicist does eventually build the real experiment. They don’t just spend their career on simulated input. If someone only works on formal theory, shouldn’t that at least make them a mathematician, not a physicist?
Here’s the thing, though: after those summers in that lab, I didn’t end up as an experimental physicist. After working on that chip, I didn’t go on to perfect it for the International Linear Collider. But it would be rather bizarre if that, retroactively, made my work in that time “engineering” and not “physics”.
Oh, I should also mention that the International Linear Collider might not ever be built. So, there’s that.
Formal theory is part of physics because it cares directly about the goals of physics: understanding the real world. It is just one step towards that goal; it doesn’t address the real world alone. But neither do the people testing out chips for future colliders. Formal theory isn’t always useful; similarly, planned experiments don’t always get built. That doesn’t mean it’s not physics.
Sabine Hossenfelder had an explainer video recently on how to tell science from pseudoscience. This is a famously difficult problem, so naturally we have different opinions. I actually think the picture she draws is reasonably sound. But while it is a good criterion to tell whether you yourself are doing pseudoscience, it’s surprisingly tricky to apply it to other people.
Hossenfelder argues that science, at its core, is about explaining observations. To tell whether something is science or pseudoscience you need to ask, first, if it agrees with observations, and second, if it is simpler than those observations. In particular, a scientist should prefer models with fewer parameters. If your model has so many parameters that you can fit any observation, you’re not being scientific.
This is a great rule of thumb, one that as Hossenfelder points out forms the basis of a whole raft of statistical techniques. It does rely on one tricky judgement, though: how many parameters does your model actually have?
Suppose I’m one of those wacky theorists who propose a whole new particle to explain some astronomical mystery. Hossenfelder, being more conservative in these things, proposes a model with no new particles. Neither of our models fit the data perfectly. Perhaps my model fits a little better, but after all it has one extra parameter, from the new particle. If we want to compare our models, we should take that into account, and penalize mine.
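One standard way to apply that penalty is an information criterion such as the AIC, which charges each model two units per parameter, units a better fit must earn back. A sketch with purely hypothetical fit numbers:

```python
def aic(max_log_likelihood, n_params):
    """Akaike Information Criterion: lower is better.
    Each extra parameter costs 2 units of AIC, so a more complicated
    model must fit noticeably better to come out ahead."""
    return 2 * n_params - 2 * max_log_likelihood

# Hypothetical numbers: the new-particle model fits a little better
# (higher log-likelihood) but carries one extra parameter.
aic_new_particle = aic(-100.0, 5)      # better fit, 5 parameters
aic_no_new_particle = aic(-101.5, 4)   # worse fit, 4 parameters
```

With these made-up numbers the new-particle model just barely wins (210 versus 211); shave a little off its fit advantage and the penalty flips the verdict, which is exactly the trade-off the criterion is designed to referee.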
Here’s the question, though: how do I know that Hossenfelder didn’t start out with more particles, and got rid of them to get a better fit? If she did, she had more parameters than I did. She just fit them away.
The problem here is closely related to one called the look-elsewhere effect. Scientists don’t publish everything they try. An unscrupulous scientist can do a bunch of different tests until one of them randomly works, and just publish that one, making the result look meaningful when really it was just random chance. Even if no individual scientist is unscrupulous, a community can do the same thing: many scientists testing many different models, until one accidentally appears to work.
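The community-wide version of this is easy to simulate. A quick Monte Carlo sketch (illustrative, with a fixed seed for reproducibility):

```python
import random

def chance_of_spurious_hit(n_tests, alpha=0.05, trials=20000, seed=0):
    """Monte Carlo estimate of the chance that at least one of n_tests
    independent null-hypothesis tests comes up 'significant' at level
    alpha by luck alone."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(trials)
        if any(rng.random() < alpha for _ in range(n_tests))
    )
    return hits / trials

# One test at the 5% level rarely fools you; twenty tests fool you more
# often than not (the analytic answer is 1 - 0.95**20, about 0.64).
p1 = chance_of_spurious_hit(1)
p20 = chance_of_spurious_hit(20)
```

The simulation makes the danger concrete: a community that collectively tries twenty models and publishes only the one that “worked” will report a spurious success most of the time, even if every individual scientist was honest.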
As a scientist, you mostly know if your motivations are genuine. You know if you actually tried a bunch of different models or had good reasons from the start to pick the one you did. As someone judging other scientists, you often don’t have that luxury. Sometimes you can look at prior publications and see all the other attempts someone made. Sometimes they’ll even tell you explicitly what parameters they used and how they fit them. But sometimes, someone will swear up and down that their model is just the most natural, principled choice they could have made, and they never considered anything else. When that happens, how do we guard against the look-elsewhere effect?
The normal way to deal with the look-elsewhere effect is to consider, not just whatever tests the scientist claims to have done, but all tests they could reasonably have done. You need to count all the parameters, not just the ones they say they varied.
This works in some fields. If you have an idea of what’s reasonable and what’s not, you have a relatively manageable list of things to look at. You can come up with clear rules for which theories are simpler than others, and people will agree on them.
Physics doesn’t have it so easy. We don’t have any pre-set rules for what kind of model is “reasonable”. If we want to parametrize every “reasonable” model, the best we can do are what are called Effective Field Theories, theories which try to describe every possible type of new physics in terms of its effect on the particles we already know. Even there, though, we need assumptions. The most popular effective field theory, called SMEFT, assumes the forces of the Standard Model keep their known symmetries. You get a different model if you relax that assumption, and even that model isn’t the most general: for example, it still keeps relativity intact. Try to make the most general model possible, and you end up waist-deep in parameter soup.
Subjectivity is a dirty word in science…but as far as I can tell it’s the only way out of this. We can try to count parameters when we can, and use statistical tools…but at the end of the day, we still need to make choices. We need to judge what counts as an extra parameter and what doesn’t, which possible models to compare to and which to ignore. That’s going to be dependent on our scientific culture, on fashion and aesthetics, there just isn’t a way around that. The best we can do is own up to our assumptions, and be ready to change them when we need to.
In the past, what did we know about eel reproduction? What do we know now?
The answer to both questions is, surprisingly little! For those who don’t know the story, I recommend this New Yorker article. Eels turn out to have a quite complicated life cycle, and can only reproduce in the very last stage. Different kinds of eels from all over Europe and the Americas spawn in just one place: the Sargasso Sea. But while researchers have been able to find newborn eels in those waters, and more recently track a few mature adults on their migration back, no-one has yet observed an eel in the act. Biologists may be able to infer quite a bit, but with no direct evidence yet the truth may be even more surprising than they expect. The details of eel reproduction are an ongoing mystery, the “eel question” one of the field’s most enduring.
But of course this isn’t an eel blog. I’m here to answer a different question.
In the past, what did we know about the Higgs boson? What do we know now?
Ask some physicists, and they’ll say that even before the LHC everyone knew the Higgs existed. While this isn’t quite true, it is certainly true that something like the Higgs boson had to exist. Observations of other particles, the W and Z bosons in particular, gave good evidence for some kind of “Higgs mechanism”, that gives other particles mass in a “Higgs-like-way”. A Higgs boson was in some sense the simplest option, but there could have been more than one, or a different sort of process instead. Some of these alternatives may have been sensible, others as silly as believing that eels come from horses’ tails. Until 2012, when the Higgs boson was observed, we really didn’t know.
We also didn’t know one other piece of information: the Higgs boson’s mass. That tells us, among other things, how much energy we need to make one. Physicists were pretty sure the LHC was capable of producing a Higgs boson, but they weren’t sure where or how they’d find it, or how much energy would ultimately be involved.
Now thanks to the LHC, we know the mass of the Higgs boson, and we can rule out some of the “alternative” theories. But there’s still quite a bit we haven’t observed. In particular, we haven’t observed many of the Higgs boson’s couplings.
The couplings of a quantum field are how it interacts, both with other quantum fields and with itself. In the case of the Higgs, interacting with other particles gives those particles mass, while interacting with itself is how the Higgs gains its own mass. Since we know the masses of these particles, we can infer what these couplings should be, at least for the simplest model. But, like the eels, the truth may yet surprise us. Nothing guarantees that the simplest model is the right one: what we call simplicity is a judgement based on aesthetics, on how we happen to write models down. Nature may well choose differently. All we can honestly do is parametrize our ignorance.
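As a sketch of what those simplest-model couplings look like (standard Standard Model relations, with \(v \approx 246\ \text{GeV}\) the Higgs field’s vacuum value and \(m_h \approx 125\ \text{GeV}\) the measured mass):

```latex
% Coupling to a fermion f: fixed by that fermion's mass (Yukawa coupling)
\mathcal{L} \supset -\frac{m_f}{v}\, h\, \bar{f} f,
\qquad y_f = \frac{\sqrt{2}\, m_f}{v}

% Self-couplings: fixed by the Higgs boson's own mass
V(h) = \frac{1}{2}\, m_h^2\, h^2
     + \frac{m_h^2}{2v}\, h^3
     + \frac{m_h^2}{8v^2}\, h^4
```

In this simplest picture, once the masses are measured every coupling is determined, with nothing left to adjust. A measured deviation, say in the \(h^3\) term, would mean the simplest model is wrong.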
In the case of the eels, each failure to observe their reproduction deepens the mystery. What are they doing that is so elusive, so impossible to discover? In this, eels are different from the Higgs boson. We know why we haven’t observed the Higgs boson coupling to itself, at least according to our simplest models: we’d need a higher-energy collider, more powerful than the LHC, to see it. That’s an expensive proposition, much more expensive than using satellites to follow eels around the ocean. Because our failure to observe the Higgs self-coupling is itself no mystery, our simplest models could still be correct: as theorists, we probably have it easier than the biologists. But if we want to verify our models in the real world, we have it much harder.
There’s an attitude I keep seeing among physics crackpots. It goes a little something like this:
“Once upon a time, physics had rules. You couldn’t just wave your hands and write down math, you had to explain the world with real physical things.”
What those “real physical things” were varies. Some miss the days when we explained things mechanically, particles like little round spheres clacking against each other. Some want to bring back absolutes: an absolute space, an absolute time, an absolute determinism. Some don’t like the proliferation of new particles, and yearn for the days when everything was just electrons, protons, and neutrons.
In each case, there’s a sense that physicists “cheated”. That, faced with something they couldn’t actually explain, they made up new types of things (fields, relativity, quantum mechanics, antimatter…) instead. That way they could pretend to understand the world, while giving up on their real job, explaining it “the right way”.
I get where this attitude comes from. It does make a certain amount of sense…for other fields.
As an economist, you can propose whatever mathematical models you want, but at the end of the day they have to boil down to actions taken by people. An economist who proposed some sort of “dark money” that snuck into the economy without any human intervention would get laughed at. Similarly, as a biologist or a chemist, you ultimately need a description that makes sense in terms of atoms and molecules. Your description doesn’t actually need to be in terms of atoms and molecules, and often it can’t be: you’re concerned with a different level of explanation. But it should be possible in terms of atoms and molecules, and that puts some constraints on what you can propose.
Why shouldn’t physics have similar constraints?
Suppose you had a mandatory bottom level like this. Maybe everything boils down to ball bearings, for example. What happens when you study the ball bearings?
Your ball bearings have to have some properties: their shape, their size, their weight. Where do those properties come from? What explains them? Who studies them?
Any properties your ball bearings have can be studied, or explained, by physics. That’s physics’s job: to study the fundamental properties of matter. Any “bottom level” is just as fit a subject for physics as anything else, and you can’t explain it using itself. You end up needing another level of explanation.
Maybe you’re objecting here that your favorite ball bearings aren’t up for debate: they’re self-evident, demanded by the laws of mathematics or philosophy.
Here, for lack of space, I’ll only say that mathematics and philosophy don’t work that way. Mathematics can tell you whether you’ve described the world consistently, whether the conclusions you draw from your assumptions actually follow. Philosophy can see if you’re asking the right questions, if you really know what you think you know. Both have lessons for modern physics, and you can draw valid criticisms from either. But neither one gives you a single clear way the world must be. Not since the days of Descartes and Kant have people been that naive.
Because of this, physics is doing something a bit different from economics and biology. Each field wants to make models, wants to describe its observations. But in physics, ultimately, those models are all we have. We don’t have a “bottom level”, a backstop where everything has to make sense. That doesn’t mean we can just make stuff up, and whenever possible we understand the world in terms of physics we’ve already discovered. But when we can’t, all bets are off.