Category Archives: General QFT

Unification That Does Something

I’ve got unification on the brain.

Recently, a commenter asked me what physicists mean when they say two forces unify. While typing up a response, I came across this passage, in a science fiction short story by Ted Chiang.

Physics admits of a lovely unification, not just at the level of fundamental forces, but when considering its extent and implications. Classifications like ‘optics’ or ‘thermodynamics’ are just straitjackets, preventing physicists from seeing countless intersections.

This passage sounds nice enough, but I feel like there’s a misunderstanding behind it. When physicists seek after unification, we’re talking about something quite specific. It’s not merely a matter of two topics intersecting, or describing them with the same math. We already plumb intersections between fields, including optics and thermodynamics. When we hope to find a unified theory, we do so because it does something. A real unified theory doesn’t just aid our calculations, it gives us new ways to alter the world.

To show you what I mean, let me start with something physicists already know: electroweak unification.

There’s a nice series of posts on the old Quantum Diaries blog that explains electroweak unification in detail. I’ll be a bit vaguer here.

You might have heard of four fundamental forces: gravity, electromagnetism, the strong nuclear force, and the weak nuclear force. You might have also heard that two of these forces are unified: the electromagnetic force and the weak nuclear force form something called the electroweak force.

What does it mean that these forces are unified? How does it work?

Zoom in far enough, and you don’t see the electromagnetic force and the weak force anymore. Instead you see two different forces, I’ll call them “W” and “B”. You’ll also see the Higgs field. And crucially, you’ll see the “W” and “B” forces interact with the Higgs.

The Higgs field is special because it has what’s called a “vacuum” value. Even in otherwise empty space, there’s some amount of “Higgs-ness” in the background, like the color of a piece of construction paper. This background Higgs-ness is in some sense an accident, just one stable way the universe happens to sit. In particular, it picks out an arbitrary kind of direction: parts of the “W” and “B” forces happen to interact with it, and parts don’t.

Now let’s zoom back out. We could, if we wanted, keep our eyes on the “W” and “B” forces. But that gets increasingly silly. As we zoom out we won’t be able to see the Higgs field anymore. Instead, we’ll just see different parts of the “W” and “B” behaving in drastically different ways, depending on whether or not they interact with the Higgs. It will make more sense to talk about mixes of the “W” and “B” fields, to distinguish the parts that are “lined up” with the background Higgs and the parts that aren’t. It’s like using “aft” and “starboard” on a boat. You could use “north” and “south”, but that would get confusing pretty fast.

My cabin is on the west side of the ship…unless we’re sailing east….

What are those “mixes” of the “W” and “B” forces? Why, they’re the weak nuclear force and the electromagnetic force!
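If you like to see the arithmetic, the “mix” is literally a rotation: the photon and the Z boson are combinations of the “B” field and (one component of) the “W” field, rotated by the weak mixing angle, with sin²θ_W ≈ 0.23 measured experimentally. Here’s a minimal numerical sketch in Python; the input field values are illustrative placeholders, not real data, and this is only the tree-level textbook relation:

```python
import math

# Measured weak mixing ("Weinberg") angle: sin^2(theta_W) is roughly 0.23.
SIN2_THETA_W = 0.23
THETA_W = math.asin(math.sqrt(SIN2_THETA_W))

def mix(b, w3, theta=THETA_W):
    """Rotate the (B, W3) field components into (photon, Z).

    The photon is the combination that ignores the background Higgs
    (and so stays massless); the Z is the combination that feels it
    (and so gets a mass).
    """
    photon = math.cos(theta) * b + math.sin(theta) * w3
    z = -math.sin(theta) * b + math.cos(theta) * w3
    return photon, z

# One payoff of the same angle: at tree level, m_W = m_Z * cos(theta_W).
m_z = 91.19  # GeV, measured
m_w_predicted = m_z * math.cos(THETA_W)
print(f"predicted W mass: {m_w_predicted:.1f} GeV")  # close to the measured ~80.4 GeV
```

The point of the sketch is that one measured angle ties together the mixing of the fields and the ratio of the W and Z masses, which is part of what makes the unification testable.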

This, broadly speaking, is the kind of unification physicists look for. It doesn’t have to be a “mix” of two different forces: most of the models physicists imagine start with a single force. But the basic ideas are the same: that if you “zoom in” enough you see a simpler model, but that model is interacting with something that “by accident” picks a particular direction, so that as we zoom out different parts of the model behave in different ways. In that way, you could get from a single force to all the different forces we observe.

That “by accident” is important here, because that accident can be changed. That’s why I said earlier that real unification lets us alter the world.

To be clear, we can’t change the background Higgs field with current technology. The biggest collider we have can just make a tiny, temporary fluctuation (that’s what the Higgs boson is). But one implication of electroweak unification is that, with enough technology, we could. Because those two forces are unified, and because that unification is physical, with a physical cause, it’s possible to alter that cause, to change the mix and change the balance. This is why this kind of unification is such a big deal, why it’s not the sort of thing you can just chalk up to “interpretation” and ignore: when two forces are unified in this way, it lets us do new things.

Mathematical unification is valuable. It’s great when we can look at different things and describe them in the same language, or use ideas from one to understand the other. But it’s not the same thing as physical unification. When two forces really unify, it’s an undeniable physical fact about the world. When two forces unify, it does something.

How the Higgs Is, and Is Not, Like an Eel

In the past, what did we know about eel reproduction? What do we know now?

The answer to both questions is, surprisingly little! For those who don’t know the story, I recommend this New Yorker article. Eels turn out to have a quite complicated life cycle, and can only reproduce in the very last stage. Different kinds of eels from all over Europe and the Americas spawn in just one place: the Sargasso Sea. But while researchers have been able to find newborn eels in those waters, and more recently track a few mature adults on their migration back, no-one has yet observed an eel in the act. Biologists may be able to infer quite a bit, but with no direct evidence yet the truth may be even more surprising than they expect. The details of eel reproduction are an ongoing mystery, the “eel question” one of the field’s most enduring.

But of course this isn’t an eel blog. I’m here to answer a different question.

In the past, what did we know about the Higgs boson? What do we know now?

Ask some physicists, and they’ll say that even before the LHC everyone knew the Higgs existed. While this isn’t quite true, it is certainly true that something like the Higgs boson had to exist. Observations of other particles, the W and Z bosons in particular, gave good evidence for some kind of “Higgs mechanism”, that gives other particles mass in a “Higgs-like-way”. A Higgs boson was in some sense the simplest option, but there could have been more than one, or a different sort of process instead. Some of these alternatives may have been sensible, others as silly as believing that eels come from horses’ tails. Until 2012, when the Higgs boson was observed, we really didn’t know.

We also didn’t know one other piece of information: the Higgs boson’s mass. That tells us, among other things, how much energy we need to make one. Physicists were pretty sure the LHC was capable of producing a Higgs boson, but they weren’t sure where or how they’d find it, or how much energy would ultimately be involved.

Now thanks to the LHC, we know the mass of the Higgs boson, and we can rule out some of the “alternative” theories. But there’s still quite a bit we haven’t observed. In particular, we haven’t observed many of the Higgs boson’s couplings.

The couplings of a quantum field are how it interacts, both with other quantum fields and with itself. In the case of the Higgs, interacting with other particles gives those particles mass, while interacting with itself is how it itself gains mass. Since we know the masses of these particles, we can infer what these couplings should be, at least for the simplest model. But, like the eels, the truth may yet surprise us. Nothing guarantees that the simplest model is the right one: what we call simplicity is a judgement based on aesthetics, on how we happen to write models down. Nature may well choose differently. All we can honestly do is parametrize our ignorance.
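In the simplest model, those couplings follow directly from the masses: each fermion’s coupling to the Higgs (its “Yukawa coupling”) is y = √2 m / v, where v ≈ 246 GeV is the background Higgs value. A quick sketch of that inference (masses in GeV, rounded; measuring these couplings independently, rather than inferring them, is exactly the part we haven’t finished):

```python
import math

HIGGS_VEV = 246.0  # GeV, the background "vacuum" value of the Higgs field

def yukawa(mass_gev):
    """Yukawa coupling a fermion of this mass should have,
    *if* the simplest (Standard Model) Higgs story is right."""
    return math.sqrt(2) * mass_gev / HIGGS_VEV

masses = {"electron": 0.000511, "muon": 0.1057, "bottom": 4.18, "top": 172.8}
for name, m in masses.items():
    print(f"{name:8s} y = {yukawa(m):.2e}")
# The top quark's coupling comes out very close to 1 -- a curiosity
# the simplest model offers no explanation for.
```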

In the case of the eels, each failure to observe their reproduction deepens the mystery. What are they doing that is so elusive, so impossible to discover? In this, eels are different from the Higgs boson. We know why we haven’t observed the Higgs boson coupling to itself, at least according to our simplest models: we’d need a higher-energy collider, more powerful than the LHC, to see it. That’s an expensive proposition, much more expensive than using satellites to follow eels around the ocean. Because our failure to observe the Higgs self-coupling is itself no mystery, our simplest models could still be correct: as theorists, we probably have it easier than the biologists. But if we want to verify our models in the real world, we have it much harder.

What I Was Not Saying in My Last Post

Science communication is a gradual process. Anything we say is incomplete, prone to cause misunderstanding. Luckily, we can keep talking, give a new explanation that corrects those misunderstandings. This of course will lead to new misunderstandings. We then explain again, and so on. It sounds fruitless, but in practice our audience nevertheless gets closer and closer to the truth.

Last week, I tried to explain physicists’ notion of a fundamental particle. In particular, I wanted to explain what these particles aren’t: tiny, indestructible spheres, like Democritus imagined. Instead, I emphasized the idea of fields, interacting and exchanging energy, with particles as just the tip of the field iceberg.

I’ve given this kind of explanation before. And when I do, there are two things people often misunderstand. These correspond to two topics which use very similar language, but talk about different things. So this week, I thought I’d get ahead of the game and correct those misunderstandings.

The first misunderstanding: None of that post was quantum.

If you’ve heard physicists explain quantum mechanics, you’ve probably heard about wave-particle duality. Things we thought were waves, like light, also behave like particles; things we thought were particles, like electrons, also behave like waves.

If that’s on your mind, and you see me say particles don’t exist, maybe you think I mean waves exist instead. Maybe when I say “fields”, you think I’m talking about waves. Maybe you think I’m choosing one side of the duality, saying that waves exist and particles don’t.

To be 100% clear: I am not saying that.

Particles and waves, in quantum physics, are both manifestations of fields. Is your field just at one specific point? Then it’s a particle. Is it spread out, with a fixed wavelength and frequency? Then it’s a wave. These are the two concepts connected by wave-particle duality, where the same object can behave differently depending on what you measure. And both of them, to be clear, come from fields. Neither is the kind of thing Democritus imagined.

The second misunderstanding: This isn’t about on-shell vs. off-shell.

Some of you have seen some more “advanced” science popularization. In particular, you might have listened to Nima Arkani-Hamed, of amplituhedron fame, talk about his perspective on particle physics. Nima thinks we need to reformulate particle physics, as much as possible, “on-shell”. “On-shell” means that particles obey their equations of motion; normally, quantum calculations involve “off-shell” particles that violate those equations.

To again be clear: I’m not arguing with Nima here.

Nima (and other people in our field) will sometimes talk about on-shell vs. off-shell as if it were about particles vs. fields. Normal physicists will write down a general field and let it be off-shell; we try to do calculations with particles that are on-shell. But once again, on-shell doesn’t mean Democritus-style. We still don’t know what a fully on-shell picture of physics will look like. Chances are it won’t look like the picture of sloshing, omnipresent fields we started with, at least not exactly. But it won’t bring back indivisible, unchangeable atoms. Those are gone, and we have no reason to bring them back.

These Ain’t Democritus’s Particles

Physicists talk a lot about fundamental particles. But what do we mean by fundamental?

The Ancient Greek philosopher Democritus thought the world was composed of fundamental indivisible objects, constantly in motion. He called these objects “atoms”, and believed they could never be created or destroyed, with every other phenomenon explained by different types of interlocking atoms.

The things we call atoms today aren’t really like this, as you probably know. Atoms aren’t indivisible: their electrons can be split from their nuclei, and with more energy their nuclei can be split into protons and neutrons. More energy yet, and protons and neutrons can in turn be split into quarks. Still, at this point you might wonder: could quarks be Democritus’s atoms?

In a word, no. Nonetheless, quarks are, as far as we know, fundamental particles. As it turns out, our “fundamental” is very different from Democritus’s. Our fundamental particles can transform.

Think about beta decay. You might be used to thinking of it in terms of protons and neutrons: an unstable neutron decays, becoming a proton, an electron, and an (electron-anti-)neutrino. You might think that when the neutron decays, it literally “decays”, falling apart into smaller pieces.

But when you look at the quarks, the neutron’s smallest pieces, that isn’t the picture at all. In beta decay, a down quark in the neutron changes, turning into an up quark and an unstable W boson. The W boson then decays into an electron and a neutrino, while the up quark becomes part of the new proton. Even looking at the most fundamental particles we know, Democritus’s picture of unchanging atoms just isn’t true.
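You can check the bookkeeping yourself: at each step of that story, electric charge (in units of the proton charge) balances exactly, even as the particles transform. A tiny sketch using exact fractions:

```python
from fractions import Fraction as F

# Electric charges in units of the proton charge.
CHARGE = {
    "up": F(2, 3), "down": F(-1, 3),
    "W-": F(-1), "electron": F(-1), "antineutrino": F(0),
}

def charge_conserved(before, after):
    """Does a reaction conserve total electric charge?"""
    return sum(CHARGE[p] for p in before) == sum(CHARGE[p] for p in after)

# Step 1: a down quark turns into an up quark and a W- boson.
print(charge_conserved(["down"], ["up", "W-"]))                # True
# Step 2: the W- decays into an electron and an antineutrino.
print(charge_conserved(["W-"], ["electron", "antineutrino"]))  # True
```

Charge is conserved, but the particles themselves are not: nothing here “falls apart into smaller pieces”.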

Could there be some even lower level of reality that works the way Democritus imagined? It’s not impossible. But the key insight of modern particle physics is that there doesn’t need to be.

As far as we know, up quarks and down quarks are both fundamental. Neither is “made of” the other, or “made of” anything else. But they also aren’t little round indestructible balls. They’re manifestations of quantum fields, “ripples” that slosh from one sort to another in complicated ways.

When we ask which particles are fundamental, we’re asking what quantum fields we need to describe reality. We’re asking for the simplest explanation, the simplest mathematical model, that’s consistent with everything we could observe. So “fundamental” doesn’t end up meaning indivisible, or unchanging. It’s fundamental like an axiom: used to derive the rest.

QCD and Reductionism: Stranger Than You’d Think

Earlier this year, I made a list of topics I wanted to understand. The most abstract and technical of them was something called “Wilsonian effective field theory”. I still don’t understand Wilsonian effective field theory. But while thinking about it, I noticed something that seemed weird. It’s something I think many physicists already understand, but that hasn’t really sunk in with the public yet.

There’s an old problem in particle physics, described in many different ways over the years. Take our theories and try to calculate some reasonable number (say, the angle an electron turns in a magnetic field), and instead of that reasonable number we get infinity. We fix this problem with a process called renormalization that hides that infinity away, changing the “normalization” of some constant like a mass or a charge. While renormalization first seemed like a shady trick, physicists eventually understood it better. First, we thought of it as a way to work around our ignorance, that the true final theory would have no infinities at all. Later, physicists instead thought about renormalization in terms of scaling.

Imagine looking at the world on a camera screen. You can zoom in, or zoom out. The further you zoom out, the more details you’ll miss: they’re just too small to be visible on your screen. You can guess what they might be, but your picture will be different depending on how you zoom.

In particle physics, many of our theories are like that camera. They come with a choice of “zoom setting”, a minimum scale where they still effectively tell the whole story. We call theories like these effective field theories. Some physicists argue that these are all we can ever have: since our experiments are never perfect, there will always be a scale so small we have no evidence about it.

In general, theories can be quite different at different scales. Some theories, though, are especially nice: they look almost the same as we zoom in to smaller scales. The only things that change are the mass of different particles, and their charges.

Trippy

One theory like this is Quantum Chromodynamics (or QCD), the theory of quarks and gluons. Zoom in, and the theory looks pretty much the same, with one crucial change: the force between particles gets weaker. There’s a number, called the “coupling constant”, that describes how strong a force is; think of it as sort of like an electric charge. As you zoom in to quarks and gluons, you find you can still describe them with QCD, just with a smaller coupling constant. If you could zoom “all the way in”, the constant (and thus the force between particles) would be zero.
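The “zoom setting” dependence can be written down explicitly. At the first (“one-loop”) approximation, the QCD coupling α_s at an energy scale Q follows from its value at a reference scale μ via α_s(Q) = α_s(μ) / (1 + α_s(μ)·b₀/(4π)·ln(Q²/μ²)), with b₀ = 11 − 2n_f/3 for n_f quark flavors. A rough sketch with the standard measured inputs (remember this one-loop formula is only the leading approximation):

```python
import math

ALPHA_S_MZ = 0.118  # measured coupling at the Z boson mass scale
M_Z = 91.19         # GeV
N_FLAVORS = 5
B0 = 11 - 2 * N_FLAVORS / 3  # one-loop beta-function coefficient

def alpha_s(q_gev, alpha_ref=ALPHA_S_MZ, mu=M_Z):
    """One-loop running QCD coupling at energy scale q_gev (GeV)."""
    return alpha_ref / (1 + alpha_ref * B0 / (4 * math.pi)
                        * math.log(q_gev**2 / mu**2))

# Zooming in (higher energy = smaller distance), the force weakens:
for q in [91.19, 1000.0, 1e6, 1e15]:
    print(f"alpha_s at {q:.0e} GeV = {alpha_s(q):.4f}")
```

Run it and you can watch the coupling shrink as the scale grows, approaching (but never reaching) zero.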

This makes QCD a rare kind of theory: one that could be complete to any scale. No matter how far you zoom in, QCD still “makes sense”. It never gives contradictions or nonsense results. That doesn’t mean it’s actually true: it interacts with other forces, like gravity, that don’t have complete theories, so it probably isn’t complete either. But if we didn’t have gravity or electricity or magnetism, if all we had were quarks and gluons, then QCD could have been the final theory that described them.

And this starts feeling a little weird, when you think about reductionism.

Philosophers define reductionism in many different ways. I won’t be that sophisticated. Instead, I’ll suggest the following naive definition: Reductionism is the claim that theories on larger scales reduce to theories on smaller scales.

Here “reduce to” is intentionally a bit vague. It might mean “are caused by” or “can be derived from” or “are explained by”. I’m gesturing at the sort of thing people mean when they say that biology reduces to chemistry, or chemistry to physics.

What happens when we think about QCD, with this intuition?

QCD on larger scales does indeed reduce to QCD on smaller scales. If you want to ask why QCD on some scale has some coupling constant, you can explain it by looking at the (smaller) QCD coupling constant on a smaller scale. If you have equations for QCD on a smaller scale, you can derive the right equations for a larger scale. In some sense, everything you observe in your larger-scale theory of QCD is caused by what happens in your smaller-scale theory of QCD.

But this isn’t quite the reductionism you’re used to. When we say biology reduces to chemistry, or chemistry reduces to physics, we’re thinking of just a few layers: one specific theory reduces to another specific theory. Here, we have an infinite number of layers, every point on the scale from large to small, each one explained by the next.

Maybe you think you can get out of this, by saying that everything should reduce to the smallest scale. But remember, the smaller the scale the smaller our “coupling constant”, and the weaker the forces between particles. At “the smallest scale”, the coupling constant is zero, and there is no force. It’s only when you put your hand on the zoom knob and start turning that the force starts to exist.

It’s reductionism, perhaps, but not as we know it.

Now that I understand this a bit better, I get some of the objections to my post about naturalness a while back. I was being too naive about this kind of thing, as some of the commenters (particularly Jacques Distler) noted. I believe there’s a way to rephrase the argument so that it still works, but I’d have to think harder about how.

I also get why I was uneasy about Sabine Hossenfelder’s FQXi essay on reductionism. She considered a more complicated case, where the chain from large to small scale could be broken, a more elaborate variant of a problem in Quantum Electrodynamics. But if I’m right here, then it’s not clear that scaling in effective field theories is even the right way to think about this. When you have an infinite series of theories that reduce to other theories, you’re pretty far removed from what most people mean by reductionism.

Finally, this is the clearest reason I can find why you can’t do science without an observer. The “zoom” is just a choice we scientists make, an arbitrary scale describing our ignorance. But without it, there’s no way to describe QCD. The notion of scale is an inherent and inextricable part of the theory, and it doesn’t have to mean our theory is incomplete.

Experts, please chime in if I’m wrong on the physics here. As I mentioned at the beginning, I still don’t think I understand Wilsonian effective field theory. If I’m right though, this seems genuinely weird, and something more of the public should appreciate.

Congratulations to Simon Caron-Huot and Pedro Vieira for the New Horizons Prize!

The 2020 Breakthrough Prizes were announced last week, awards in physics, mathematics, and life sciences. The physics prize was awarded to the Event Horizon Telescope, with the $3 million award to be split among the 347 members of the collaboration. The Breakthrough Prize Foundation also announced this year’s New Horizons prizes, six smaller awards of $100,000 each to younger researchers in physics and math. One of those awards went to two people I know, Simon Caron-Huot and Pedro Vieira. Extremely specialized as I am, I hope no-one minds if I ignore all the other awards and talk about them.

The award for Caron-Huot and Vieira is “For profound contributions to the understanding of quantum field theory.” Indeed, both Simon and Pedro have built their reputations as explorers of quantum field theories, the kind of theories we use in particle physics. Both have found surprising behavior in these theories, where a theory people thought they understood did something quite unexpected. Both also developed new calculation methods, using these theories to compute things that were thought to be out of reach. But this is all rather vague, so let me be a bit more specific about each of them:

Simon Caron-Huot is known for his penetrating and mysterious insight. He has the ability to take a problem and think about it in a totally original way, coming up with a solution that no-one else could have thought of. When I first worked with him, he took a calculation that the rest of us would have taken a month to do and did it by himself in a week. His insight seems to come in part from familiarity with the physics literature, forgotten papers from the 60’s and 70’s that turn out surprisingly useful today. Largely, though, his insight is his own, an inimitable style that few can anticipate. His interests are broad, from exotic toy models to well-tested theories that describe the real world, covering a wide range of methods and approaches. Physicists tend to describe each other in terms of standard “virtues”: depth and breadth, knowledge and originality. Simon somehow seems to embody all of them.

Pedro Vieira is mostly known for his work with integrable theories. These are theories where if one knows the right trick one can “solve” the theory exactly, rather than using the approximations that physicists often rely on. Pedro was a mentor to me when I was a postdoc at the Perimeter Institute, and one thing he taught me was to always expect more. When calculating with computer code I would wait hours for a result, while Pedro would ask “why should it take hours?”, and if we couldn’t propose a reason would insist we find a quicker way. This attitude paid off in his research, where he has used integrable theories to calculate things others would have thought out of reach. His Pentagon Operator Product Expansion, or “POPE”, uses these tricks to calculate probabilities that particles collide, and more recently he pushed further to other calculations with a hexagon-based approach (which one might call the “HOPE”). Now he’s working on “bootstrapping” up complicated theories from simple physical principles, once again asking “why should this be hard?”

The Real E=mc^2

It’s the most famous equation in all of physics, written on thousands of chalkboard stock photos. Part of its charm is its simplicity: E for energy, m for mass, c for the speed of light, just a few simple symbols in a one-line equation. Despite its simplicity, E=mc^2 is deep and important enough that there are books dedicated to explaining it.

What does E=mc^2 mean?

Some will tell you it means mass can be converted to energy, enabling nuclear power and the atomic bomb. This is a useful picture for chemists, who like to think about balancing ingredients: this much mass on one side, this much energy on the other. It’s not the best picture for physicists, though. It makes it sound like energy is some form of “stuff” you can pour into your chemistry set flask, and energy really isn’t like that.

There’s another story you might have heard, in older books. In that story, E=mc^2 tells you that in relativity mass, like distance and time, is relative. The more energy you have, the more mass you have. Those books will tell you that this is why you can’t go faster than light: the faster you go, the greater your mass, and the harder it is to speed up.

Modern physicists don’t talk about it that way. In fact, we don’t even write E=mc^2 that way. We’re more likely to write:

E=\frac{mc^2}{\sqrt{1-\frac{v^2}{c^2}}}

“v” here stands for the velocity, how fast the mass is moving. The faster the mass moves, the more energy it has. Take v to zero, and you get back the familiar E=mc^2.
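You can see both limits with a few lines of code, writing the energy as E = γ·mc², where γ = 1/√(1 − v²/c²) is the Lorentz factor (a sketch, in units where c = 1 so velocities are fractions of light speed):

```python
import math

def gamma(v):
    """Lorentz factor, with velocity v as a fraction of the speed of light."""
    return 1.0 / math.sqrt(1.0 - v**2)

def energy(m, v):
    """Total energy of rest mass m moving at velocity v (units with c = 1),
    so energy comes out in mass units."""
    return gamma(v) * m

m = 1.0
print(energy(m, 0.0))   # 1.0  -> at rest, E = mc^2 exactly
print(energy(m, 0.6))   # 1.25 -> noticeably more energy at 60% light speed
print(energy(m, 0.99))  # ~7.1 -> blows up as v approaches the speed of light
```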

The older books weren’t lying to you, but they were thinking about a different notion of mass: “relativistic mass” m_r instead of “rest mass” m_0, related like this:

m_r=\frac{m_0}{\sqrt{1-\frac{v^2}{c^2}}}

which explains the difference in how we write E=mc^2.

Why the change? In part, it’s because of particle physics. In particle physics, we care about the rest mass of particles. Different particles have different rest mass: each electron has one rest mass, each top quark has another, regardless of how fast they’re going. They still get more energy, and harder to speed up, the faster they go, but we don’t describe it as a change in mass. Our equations match the old books, we just talk about them differently.

Of course, you can dig deeper, and things get stranger. You might hear that mass does change with energy, but in a very different way. You might hear that mass is energy, that they’re just two perspectives on the same thing. But those are stories for another day.

I titled this post “The Real E=mc^2”, but to clarify, none of these explanations are more “real” than the others. They’re words, useful in different situations and for different people. “The Real E=mc^2” isn’t the E=mc^2 of nuclear chemists, or old books, or modern physicists. It’s the theory itself, the mathematical rules and principles that all the rest are just trying to describe.

Nonperturbative Methods for Conformal Theories in Natal

I’m at a conference this week, on Nonperturbative Methods for Conformal Theories, in Natal on the northern coast of Brazil.

Where even the physics institutes have their own little rainforests.

“Nonperturbative” means that most of the people at this conference don’t use the loop-by-loop approximation of Feynman diagrams. Instead, they try to calculate things that don’t require approximations, finding formulas that work even for theories where the forces involved are very strong. In practice this works best in what are called “conformal” theories; roughly speaking, these are theories that look the same whichever “scale” you use. Sometimes these theories are “integrable”, theories that can be “solved” exactly with no approximation. Sometimes these theories can be “bootstrapped”, starting with a guess and seeing how various principles of physics constrain it, mapping out a kind of “space of allowed theories”. Both approaches, integrability and bootstrap, are present at this conference.

This isn’t quite my community, but there’s a fair bit of overlap. We care about many of the same theories, like N=4 super Yang-Mills. We care about tricks to do integrals better, or to constrain mathematical guesses better, and we can trade these kinds of tricks and give each other advice. And while my work is typically “perturbative”, I did have one nonperturbative result to talk about, one which turns out to be more closely related to the methods these folks use than I had appreciated.

The Particle Physics Curse of Knowledge

There’s a debate raging right now in particle physics, about whether and how to build the next big collider. CERN’s Future Circular Collider group has been studying different options, some more expensive and some less (Peter Woit has a nice summary of these here). This year, the European particle physics community will debate these proposals, deciding whether to include them in an updated European Strategy for Particle Physics. After that, it will be up to the various countries that are members of CERN to decide whether to fund the proposal. With the costs of the more expensive options hovering around $20 billion, this has led to substantial controversy.

I’m not going to offer an opinion here one way or another. Weighing this kind of thing requires knowing the alternatives: what else the European particle physics community might lobby for in the next few years, and once they decide, what other budget priorities each individual country has. I know almost nothing about either.

Instead of an opinion, I have an observation:

Imagine that primatologists had proposed a $20 billion primate center, able to observe gorillas in greater detail than ever before. The proposal might be criticized in any number of ways: there could be much cheaper ways to accomplish the same thing, the project might fail, it might be that we simply don’t care enough about primate behavior to spend $20 billion on it.

What you wouldn’t expect is the claim that a $20 billion primate center would teach us nothing new.

It probably wouldn’t teach us “$20 billion worth of science”, whatever that means. But a center like that would be guaranteed to discover something. That’s because we don’t expect primatologists’ theories to be exact. Even if gorillas behaved roughly as primatologists expected, the center would still see new behaviors, just as a consequence of looking at a new level of detail.

To pick a physics example, consider the gravitational wave telescope LIGO. Before its first observation of two black holes merging (detected in 2015 and announced in 2016), LIGO faced substantial criticism. After their initial experiments didn’t detect anything, many physicists thought that the project was doomed to fail: that it would never be sensitive enough to detect the faint signals of gravitational waves past the messy vibrations of everyday life on Earth.

When it finally worked, though, LIGO did teach us something new. Not the existence of gravitational waves, we already knew about them. Rather, LIGO taught us new things about the kinds of black holes that exist. LIGO observed much bigger black holes than astronomers expected, a surprise big enough that it left some people skeptical. Even if it hadn’t, though, we still would almost certainly observe something new: there’s no reason to expect astronomers to perfectly predict the size of the universe’s black holes.

Particle physics is different.

I don’t want to dismiss the work that goes in to collider physics (far too many people have dismissed it recently). Much, perhaps most, of the work on the LHC is dedicated not to detecting new particles, but to confirming and measuring the Standard Model. A new collider would bring heroic scientific effort. We’d learn revolutionary new things about how to build colliders, how to analyze data from colliders, and how to use the Standard Model to make predictions for colliders.

In the end, though, we expect those predictions to work. And not just to work reasonably well, but to work perfectly. While we might see something beyond the Standard Model, the default expectation is that we won’t, that after doing the experiments and analyzing the data and comparing to predictions we’ll get results that are statistically indistinguishable from an equation we can fit on a T-shirt. We’ll fix the constants on that T-shirt to an unprecedented level of precision, yes, but the form of the equation may well stay completely the same.

I don’t think there’s another field where that’s even an option. Nowhere else in all of science could we observe the world in unprecedented detail, capturing phenomena that had never been seen before…and end up perfectly matching our existing theory. There’s no other science where anyone would even expect that to happen.

That makes the argument here different from any argument we’ve faced before. It forces people to consider their deep priorities, to think not just about the best way to carry out this test or that but about what science is supposed to be for. I don’t think there are any easy answers. We’re in what may well be a genuinely new situation, and we have to figure out how to navigate it together.

Postscript: I still don’t want to give an opinion, but given that I didn’t have room for this above let me give a fragment of an opinion: Higgs triple couplings!!!

Made of Quarks Versus Made of Strings

When you learn physics in school, you learn it in terms of building blocks.

First, you learn about atoms. Indivisible elements, as the Greeks foretold…until you learn that they aren’t indivisible. You learn that atoms are made of electrons, protons, and neutrons. Then you learn that protons and neutrons aren’t indivisible either, they’re made of quarks. They’re what physicists call composite particles, particles made of other particles stuck together.

Hearing this story, you notice a pattern. Each time physicists find a more fundamental theory, they find that what they thought were indivisible particles are actually composite. So when you hear physicists talking about the next, more fundamental theory, you might guess it has to work the same way. If quarks are made of, for example, strings, then each quark is made of many strings, right?

Nope! As it turns out, there are two different things physicists can mean when they say a particle is “made of” a more fundamental particle. Sometimes they mean the particle is composite, like the proton is made of quarks. But sometimes, like when they say particles are “made of strings”, they mean something different.

To understand what this “something different” is, let’s go back to quarks for a moment. You might have heard there are six types, or flavors, of quarks: up and down, strange and charm, top and bottom. The different types have different mass and electric charge. You might have also heard that quarks come in different colors: red, green, and blue. You might wonder, then: aren’t there really eighteen types of quark? Red up quarks, green top quarks, and so forth?

Physicists don’t think about it that way. Unlike the different flavors, the different colors of quark have a more unified mathematical description. Changing the color of a quark doesn’t change its mass or electric charge. All it changes is how the quark interacts with other particles via the strong nuclear force. Know how one color works, and you know how the other colors work. Different colors can also “mix” together, similarly to how different situations can mix together in quantum mechanics: just as Schrodinger’s cat can be both alive and dead, a quark can be both red and green.
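As a rough sketch of what that unified description looks like (in standard quantum-mechanics notation, nothing specific to this post), a quark’s color state is a superposition of the three basis colors:

```latex
% A quark's color state as a superposition of the three basis colors.
% Here \alpha, \beta, \gamma are complex amplitudes with
% |\alpha|^2 + |\beta|^2 + |\gamma|^2 = 1.
\[
  |q\rangle \;=\; \alpha\,|r\rangle \;+\; \beta\,|g\rangle \;+\; \gamma\,|b\rangle
\]
% The strong force's symmetry rotates these colors into one another,
% |q\rangle \to U|q\rangle with U a 3x3 unitary matrix, without
% changing the quark's mass or electric charge.
```

Because the symmetry only shuffles colors, every physical property except the strong-force interaction is blind to which color a quark has.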

This same kind of thing is involved in another example, electroweak unification. You might have heard that electromagnetism and the weak nuclear force are secretly the same thing. Each force has corresponding particles: the familiar photon for electromagnetism, and W and Z bosons for the weak nuclear force. Unlike the different colors of quarks, photons and W and Z bosons have different masses from each other. It turns out, though, that they still come from a unified mathematical description: they’re “mixtures” (in the same Schrodinger’s cat-esque sense) of the particles from two more fundamental forces (sometimes called “weak isospin” and “weak hypercharge”). The reason they have different masses isn’t their own fault, but the fault of the Higgs: the Higgs field we have in our universe interacts with different parts of this unified force differently, so the corresponding particles end up with different masses.
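In the usual textbook conventions, that mixing is a rotation by the “weak mixing angle” \(\theta_W\): the photon and Z boson fields are combinations of the weak isospin field \(W^3\) and the weak hypercharge field \(B\):

```latex
% Photon (A) and Z boson fields as mixtures of the weak isospin field W^3
% and the weak hypercharge field B, rotated by the weak mixing angle \theta_W.
\[
\begin{aligned}
  A_\mu &= \phantom{-}B_\mu \cos\theta_W + W^3_\mu \sin\theta_W \\
  Z_\mu &= -B_\mu \sin\theta_W + W^3_\mu \cos\theta_W
\end{aligned}
\]
% The Higgs's vacuum value fixes \theta_W and gives the Z its mass,
% while the photon combination stays massless.
```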

A physicist might say that electromagnetism and the weak force are “made of” weak isospin and weak hypercharge. And it’s that kind of thing that physicists mean when they say that quarks might be made of strings, or the like: not that quarks are composite, but that quarks and other particles might have a unified mathematical description, and look different only because they’re interacting differently with something else.

This isn’t to say that quarks and electrons can’t be composite as well. They might be, we don’t know for sure. If they are, the forces binding them together must be very strong, strong enough that our most powerful colliders can’t make them wiggle even a little out of shape. The tricky part is that composite particles get mass from the energy holding them together. A particle held together by very powerful forces would normally be very massive; if you want it to end up lighter, you have to construct your theory carefully to do that. So while occasionally people will suggest theories where quarks or electrons are composite, these theories aren’t common. Most of the time, if a physicist says that quarks or electrons are “made of” something else, they mean something more like “particles are made of strings” than like “protons are made of quarks”.
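The proton shows just how much of a composite particle’s mass can come from binding energy rather than from its constituents. Using the standard measured quark masses, the two up quarks and one down quark in a proton account for only about one percent of its mass:

```latex
% Proton mass versus the (approximate) masses of its constituent quarks;
% nearly all of the difference is strong-force binding energy.
\[
  m_p c^2 \approx 938~\mathrm{MeV},
  \qquad
  (2 m_u + m_d)\,c^2 \approx 9~\mathrm{MeV}
\]
```

The other ninety-nine percent is the energy of the strong force holding the quarks together, which is why a composite quark or electron that nonetheless ends up light would require careful theoretical engineering.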