Tag Archives: particle physics

Zero-Point Energy, Zero-Point Diagrams

Listen to a certain flavor of crackpot, or a certain kind of science fiction, and you’ll hear about zero-point energy. Limitless free energy drawn from quantum space-time itself, zero-point energy probably sounds like bullshit. Often it is. But lurking behind the pseudoscience and the fiction is a real physics concept, albeit one that doesn’t really work like those people imagine.

In quantum mechanics, the zero-point energy is the lowest energy a particular system can have. That number doesn’t actually have to be zero, even for empty space. People sometimes describe this in terms of so-called virtual particles, popping up from nothing in particle-antiparticle pairs only to annihilate each other again, contributing energy in the absence of any “real particles”. There’s a real force, the Casimir effect, that gets attributed to this, a force that pulls two metal plates together even with no charge or extra electromagnetic field. The same bubbling of pairs of virtual particles also gets used to explain the Hawking radiation of black holes.
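The standard example of a nonzero lowest energy is the quantum harmonic oscillator, whose allowed energies are

\[ E_n = \hbar\omega\left(n + \tfrac{1}{2}\right), \qquad n = 0, 1, 2, \ldots \]

so even the lowest state still carries an energy of ½ℏω. Quantum fields behave like an enormous collection of such oscillators, one for each possible wavelength, which is why even empty space ends up with a zero-point energy.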

I’d like to try explaining all of these things in a different way, one that might clear up some common misconceptions. To start, let’s talk about, not zero-point energy, but zero-point diagrams.

Feynman diagrams are a tool we use to study particle physics. We start with a question: if some specific particles come together and interact, what’s the chance that some (perhaps different) particles emerge? To answer it, we draw lines representing the particles going in and out, then connect them in every way allowed by our theory. Finally, we translate the diagrams to numbers, to get an estimate for the probability. In particle physics slang, the number of “points” is the total number of particles: particles in, plus particles out. For example, let’s say we want to know the chance that two electrons go in and two electrons come out. That gives us a “four-point” diagram: two in, plus two out. A zero-point diagram, then, means zero particles in, zero particles out.

A four-point diagram and a zero-point diagram

(Note that this isn’t why zero-point energy is called zero-point energy, as far as I can tell. Zero-point energy is an older term from before Feynman diagrams.)

Remember, each Feynman diagram answers a specific question, about the chance of particles behaving in a certain way. You might wonder, what question does a zero-point diagram answer? The chance that nothing goes to nothing? Why would you want to know that?

To answer, I’d like to bring up some friends of mine, who do something that might sound equally strange: they calculate one-point diagrams, one particle goes to none. This isn’t strange for them because they study theories with defects.

For some reason, they didn’t like my suggestion to use this stamp on their papers

Normally in particle physics, we think about our particles in an empty, featureless space. We don’t have to, though. One thing we can do is introduce features in this space, like walls and mirrors, and try to see what effect they have. We call these features “defects”.

If there’s a defect like that, then it makes sense to calculate a one-point diagram, because your one particle can interact with something that’s not a particle: it can interact with the defect.

A one-point diagram with a wall, or “defect”

You might see where this is going: let’s say you think there’s a force between two walls, that comes from quantum mechanics, and you want to calculate it. You could imagine it involves a diagram like this:

A “zero-point diagram” between two walls

Roughly speaking, this is the kind of thing you could use to calculate the Casimir effect, that mysterious quantum force between metal plates. And indeed, it involves a zero-point diagram.

Here’s the thing, though: metal plates aren’t just “defects”. They’re real physical objects, made of real physical particles. So while you can think of the Casimir effect with a “zero-point diagram” like that, you can also think of it with a normal diagram, more like the four-point diagram I showed you earlier: one that computes, not a force between defects, but a force between the actual electrons and protons that make up the two plates.

A lot of the time when physicists talk about pairs of virtual particles popping up out of the vacuum, they have in mind a picture like this. And often, you can do the same trick, and think about it instead as interactions between physical particles. There’s a story of roughly this kind for Hawking radiation: you can think of a black hole event horizon as “cutting in half” a zero-point diagram, and see pairs of particles going out from the black hole…but you can also do a calculation that looks more like particles interacting with a gravitational field.

This also might help you understand why, contra the crackpots and science fiction writers, zero-point energy isn’t a source of unlimited free energy. Yes, a force like the Casimir effect comes “from the vacuum” in some sense. But really, it’s a force between two particles. And just like the gravitational force between two particles, this doesn’t give you unlimited free power. You have to do the work to move the particles back over and over again, using up the same amount of energy you gained from the force to begin with. And unlike the forces you’re used to, these are typically very small effects, as usual for something that depends on quantum mechanics. So they’re even less useful as a power source than everyday forces.

Why do so many crackpots and authors expect zero-point energy to be a massive source of power? In part, this is due to mistakes physicists made early on.

Sometimes, when calculating a zero-point diagram (or any other diagram), we don’t get a sensible number. Instead, we get infinity. Physicists used to be baffled by this. Later, they understood the situation a bit better, and realized that those infinities were probably just due to our ignorance. We don’t know the ultimate high-energy theory, so it’s possible something happens at high energies to cancel those pesky infinities. Without knowing exactly what happens there, physicists would estimate by introducing a “cutoff” energy where they expected things to change.
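To give a feel for how such a cutoff estimate goes (just a back-of-the-envelope sketch): add up a zero-point energy of ½ℏω for every wavelength of a field, up to a cutoff energy Λ, and you get a vacuum energy density that grows like the fourth power of the cutoff,

\[ \rho_{\text{vac}} \sim \int^{\Lambda} \frac{d^3 k}{(2\pi)^3}\,\frac{\omega_k}{2} \sim \frac{\Lambda^4}{16\pi^2} \qquad (\text{in units where } \hbar = c = 1). \]

Put the cutoff near the Planck scale and the result comes out roughly 120 orders of magnitude larger than the vacuum energy we actually observe.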

That kind of calculation led to an estimate you might have heard of, that the zero-point energy inside a single light bulb could boil all the world’s oceans. That estimate gives a pretty impressive mental image…but it’s also wrong.

This kind of estimate is behind “the worst theoretical prediction in the history of physics”: it predicts that the cosmological constant, the force that speeds up the expansion of the universe, should be 120 orders of magnitude higher than its actual value (if that value isn’t just zero). If there really were enough energy inside each light bulb to boil the world’s oceans, the expansion of the universe would be quite different from what we observe.

At this point, it’s pretty clear there is something wrong with these kinds of “cutoff” estimates. The only unclear part is whether that’s due to something subtle or something obvious. But either way, this particular estimate is just wrong, and you shouldn’t take it seriously. Zero-point energy exists, but it isn’t the magical untapped free energy you hear about in stories. It’s tiny quantum corrections to the forces between particles.

Particles vs Waves, Particles vs Strings

On my “Who Am I?” page, I open with my background, calling myself a string theorist, then clarify: “in practice I’m more of a Particle Theorist, describing the world not in terms of short lengths of string but rather with particles that each occupy a single point in space”.

When I wrote that I didn’t think it would confuse people. Now that I’m older and wiser, I know people can be confused in a variety of ways. And since I recently saw someone confused about this particular phrase (yes I’m vagueblogging, but I suspect you’re reading this and know who you are 😉 ), I figured I’d explain it.

If you’ve learned a few things about quantum mechanics, maybe you have this slogan in mind:

“What we used to think of as particles are really waves. They spread out over an area, with peaks and troughs that interfere, and you never know exactly where you will measure them.”

With that in mind, my talk of “particles that each occupy a single point” doesn’t make sense. Doesn’t the slogan mean that particles don’t exist?

Here’s the thing: that’s the wrong slogan. The right slogan is just a bit different:

“What we used to think of as particles are ALSO waves. They spread out over an area, with peaks and troughs that interfere, and you never know exactly where you will measure them.”

The principle you were remembering is often called “wave-particle duality”. That doesn’t mean “particles don’t exist”. It means “waves and particles are the same thing”.

This matters, because just as wave-like properties are important, particle-like properties are important. And while it’s true that you can never know exactly where you will measure a particle, it’s also true that it’s useful, and even necessary, to think of it as occupying a single point.

That’s because particles can only affect each other when they’re at the same point. Physicists call this the principle of locality, the idea that there is no real “action at a distance”, everything happens because of something traveling from point A to point B. Wave-particle duality doesn’t change that, it just makes the specific point uncertain. It means you have to add up over every specific point where the particles could have interacted, but each term in your sum has to still involve a specific point: quantum mechanics doesn’t let particles affect each other non-locally.
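To make “summing over every specific point” a bit more concrete, interactions in quantum field theory enter through terms that look schematically like this (a sketch, not any particular theory):

\[ S_{\text{int}} \sim g \int d^4x \; \phi(x)\,\psi(x)\,\psi(x), \]

where every field is evaluated at the same spacetime point x. The amplitude for a process then adds up contributions from every possible choice of that point, but each contribution is strictly local.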

Strings, in turn, are a little bit different. Strings have length, particles don’t. Particles interact at a point, strings can interact anywhere along the string. Strings introduce a teeny bit of non-locality.

When you compare particles and waves, you’re thinking pre-quantum mechanics, two classical things neither of which is the full picture. When you compare particles and strings, both are quantum, both are also waves. But in a meaningful sense one occupies a single point, and the other doesn’t.

Unification That Does Something

I’ve got unification on the brain.

Recently, a commenter asked me what physicists mean when they say two forces unify. While typing up a response, I came across this passage, in a science fiction short story by Ted Chiang.

Physics admits of a lovely unification, not just at the level of fundamental forces, but when considering its extent and implications. Classifications like ‘optics’ or ‘thermodynamics’ are just straitjackets, preventing physicists from seeing countless intersections.

This passage sounds nice enough, but I feel like there’s a misunderstanding behind it. When physicists seek after unification, we’re talking about something quite specific. It’s not merely a matter of two topics intersecting, or describing them with the same math. We already plumb intersections between fields, including optics and thermodynamics. When we hope to find a unified theory, we do so because it does something. A real unified theory doesn’t just aid our calculations, it gives us new ways to alter the world.

To show you what I mean, let me start with something physicists already know: electroweak unification.

There’s a nice series of posts on the old Quantum Diaries blog that explains electroweak unification in detail. I’ll be a bit vaguer here.

You might have heard of four fundamental forces: gravity, electromagnetism, the strong nuclear force, and the weak nuclear force. You might have also heard that two of these forces are unified: the electromagnetic force and the weak nuclear force form something called the electroweak force.

What does it mean that these forces are unified? How does it work?

Zoom in far enough, and you don’t see the electromagnetic force and the weak force anymore. Instead you see two different forces, I’ll call them “W” and “B”. You’ll also see the Higgs field. And crucially, you’ll see the “W” and “B” forces interact with the Higgs.

The Higgs field is special because it has what’s called a “vacuum” value. Even in otherwise empty space, there’s some amount of “Higgsness” in the background, like the color of a piece of construction paper. This background Higgsness is in some sense an accident, just one stable way the universe happens to sit. In particular, it picks out an arbitrary kind of direction: parts of the “W” and “B” forces happen to interact with it, and parts don’t.

Now let’s zoom back out. We could, if we wanted, keep our eyes on the “W” and “B” forces. But that gets increasingly silly. As we zoom out we won’t be able to see the Higgs field anymore. Instead, we’ll just see different parts of the “W” and “B” behaving in drastically different ways, depending on whether or not they interact with the Higgs. It will make more sense to talk about mixes of the “W” and “B” fields, to distinguish the parts that are “lined up” with the background Higgs and the parts that aren’t. It’s like using “port” and “starboard” on a boat. You could use “north” and “south”, but that would get confusing pretty fast.

My cabin is on the west side of the ship…unless we’re sailing east….

What are those “mixes” of the “W” and “B” forces? Why, they’re the weak nuclear force and the electromagnetic force!
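For the curious, here’s what that mix looks like written out in the usual notation (signs and conventions vary, so treat this as a sketch):

\[ A_\mu = \cos\theta_W\, B_\mu + \sin\theta_W\, W^3_\mu, \qquad Z_\mu = \cos\theta_W\, W^3_\mu - \sin\theta_W\, B_\mu, \]

where A is the photon of electromagnetism, Z is one of the carriers of the weak force, W³ is one piece of the “W” force, and θ_W is the “weak mixing angle” that measures how the background Higgs lines everything up.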

This, broadly speaking, is the kind of unification physicists look for. It doesn’t have to be a “mix” of two different forces: most of the models physicists imagine start with a single force. But the basic ideas are the same: that if you “zoom in” enough you see a simpler model, but that model is interacting with something that “by accident” picks a particular direction, so that as we zoom out different parts of the model behave in different ways. In that way, you could get from a single force to all the different forces we observe.

That “by accident” is important here, because that accident can be changed. That’s why I said earlier that real unification lets us alter the world.

To be clear, we can’t change the background Higgs field with current technology. The biggest collider we have can just make a tiny, temporary fluctuation (that’s what the Higgs boson is). But one implication of electroweak unification is that, with enough technology, we could. Because those two forces are unified, and because that unification is physical, with a physical cause, it’s possible to alter that cause, to change the mix and change the balance. This is why this kind of unification is such a big deal, why it’s not the sort of thing you can just chalk up to “interpretation” and ignore: when two forces are unified in this way, it lets us do new things.

Mathematical unification is valuable. It’s great when we can look at different things and describe them in the same language, or use ideas from one to understand the other. But it’s not the same thing as physical unification. When two forces really unify, it’s an undeniable physical fact about the world. When two forces unify, it does something.

Formal Theory and Simulated Experiment

There are two kinds of theoretical physicists. Some, called phenomenologists, make predictions about the real world. Others, the so-called “formal theorists”, don’t. They work with the same kinds of theories as the phenomenologists, quantum field theories of the sort that have been so successful in understanding the subatomic world. But the specific theories they use are different: usually, toy models that aren’t intended to describe reality.

Most people get that this is valuable. It’s useful to study toy models, because they help us tackle the real world. But they stumble on another point. Sure, they say, you can study toy models…but then you should call yourself a mathematician, not a physicist.

I’m a “formal theorist”. And I’m very much not a mathematician, I’m definitely a physicist. Let me explain why, with an analogy.

As an undergrad, I spent some time working in a particle physics lab. The lab had developed a new particle detector chip, designed for a future experiment: the International Linear Collider. It was my job to test this chip.

Naturally, I couldn’t test the chip by flinging particles at it. For one, the collider it was designed for hadn’t been built yet! Instead, I had to use simulated input: send in electrical signals that mimicked the expected particles, and see what happened. In effect, I was using a kind of toy model, as a way to understand better how the chip worked.

I hope you agree that this kind of work counts as physics. It isn’t “just engineering” to feed simulated input into a chip. Not when the whole point of that chip is to go into a physics experiment. This kind of work is a large chunk of what an experimental physicist does.

As a formal theorist, I do something similar: my work with toy models is an important part of what a theoretical physicist does. I test out the “devices” of theoretical physics, the quantum-field-theoretic machinery that we use to investigate the world. Without that kind of careful testing on toy models, we’d have fewer tools to work with when we want to understand reality.

Ok, but you might object: an experimental physicist does eventually build the real experiment. They don’t just spend their career on simulated input. If someone only works on formal theory, shouldn’t that at least make them a mathematician, not a physicist?

Here’s the thing, though: after those summers in that lab, I didn’t end up as an experimental physicist. After working on that chip, I didn’t go on to perfect it for the International Linear Collider. But it would be rather bizarre if that, retroactively, made my work in that time “engineering” and not “physics”.

Oh, I should also mention that the International Linear Collider might not ever be built. So, there’s that.

Formal theory is part of physics because it cares directly about the goals of physics: understanding the real world. It is just one step towards that goal; it doesn’t address the real world alone. But neither do the people testing out chips for future colliders. Formal theory isn’t always useful; similarly, planned experiments don’t always get built. That doesn’t mean it’s not physics.

The Parameter Was Inside You All Along

Sabine Hossenfelder had an explainer video recently on how to tell science from pseudoscience. This is a famously difficult problem, so naturally we have different opinions. I actually think the picture she draws is reasonably sound. But while it is a good criterion to tell whether you yourself are doing pseudoscience, it’s surprisingly tricky to apply it to other people.

Hossenfelder argues that science, at its core, is about explaining observations. To tell whether something is science or pseudoscience you need to ask, first, if it agrees with observations, and second, if it is simpler than those observations. In particular, a scientist should prefer models with fewer parameters. If your model has so many parameters that you can fit any observation, you’re not being scientific.

This is a great rule of thumb, one that as Hossenfelder points out forms the basis of a whole raft of statistical techniques. It does rely on one tricky judgement, though: how many parameters does your model actually have?

Suppose I’m one of those wacky theorists who propose a whole new particle to explain some astronomical mystery. Hossenfelder, being more conservative in these things, proposes a model with no new particles. Neither of our models fit the data perfectly. Perhaps my model fits a little better, but after all it has one extra parameter, from the new particle. If we want to compare our models, we should take that into account, and penalize mine.
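To make the bookkeeping concrete, here’s a minimal sketch of how such a penalty is often applied, using the Akaike Information Criterion (AIC = 2k − 2 ln L̂, where k counts parameters and L̂ is the best-fit likelihood). The numbers here are made up purely for illustration:

```python
def aic(n_params, max_log_likelihood):
    """Akaike Information Criterion: lower is better."""
    return 2 * n_params - 2 * max_log_likelihood

# Hypothetical fits: the new-particle model fits slightly better,
# but pays a penalty for its extra parameter.
aic_new_particle = aic(n_params=1, max_log_likelihood=-10.2)     # 22.4
aic_no_new_particle = aic(n_params=0, max_log_likelihood=-11.0)  # 22.0
print(aic_new_particle, aic_no_new_particle)  # the simpler model wins here
```

The hard part, of course, is deciding what number to put in for n_params in the first place.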

Here’s the question, though: how do I know that Hossenfelder didn’t start out with more particles, and got rid of them to get a better fit? If she did, she had more parameters than I did. She just fit them away.

The problem here is closely related to one called the look-elsewhere effect. Scientists don’t publish everything they try. An unscrupulous scientist can do a bunch of different tests until one of them randomly works, and just publish that one, making the result look meaningful when really it was just random chance. Even if no individual scientist is unscrupulous, a community can do the same thing: many scientists testing many different models, until one accidentally appears to work.
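You can see how quickly this happens with a toy simulation (a sketch, assuming twenty independent tests and the usual 5% significance threshold):

```python
import numpy as np

rng = np.random.default_rng(0)
n_communities, n_tests = 10_000, 20

# The null hypothesis is true everywhere, so every p-value is uniform on [0, 1].
p_values = rng.uniform(size=(n_communities, n_tests))

# Fraction of "communities" where at least one of the 20 tests looks significant.
print((p_values < 0.05).any(axis=1).mean())  # about 0.64, i.e. 1 - 0.95**20
```

With twenty tries, a meaningless “discovery” shows up roughly two times out of three.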

As a scientist, you mostly know if your motivations are genuine. You know if you actually tried a bunch of different models or had good reasons from the start to pick the one you did. As someone judging other scientists, you often don’t have that luxury. Sometimes you can look at prior publications and see all the other attempts someone made. Sometimes they’ll even tell you explicitly what parameters they used and how they fit them. But sometimes, someone will swear up and down that their model is just the most natural, principled choice they could have made, and they never considered anything else. When that happens, how do we guard against the look-elsewhere effect?

The normal way to deal with the look-elsewhere effect is to consider, not just whatever tests the scientist claims to have done, but all tests they could reasonably have done. You need to count all the parameters, not just the ones they say they varied.

This works in some fields. If you have an idea of what’s reasonable and what’s not, you have a relatively manageable list of things to look at. You can come up with clear rules for which theories are simpler than others, and people will agree on them.

Physics doesn’t have it so easy. We don’t have any pre-set rules for what kind of model is “reasonable”. If we want to parametrize every “reasonable” model, the best we can do is use what are called Effective Field Theories, theories which try to describe every possible type of new physics in terms of its effect on the particles we already know. Even there, though, we need assumptions. The most popular effective field theory, called SMEFT, assumes the forces of the Standard Model keep their known symmetries. You get a different model if you relax that assumption, and even that model isn’t the most general: for example, it still keeps relativity intact. Try to make the most general model possible, and you end up waist-deep in parameter soup.
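Schematically, an effective field theory like SMEFT organizes all of that ignorance as a series of correction terms added to the Standard Model, each suppressed by powers of a high energy scale Λ where the new physics is supposed to live:

\[ \mathcal{L}_{\text{EFT}} = \mathcal{L}_{\text{SM}} + \sum_{d > 4} \sum_i \frac{c_i^{(d)}}{\Lambda^{\,d-4}}\, \mathcal{O}_i^{(d)}, \]

where each operator \( \mathcal{O}_i^{(d)} \) is built out of the known particles and each coefficient \( c_i^{(d)} \) is a free parameter. Allow more and more operators and the number of those parameters explodes: that’s the parameter soup.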

Subjectivity is a dirty word in science…but as far as I can tell it’s the only way out of this. We can try to count parameters when we can, and use statistical tools…but at the end of the day, we still need to make choices. We need to judge what counts as an extra parameter and what doesn’t, which possible models to compare to and which to ignore. That’s going to depend on our scientific culture, on fashion and aesthetics; there just isn’t a way around that. The best we can do is own up to our assumptions, and be ready to change them when we need to.

How the Higgs Is, and Is Not, Like an Eel

In the past, what did we know about eel reproduction? What do we know now?

The answer to both questions is, surprisingly little! For those who don’t know the story, I recommend this New Yorker article. Eels turn out to have a quite complicated life cycle, and can only reproduce in the very last stage. Different kinds of eels from all over Europe and the Americas spawn in just one place: the Sargasso Sea. But while researchers have been able to find newborn eels in those waters, and more recently track a few mature adults on their migration back, no-one has yet observed an eel in the act. Biologists may be able to infer quite a bit, but with no direct evidence yet the truth may be even more surprising than they expect. The details of eel reproduction are an ongoing mystery, the “eel question” one of the field’s most enduring.

But of course this isn’t an eel blog. I’m here to answer a different question.

In the past, what did we know about the Higgs boson? What do we know now?

Ask some physicists, and they’ll say that even before the LHC everyone knew the Higgs existed. While this isn’t quite true, it is certainly true that something like the Higgs boson had to exist. Observations of other particles, the W and Z bosons in particular, gave good evidence for some kind of “Higgs mechanism”, that gives other particles mass in a “Higgs-like-way”. A Higgs boson was in some sense the simplest option, but there could have been more than one, or a different sort of process instead. Some of these alternatives may have been sensible, others as silly as believing that eels come from horses’ tails. Until 2012, when the Higgs boson was observed, we really didn’t know.

We also didn’t know one other piece of information: the Higgs boson’s mass. That tells us, among other things, how much energy we need to make one. Physicists were pretty sure the LHC was capable of producing a Higgs boson, but they weren’t sure where or how they’d find it, or how much energy would ultimately be involved.

Now thanks to the LHC, we know the mass of the Higgs boson, and we can rule out some of the “alternative” theories. But there’s still quite a bit we haven’t observed. In particular, we haven’t observed many of the Higgs boson’s couplings.

The couplings of a quantum field are how it interacts, both with other quantum fields and with itself. In the case of the Higgs, interacting with other particles gives those particles mass, while interacting with itself is how it itself gains mass. Since we know the masses of these particles, we can infer what these couplings should be, at least for the simplest model. But, like the eels, the truth may yet surprise us. Nothing guarantees that the simplest model is the right one: what we call simplicity is a judgement based on aesthetics, on how we happen to write models down. Nature may well choose differently. All we can honestly do is parametrize our ignorance.
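In the simplest picture (the plain Standard Model Higgs), those couplings are fixed entirely by the masses. Schematically, and glossing over conventions,

\[ y_f = \frac{\sqrt{2}\, m_f}{v}, \qquad \lambda = \frac{m_H^2}{2 v^2}, \qquad v \approx 246\ \text{GeV}, \]

where \( y_f \) is the coupling of the Higgs to a particle of mass \( m_f \) and \( \lambda \) is the Higgs self-coupling. Measuring a coupling that doesn’t follow this pattern would be a sign that the simplest model is wrong.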

In the case of the eels, each failure to observe their reproduction deepens the mystery. What are they doing that is so elusive, so impossible to discover? In this, eels are different from the Higgs boson. We know why we haven’t observed the Higgs boson coupling to itself, at least according to our simplest models: we’d need a higher-energy collider, more powerful than the LHC, to see it. That’s an expensive proposition, much more expensive than using satellites to follow eels around the ocean. Because our failure to observe the Higgs self-coupling is itself no mystery, our simplest models could still be correct: as theorists, we probably have it easier than the biologists. But if we want to verify our models in the real world, we have it much harder.

Bottomless Science

There’s an attitude I keep seeing among physics crackpots. It goes a little something like this:

“Once upon a time, physics had rules. You couldn’t just wave your hands and write down math, you had to explain the world with real physical things.”

What those “real physical things” were varies. Some miss the days when we explained things mechanically, particles like little round spheres clacking against each other. Some want to bring back absolutes: an absolute space, an absolute time, an absolute determinism. Some don’t like the proliferation of new particles, and yearn for the days when everything was just electrons, protons, and neutrons.

In each case, there’s a sense that physicists “cheated”. That, faced with something they couldn’t actually explain, they made up new types of things (fields, relativity, quantum mechanics, antimatter…) instead. That way they could pretend to understand the world, while giving up on their real job, explaining it “the right way”.

I get where this attitude comes from. It does make a certain amount of sense…for other fields.

As an economist, you can propose whatever mathematical models you want, but at the end of the day they have to boil down to actions taken by people. An economist who proposed some sort of “dark money” that snuck into the economy without any human intervention would get laughed at. Similarly, as a biologist or a chemist, you ultimately need a description that makes sense in terms of atoms and molecules. Your description doesn’t actually need to be in terms of atoms and molecules, and often it can’t be: you’re concerned with a different level of explanation. But it should be possible in terms of atoms and molecules, and that puts some constraints on what you can propose.

Why shouldn’t physics have similar constraints?

Suppose you had a mandatory bottom level like this. Maybe everything boils down to ball bearings, for example. What happens when you study the ball bearings?

Your ball bearings have to have some properties: their shape, their size, their weight. Where do those properties come from? What explains them? Who studies them?

Any properties your ball bearings have can be studied, or explained, by physics. That’s physics’s job: to study the fundamental properties of matter. Any “bottom level” is just as fit a subject for physics as anything else, and you can’t explain it using itself. You end up needing another level of explanation.

Maybe you’re objecting here that your favorite ball bearings aren’t up for debate: they’re self-evident, demanded by the laws of mathematics or philosophy.

Here, for lack of space, I’ll only say that mathematics and philosophy don’t work that way. Mathematics can tell you whether you’ve described the world consistently, whether the conclusions you draw from your assumptions actually follow. Philosophy can see if you’re asking the right questions, if you really know what you think you know. Both have lessons for modern physics, and you can draw valid criticisms from either. But neither one gives you a single clear way the world must be. Not since the days of Descartes and Kant have people been that naive.

Because of this, physics is doing something a bit different from economics and biology. Each field wants to make models, wants to describe its observations. But in physics, ultimately, those models are all we have. We don’t have a “bottom level”, a backstop where everything has to make sense. That doesn’t mean we can just make stuff up, and whenever possible we understand the world in terms of physics we’ve already discovered. But when we can’t, all bets are off.

4gravitons, Spinning Up

I had a new paper out last week, with Michèle Levi and Andrew McLeod. But to explain it, I’ll need to clarify something about our last paper.

Two weeks ago, I told you that Andrew and Michèle and I had written a paper, predicting what gravitational wave telescopes like LIGO see when black holes collide. You may remember that LIGO doesn’t just see colliding black holes: it sees colliding neutron stars too. So why didn’t we predict what happens when neutron stars collide?

Actually, we did. Our calculation doesn’t just apply to black holes. It applies to neutron stars too. And not just neutron stars: it applies to anything of roughly the right size and shape. Black holes, neutron stars, very large grapefruits…

LIGO’s next big discovery

That’s the magic of Effective Field Theory, the “zoom lens” of particle physics. Zoom out far enough, and any big, round object starts looking like a particle. Black holes, neutron stars, grapefruits, we can describe them all using the same math.

Ok, so we can describe both black holes and neutron stars. Can we tell the difference between them?

In our last calculation, no. In this one, yes!

Effective Field Theory isn’t just a zoom lens, it’s a controlled approximation. That means that when we “zoom out” we don’t just throw out anything “too small to see”. Instead, we approximate it, estimating how big of an effect it can have. Depending on how precise we want to be, we can include more and more of these approximated effects. If our estimates are good, we’ll include everything that matters, and get a good approximation for what we’re trying to observe.
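Schematically, a controlled approximation like this looks like a series in some small ratio, for example the size R of the objects compared to the distance r between them:

\[ \text{prediction} \;\approx\; c_0 + c_1\left(\frac{R}{r}\right) + c_2\left(\frac{R}{r}\right)^2 + \cdots \]

Each successive term is a smaller correction, and “higher precision” means keeping more terms. (This is only an illustrative sketch: the real expansion in these calculations is organized in powers of things like velocity, Newton’s constant, and spin.)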

At the precision of our last calculation, a black hole and a neutron star still look exactly the same. Our new calculation aims for a bit higher precision though. (For the experts: we’re at a higher order in spin.) The higher precision means that we can actually see the difference: our result changes for two colliding black holes versus two colliding grapefruits.

So does that mean I can tell you what happens when two neutron stars collide, according to our calculation? Actually, no. That’s not because we screwed up the calculation: it’s because some of the properties of neutron stars are unknown.

The Effective Field Theory of neutron stars has what we call “free parameters”, unknown variables. People have tried to estimate some of these (called “Love numbers” after the mathematician A. E. H. Love), but they depend on the details of how neutron stars work: what stuff they contain, how that stuff is shaped, and how it can move. To find them out, we probably can’t just calculate: we’ll have to measure, observe an actual neutron star collision and see what the numbers actually are.
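Very roughly, and glossing over conventions, the effective description looks like a point particle’s action plus finite-size correction terms, something like

\[ S \;\sim\; -m \int d\tau \;+\; c_E \int d\tau\, E_{\mu\nu} E^{\mu\nu} \;+\; \cdots, \]

where \( E_{\mu\nu} \) is the tidal part of the gravitational field along the object’s path and the coefficient \( c_E \) is one of those free parameters, the kind tied to the Love numbers. For a neutron star, its value depends on exactly the internal details mentioned above.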

That’s one of the purposes of gravitational wave telescopes. It’s not (as far as I know) something LIGO can measure. But future telescopes, with more precision, should be able to. By watching two colliding neutron stars and comparing to a high-precision calculation, physicists will better understand what those neutron stars are made of. In order to do that, they will need someone to do that high-precision calculation. And that’s why people like me are involved.

4gravitons Exchanges a Graviton

I had a new paper up last Friday with Michèle Levi and Andrew McLeod, on a topic I hadn’t worked on before: colliding black holes.

I am an “amplitudeologist”. I work on particle physics calculations, computing “scattering amplitudes” to find the probability that fundamental particles bounce off each other. This sounds like the farthest thing possible from black holes. Nevertheless, the two are tightly linked, through the magic of something called Effective Field Theory.

Effective Field Theory is a kind of “zoom knob” for particle physics. You “zoom out” to some chosen scale, and write down a theory that describes physics at that scale. Your theory won’t be a complete description: you’re ignoring everything that’s “too small to see”. It will, however, be an effective description: one that, at the scale you’re interested in, is effectively true.

Particle physicists usually use Effective Field Theory to go between different theories of particle physics, to zoom out from strings to quarks to protons and neutrons. But you can zoom out even further, all the way out to astronomical distances. Zoom out far enough, and even something as massive as a black hole looks like just another particle.

Just click the “zoom X10” button fifteen times, and you’re there!

In this picture, the force of gravity between black holes looks like particles (specifically, gravitons) going back and forth. With this picture, physicists can calculate what happens when two black holes collide with each other, making predictions that can be checked with new gravitational wave telescopes like LIGO.
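One concrete check of this picture: the very simplest diagram, a single graviton passing between two heavy, slow-moving objects, reproduces Newton’s law of gravity,

\[ V(r) = -\frac{G\, m_1 m_2}{r}, \]

and the more complicated diagrams (more gravitons, more loops) supply the relativistic corrections that gravitational wave detectors are sensitive to.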

Researchers have pushed this technique quite far. As the calculations get more and more precise (more and more “loops”), they have gotten more and more challenging. This is particularly true when the black holes are spinning, an extra wrinkle in the calculation that adds a surprising amount of complexity.

That’s where I came in. I can’t compete with the experts on black holes, but I certainly know a thing or two about complicated particle physics calculations. Amplitudeologists, like Andrew McLeod and me, have a grab-bag of tricks that make these kinds of calculations a lot easier. With Michèle Levi’s expertise working with spinning black holes in Effective Field Theory, we were able to combine our knowledge to push beyond the state of the art, to a new level of precision.

This project has been quite exciting for me, for a number of reasons. For one, it’s my first time working with gravitons: despite this blog’s name, I’d never published a paper on gravity before. For another, as my brother quipped when he heard about it, this is by far the most “applied” paper I’ve ever written. I mostly work with a theory called N=4 super Yang-Mills, a toy model we use to develop new techniques. This paper isn’t a toy model: the calculation we did should describe black holes out there in the sky, in the real world. There’s a decent chance someone will use this calculation to compare with actual data, from LIGO or a future telescope. That, in particular, is an absurdly exciting prospect.

Because this was such an applied calculation, it was an opportunity to explore the more applied part of my own field. We ended up using well-known techniques from that corner, but I look forward to doing something more inventive in future.

What I Was Not Saying in My Last Post

Science communication is a gradual process. Anything we say is incomplete, prone to cause misunderstanding. Luckily, we can keep talking, give a new explanation that corrects those misunderstandings. This of course will lead to new misunderstandings. We then explain again, and so on. It sounds fruitless, but in practice our audience nevertheless gets closer and closer to the truth.

Last week, I tried to explain physicists’ notion of a fundamental particle. In particular, I wanted to explain what these particles aren’t: tiny, indestructible spheres, like Democritus imagined. Instead, I emphasized the idea of fields, interacting and exchanging energy, with particles as just the tip of the field iceberg.

I’ve given this kind of explanation before. And when I do, there are two things people often misunderstand. These correspond to two topics which use very similar language, but talk about different things. So this week, I thought I’d get ahead of the game and correct those misunderstandings.

The first misunderstanding: None of that post was quantum.

If you’ve heard physicists explain quantum mechanics, you’ve probably heard about wave-particle duality. Things we thought were waves, like light, also behave like particles; things we thought were particles, like electrons, also behave like waves.

If that’s on your mind, and you see me say particles don’t exist, maybe you think I mean waves exist instead. Maybe when I say “fields”, you think I’m talking about waves. Maybe you think I’m choosing one side of the duality, saying that waves exist and particles don’t.

To be 100% clear: I am not saying that.

Particles and waves, in quantum physics, are both manifestations of fields. Is your field just at one specific point? Then it’s a particle. Is it spread out, with a fixed wavelength and frequency? Then it’s a wave. These are the two concepts connected by wave-particle duality, where the same object can behave differently depending on what you measure. And both of them, to be clear, come from fields. Neither is the kind of thing Democritus imagined.
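As a rough sketch of the distinction: a field excitation concentrated around a single point behaves like a particle there, while a configuration like a plane wave,

\[ \phi(x, t) \sim e^{\,i(kx - \omega t)}, \]

spread out everywhere with a definite wavelength 2π/k and frequency ω/2π, is what we mean by a wave. Both are things one and the same quantum field can do.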

The second misunderstanding: This isn’t about on-shell vs. off-shell.

Some of you have seen some more “advanced” science popularization. In particular, you might have listened to Nima Arkani-Hamed, of amplituhedron fame, talk about his perspective on particle physics. Nima thinks we need to reformulate particle physics, as much as possible, “on-shell”. “On-shell” means that particles obey their equations of motion; normally, quantum calculations involve “off-shell” particles that violate those equations.

To again be clear: I’m not arguing with Nima here.

Nima (and other people in our field) will sometimes talk about on-shell vs off-shell as if it were about particles vs. fields. Normal physicists will write down a general field and let it be off-shell; we try to do calculations with particles that are on-shell. But once again, on-shell doesn’t mean Democritus-style. We still don’t know what a fully on-shell picture of physics will look like. Chances are it won’t look like the picture of sloshing, omnipresent fields we started with, at least not exactly. But it won’t bring back indivisible, unchangeable atoms. Those are gone, and we have no reason to bring them back.