Category Archives: General QFT

I Ain’t Afraid of No-Ghost Theorems

In honor of Halloween this week, let me say a bit about the spookiest term in physics: ghosts.

In particle physics, we talk about the universe in terms of quantum fields. There is an electron field for electrons, a gluon field for gluons, a Higgs field for Higgs bosons. The simplest fields, for the simplest particles, can be described in terms of just a single number at each point in space and time, a value describing how strong the field is. More complicated fields require more numbers.

Most of the fundamental forces have what we call vector fields. They’re called this because they are often described with vectors, lists of numbers that identify a direction in space and time. But these vectors actually contain too many numbers.

These extra numbers have to be tidied up in some way in order to describe vector fields in the real world, like the electromagnetic field or the gluon field of the strong nuclear force. There are a number of tricks, but the nicest is usually to add some extra particles called ghosts. Ghosts are designed to cancel out the extra numbers in a vector, leaving the right description for a vector field. They’re set up mathematically such that they can never be observed, they’re just a mathematical trick.

Mathematical tricks aren’t all that spooky (unless you’re scared of mathematics itself, anyway). But in physics, ghosts can take on a spookier role as well.

In order to do their job cancelling those numbers, ghosts need to function as a kind of opposite to a normal particle, a sort of undead particle. Normal particles have kinetic energy: as they go faster and faster, they have more and more energy. Said another way, it takes more and more energy to make them go faster. Ghosts have negative kinetic energy: the faster they go, the less energy they have.
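In equations, the difference is a single sign. As a rough sketch (with the field names \phi for a normal field and \chi for a ghost chosen just for illustration, and the overall signs depending on conventions), the kinetic term of a ghost enters the theory flipped:

\mathcal{L}_{\text{normal}} = +\tfrac{1}{2}\,\partial_\mu \phi\, \partial^\mu \phi, \qquad \mathcal{L}_{\text{ghost}} = -\tfrac{1}{2}\,\partial_\mu \chi\, \partial^\mu \chi

That minus sign is the whole story: it’s what makes the energy of motion negative.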

If ghosts are just a mathematical trick, that’s fine, they’ll do their job and cancel out what they’re supposed to. But sometimes, physicists accidentally write down a theory where the ghosts aren’t just a trick cancelling something out, but real particles you could detect, without anything to hide them away.

In a theory where ghosts really exist, the universe stops making sense. The universe defaults to the lowest energy it can reach. If making a ghost particle go faster reduces its energy, then the universe will make ghost particles go faster and faster, and make more and more ghost particles, until everything is jam-packed with super-speedy ghosts unto infinity, never-ending because it’s always possible to reduce the energy by adding more ghosts.

The absence of ghosts, then, is a requirement for a sensible theory. People prove theorems showing that their new ideas don’t create ghosts. And if your theory does start seeing ghosts…well, that’s the spookiest omen of all: an omen that your theory is wrong.

Transforming Particles Are Probably Here to Stay

It can be tempting to imagine the world in terms of lego-like building-blocks. Atoms are protons, neutrons, and electrons stuck together, and protons and neutrons are in turn made of stuck-together quarks. And while atoms, despite the name, aren’t indivisible, you might think that if you look small enough you’ll find indivisible, unchanging pieces, the smallest building-blocks of reality.

Part of that is true. We might, at some point, find the smallest pieces, the things everything else is made of. (In a sense, it’s quite likely we’ve already found them!) But those pieces don’t behave like lego blocks. They aren’t indivisible and unchanging.

Instead, particles, even the most fundamental particles, transform! The most familiar example is beta decay, a radioactive process where a neutron turns into a proton, emitting an electron and a neutrino. This process can be explained in terms of more fundamental particles: the neutron is made of three quarks, and one of those quarks transforms from a “down quark” to an “up quark”. But the explanation, as far as we can tell, doesn’t go any deeper. Quarks aren’t unchanging, they transform.

Beta decay! Ignore the W, which is important but not for this post.

There’s a suggestion I keep hearing, both from curious amateurs and from dedicated crackpots: why doesn’t this mean that quarks have parts? If a down quark can turn into an up quark, an electron, and a neutrino, then why doesn’t that mean that a down quark contains an up quark, an electron, and a neutrino?

The simplest reason is that this isn’t the only way a quark transforms. You can also have beta-plus decay, where an up quark transforms into a down quark, emitting a neutrino and the positively charged anti-particle of the electron, called a positron.

Also, ignore the directions of the arrows, that’s weird particle physics notation that doesn’t matter here.

So to make your idea work, you’d somehow need each down quark to contain an up quark plus some other particles, and each up quark to contain a down quark plus some other particles.

Can you figure out some complicated scheme that works like that? Maybe. But there’s a deeper reason why this is the wrong path.

Transforming particles are part of a broader phenomenon, called particle production. Reactions in particle physics can produce new particles that weren’t there before. This wasn’t part of the earliest theories of quantum mechanics that described one electron at a time. But if you want to consider the quantum properties of not just electrons, but the electric field as well, then you need a more complete theory, called a quantum field theory. And in those theories, you can produce new particles. It’s as simple as turning on the lights: from a wiggling electron, you make light, which in a fully quantum theory is made up of photons. Those photons weren’t “part of” the electron to start with, they are produced by its motion.

If you want to avoid transforming particles, to describe everything in terms of lego-like building-blocks, then you want to avoid particle production altogether. Can you do this in a quantum field theory?

Actually, yes! But your theory won’t describe the whole of the real world.

In physics, we have examples of theories that don’t have particle production. These example theories have a property called integrability: they are theories we can “solve”, doing calculations that aren’t possible in ordinary theories. The name comes from the fact that the oldest such theories, in classical mechanics, were solved using integrals.

Normal particle physics theories have conserved charges. Beta decay conserves electric charge: you start out with a neutral particle, and end up with one particle with positive charge and another with negative charge. It also conserves other things, like “electron-number” (the electron has electron-number one, while the anti-neutrino that comes out with it has electron-number minus one), energy, and momentum.
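As a worked example of that bookkeeping, here is beta decay at the level of quarks, with electric charge and electron-number tallied on each side (writing the neutral outgoing particle as an anti-neutrino, \bar{\nu}_e, which carries electron-number minus one):

d \to u + e^- + \bar{\nu}_e

Electric charge: -\tfrac{1}{3} = +\tfrac{2}{3} + (-1) + 0. Electron-number: 0 = 0 + 1 + (-1). Both ledgers balance, so the decay is allowed.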

Integrable theories have those charges too, but they have more. In fact, they have an infinite number of conserved charges. As a result, you can show that in these theories it is impossible to produce new particles. It’s as if each particle’s existence is its own kind of conserved charge, one that can never be created or destroyed, so that each collision just rearranges the particles, never makes new ones.

But while we can write down these theories, we know they can’t describe the whole of the real world. In an integrable theory, when you build things up from the fundamental building-blocks, their energies follow a pattern. Compare the energies of a bunch of different combinations, and you find that the spacings between them show a characteristic kind of statistical behavior called a Poisson distribution.

Look at the distribution of energy levels in the nuclei of atoms, and you’ll find a very different kind of behavior. It’s called a Wigner-Dyson distribution, and it indicates the opposite of integrability: chaos. Chaos is behavior that can’t be “solved” like integrable theories can, behavior that has to be approached by simulations and approximations.
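For the statistically curious: the comparison is usually made using the spacing s between adjacent energy levels, rescaled so the average spacing is one. The Poisson form, and the Wigner surmise that approximates the Wigner-Dyson behavior, are

P_{\text{Poisson}}(s) = e^{-s}, \qquad P_{\text{Wigner}}(s) = \frac{\pi s}{2}\, e^{-\pi s^2/4}

The key difference: the Poisson distribution is biggest at zero spacing, letting levels bunch up, while the Wigner surmise vanishes there, as if the levels repel each other.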

So if you really want there to be unchanging building-blocks, if you think that’s really essential? Then you should probably start looking at integrable theories. But I wouldn’t hold my breath if I were you: the real world seems pretty clearly chaotic, not integrable. And probably, that means particle production is here to stay.

At Quanta This Week, With a Piece on Vacuum Decay

I have a short piece at Quanta Magazine this week, about a physics-y end of the world as we know it called vacuum decay.

For science-minded folks who want to learn a bit more: I have a sentence in the article mentioning other uncertainties. In case you’re curious what those uncertainties are:

Gamma (\gamma) here is the decay rate; its inverse gives the time it takes for a cubic gigaparsec of space to experience vacuum decay. The three uncertainties are experimental: the uncertainties in our current knowledge of the Higgs mass, the top quark mass, and the strength of the strong force.

Occasionally, you see futurology-types mention “uncertainties in the exponent” to argue that some prediction (say, how long it will take till we have human-level AI) is so uncertain that estimates barely even make sense: it might be 10 years, or 1000 years. I find it fun that for vacuum decay, because of that \log_{10}, there is actually uncertainty in the exponent! Vacuum decay might happen in as little as 10^{411} years or as much as 10^{1333} years, and that’s the result of an actual, reasonable calculation!

For physicist readers, I should mention that I got a lot out of reading some slides from a 2016 talk by Matthew Schwartz. Not many details of the calculation made it into the piece, but the slides were helpful in dispelling a few misconceptions that could have gotten into it. There’s an instinct to think about the situation in terms of the energy, to think about how difficult it is for quantum uncertainty to get you over the energy barrier to the next vacuum. There are methods that sort of look like that, if you squint, but that’s not really how you do the calculation, and there end up being a lot of interesting subtleties in the actual story. There were also a few numbers that it was tempting to put on the plots in the article, but that turn out to be gauge-dependent!

Another thing I learned from those slides is how far you can actually take the uncertainties mentioned above. The higher-energy Higgs vacuum is pretty dang high-energy, to the point where quantum gravity effects could potentially matter. And at that point, all bets are off. The calculation, with all those nice uncertainties, is a calculation within the framework of the Standard Model. All of the things we don’t yet know about high-energy physics, especially quantum gravity, could freely mess with this. The universe as we know it could still be long-lived, but it could be a lot shorter-lived as well. That in turn makes this calculation more of a practice-ground for honing techniques than an actual estimate you can rely on.

Rube Goldberg Reality

Quantum mechanics is famously unintuitive, but the most intuitive way to think about it is probably the path integral. In the path integral formulation, to find the chance a particle goes from point A to point B, you look at every path you can draw from one place to another. For each path you calculate a complex number, a “weight” for that path. Most of these weights cancel out, leaving the path the particle would travel under classical physics with the biggest contribution. They don’t perfectly cancel out, though, so the other paths still matter. In the end, the way the particle behaves depends on all of these possible paths.
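For those who want to see the formula behind the story, here is the schematic version: the amplitude for a particle to go from point A to point B adds up a complex weight for every path x(t), with the classical action S in the exponent,

\mathcal{A}(A \to B) = \int \mathcal{D}[x(t)]\, e^{i S[x(t)]/\hbar}

Paths near the classical one have nearly the same action, so their weights reinforce each other instead of cancelling, which is why the classical path gives the biggest contribution.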

If you’ve heard this story, it might make you feel like you have some intuition for how quantum physics works. With each path getting less likely as it strays from the classical, you might have a picture of a nice orderly set of options, with physicists able to pick out the chance of any given thing happening based on the path.

In a world with just one particle swimming along, this might not be too hard. But our world doesn’t run on the quantum mechanics of individual particles. It runs on quantum field theory. And there, things stop being so intuitive.

First, the paths aren’t “paths”. For particles, you can imagine something in one place, traveling along. But particles are just ripples in quantum fields, which can grow, shrink, or change. For quantum fields instead of quantum particles, the path integral isn’t a sum over paths of a single particle, but a sum over paths traveled by fields. The fields start out in some configuration (which may look like a particle at point A) and then end up in a different configuration (which may look like a particle at point B). You have to add up weights, not for every path a single particle could travel, but for every different way the fields could have behaved in between configuration A and configuration B.

More importantly, though, there is more than one field! Maybe you’ve heard about electric and magnetic fields shifting back and forth in a wave of light, one generating the other. Other fields interact like this, including the fields behind things you might think of as particles like electrons. For any two fields that can affect each other, a disturbance in one can lead to a disturbance in the other. An electromagnetic field can disturb the electron field, which can disturb the Higgs field, and so on.

The path integral formulation tells you that all of these paths matter. Not just the path of one particle or one field chugging along by itself, but the path where the electromagnetic field kicks off a Higgs field disturbance down the line, only to become a disturbance in the electromagnetic field again. Reality is all of these paths at once, a Rube Goldberg machine of a universe.

In such a universe, intuition is a fool’s errand. Mathematics fares a bit better, but is still difficult. While physicists sometimes have shortcuts, most of the time these calculations have to be done piece by piece, breaking the paths down into simpler stories that approximate the true answer.

In the path integral formulation of quantum physics, everything happens at once. And “everything” may be quite a bit larger than you expect.

Gravity-Defying Theories

Universal gravitation was arguably Newton’s greatest discovery. Newton realized that the same laws could describe the orbits of the planets and the fall of objects on Earth, that bodies like the Moon can be fully understood only if you take into account both the Earth’s and the Sun’s gravity. In a Newtonian world, every mass attracts every other mass in a tiny, but detectable way.

Einstein, in turn, explained why. In Einstein’s general theory of relativity, gravity comes from the shape of space and time. Mass attracts mass, but energy affects gravity as well. Anything that can be measured has a gravitational effect, because the shape of space and time is nothing more than the rules by which we measure distances and times. So gravitation really is universal, and has to be universal.

…except when it isn’t.

It turns out, physicists can write down theories with some odd properties. Including theories where things are, in a certain sense, immune to gravity.

The story started with two mathematicians, Shiing-Shen Chern and Jim Simons. Chern and Simons weren’t trying to say anything in particular about physics. Instead, they cared about classifying different types of mathematical space. They found a formula that, when added up over one of these spaces, counted some interesting properties of that space. A bit more specifically, it told them about the space’s topology: rough details, like the number of holes in a donut, that stay the same even if the space is stretched or compressed. Their formula was called the Chern-Simons Form.

The physicist Albert Schwarz saw this Chern-Simons Form, and realized it could be interpreted another way. He looked at it as a formula describing a quantum field, like the electromagnetic field, specifying how the field’s energy varied across space and time. He called the theory describing the field Chern-Simons Theory, and it was one of the first examples of what would come to be known as topological quantum field theories.
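For the curious, the action is compact enough to write down. In the physicists’ normalization, for a gauge field A on a three-dimensional space M, with a whole-number “level” k,

S_{CS} = \frac{k}{4\pi} \int_M \mathrm{Tr}\left( A \wedge dA + \tfrac{2}{3}\, A \wedge A \wedge A \right)

Notice what’s missing: no notion of distance, no metric, appears anywhere in the formula. That absence is the seed of the gravity-defying behavior described next.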

In a topological field theory, every question you might want to ask can be answered in a topological way. Write down the chance you observe the fields at particular strengths in particular places, and you’ll find that the answer you get only depends on the topology of the space the fields occupy. The answers are the same if the space is stretched or squished together. That means that nothing you ask depends on the details of how you measure things, that nothing depends on the detailed shape of space and time. Your theory is, in a certain sense, independent of gravity.

Others discovered more theories of this kind. Edward Witten found theories that at first looked like they depended on gravity, but where the gravity secretly “cancels out”, making the theory topological again. It turned out that there were many ways to “twist” string theory to get theories of this kind.

Our world is for the most part not described by a topological theory: gravity matters! (Though topological theories can be a good approximation for describing certain materials.) These theories are most useful, though, in how they allow physicists and mathematicians to work together. Physicists don’t have a fully mathematically rigorous way of defining most of their theories, just a series of approximations and an overall picture that’s supposed to tie them together. For a topological theory, though, that overall picture has a rigorous mathematical meaning: it counts topological properties! As such, topological theories allow mathematicians to prove rigorous results about physical theories. They can take a theory of quantum fields or strings that has a particular property that physicists are curious about, and find a version of that property that they can study in fully mathematically rigorous detail. It’s been a boon both to mathematicians interested in topology, and to physicists who want to know more about their theories.

So while you won’t have antigravity boots any time soon, theories that defy gravity are still useful!

The Quantum Paths Not Traveled

Before this week’s post: a former colleague of mine from CEA Paris-Saclay, Sylvain Ribault, posted a dialogue last week presenting different perspectives on academic publishing. One of the highlights of my brief time at the CEA was getting to chat with Sylvain and others about the future forms academia might take. He showed me a draft of his dialogue a while ago, designed as a way to introduce newcomers to the debate about how, and whether, academics should do peer review. I’ve got a different topic this week so I won’t say much more about it, but I encourage you to take a look!


Matt Strassler has a nice post up about waves and particles. He’s writing to address a common confusion between two concepts that sound very similar. On the one hand, there are the waves of quantum field theory, ripples in fundamental fields the smallest versions of which correspond to particles. (Strassler likes to call them “wavicles”, to emphasize their wavy role.) On the other hand, there are the wavefunctions of quantum mechanics, descriptions of the behavior of one or more interacting particles over time. To distinguish, he points out that wavicles can hurt you, while wavefunctions cannot. Wavicles are the things that collide and light up detectors, one by one; wavefunctions are the math that describes when and how that happens. Many types of wavicles can run into each other one by one, but their interactions can all be described together by a single wavefunction. It’s an important point, well stated.

(I do think he goes a bit too far in saying that the wavefunction is not “an object”, though. That smacks of metaphysics, and I think that’s not worth dabbling in for physicists.)

After reading his post, there’s something that might still confuse you. You’ve probably heard that in quantum mechanics, an electron is both a wave and a particle. Does the “wave” in that saying mean “wavicle”, or “wavefunction”?

A “wave” built out of particles

The gif above shows data from a double-slit experiment, an important type of experiment from the early days of quantum mechanics. These experiments were first conducted before quantum field theory (and thus, before the ideas that Strassler summarizes with “wavicles”). In a double-slit experiment, particles are shot at a screen through two slits. The particles that hit the screen can travel through one slit or the other.

A double-slit experiment, in diagram form

Classically, you would expect particles shot randomly at the screen to form two piles on the other side, one in front of each slit. Instead, they bunch up into a rippling pattern, the same sort of pattern that was used a century earlier to argue that light was a wave. The peaks and troughs of the wave pass through both slits, and either line up or cancel out, leaving the distinctive pattern.

When it was discovered that electrons do this too, it led to the idea that electrons must be waves as well, despite also being particles. That insight led to the concept of the wavefunction. So the “wave” in the saying refers to wavefunctions.

But electrons can hurt you, and as Strassler points out, wavefunctions cannot. So how can the electron be a wavefunction?

To risk a bit of metaphysics myself, I’ll just say: it can’t. An electron can’t “be” a wavefunction.

The saying, that electrons are both particles and waves, is from the early days of quantum mechanics, when people were confused about what it all meant. We’re still confused, but we have some better ways to talk about it.

As a start, it’s worth noticing that, whenever you measure an electron, it’s a particle. Each electron that goes through the slits hits your screen as a particle, a single dot. If you see many electrons at once, you may get the feeling that they look like waves. But every actual electron you measure, every time you’re precise enough to notice, looks like a particle. And for each individual electron, you can extrapolate back the path it took, exactly as if it traveled like a particle the whole way through.

The same is true, though, of light! When you see light, photons enter your eyes, and each one that you see triggers a chemical change in a molecule called a photopigment. The same sort of thing happens in film photography, while in a digital camera an electrical signal gets triggered instead. Light may behave like a wave in some sense, but every time you actually observe it, it looks like a particle.

But while you can model each individual electron, or photon, as a classical particle, you can’t model the distribution of multiple electrons that way.

That’s because in quantum mechanics, the “paths not taken” matter. A single electron will only go through one slit in the double-slit experiment. But the fact that it could have gone through both slits matters, and changes the chance that it goes through each particular path. The possible paths in the wavefunction interfere with each other, the same way different parts of classical waves do.
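You can see this directly in the math. If \psi_1 and \psi_2 are the wavefunction contributions from the two slits, the chance of landing at a given point on the screen is not the sum of the two separate chances. There is an extra interference term, the fingerprint of the path not taken:

P = |\psi_1 + \psi_2|^2 = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}(\psi_1^* \psi_2)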

That role of the paths not taken, of the “what if”, is the heart and soul of quantum mechanics. No matter how you interpret its mysteries, “what if” matters. If you believe in a quantum multiverse, you think every “what if” happens somewhere in that infinity of worlds. If you think all that matters is observations, then “what if” shows the folly of modeling the world as anything else. If you are tempted to try to mend quantum mechanics with faster-than-light signals, then you have to declare one “what if” the true one. And if you want to double-down on determinism and replace quantum mechanics, you need to declare that certain “what if” questions are off-limits.

“What if matters” isn’t the same as a particle traveling every path at once; it’s its own weird thing with its own specific weird consequences. It’s a metaphor, because everything written in words is a metaphor. But it’s a better metaphor than thinking an electron is both a particle and a wave.

The Hidden Higgs

Peter Higgs, the theoretical physicist whose name graces the Higgs boson, died this week.

Peter Higgs, after the Higgs boson discovery was confirmed

This post isn’t an obituary: you can find plenty of those online, and I don’t have anything special to say that others haven’t. Reading the obituaries, you’ll notice they summarize Higgs’s contribution in different ways. Higgs was one of the people who proposed what today is known as the Higgs mechanism, the principle by which most (perhaps all) elementary particles gain their mass. He wasn’t the only one: Robert Brout and François Englert proposed essentially the same idea in a paper that was published two months earlier, in August 1964. Two other teams came up with the idea slightly later than that: Gerald Guralnik, Carl Richard Hagen, and Tom Kibble were published one month after Higgs, while Alexander Migdal and Alexander Polyakov found the idea independently in 1965 but couldn’t get it published till 1966.

Higgs did, however, do something that Brout and Englert didn’t. His paper doesn’t just propose a mechanism, involving a field which gives particles mass. It also proposes a particle one could discover as a result. Read the more detailed obituaries, and you’ll discover that this particle was not in the original paper: Higgs’s paper was rejected at first, and he added the discussion of the particle to make it more interesting.

At this point, I bet some of you are wondering what the big deal was. You’ve heard me say that particles are ripples in quantum fields. So shouldn’t we expect every field to have a particle?

Tell that to the other three Higgs bosons.

Electromagnetism has one type of charge, with two signs: plus, and minus. There are electrons, with negative charge, and their anti-particles, positrons, with positive charge.

Quarks have three types of charge, called colors: red, green, and blue. Each of these also has two “signs”: red and anti-red, green and anti-green, and blue and anti-blue. So for each type of quark (like an up quark), there are six different versions: red, green, and blue, and anti-quarks with anti-red, anti-green, and anti-blue.

Diagram of the colors of quarks

When we talk about quarks, we say that the force under which they are charged, the strong nuclear force, is an “SU(3)” force. The “S” and “U” there are shorthand for mathematical properties that are a bit too complicated to explain here, but the “(3)” is quite simple: it means there are three colors.

The Higgs boson’s primary role is to make the weak nuclear force weak, by making the particles that carry it from place to place massive. (That way, it takes too much energy for them to go anywhere, a feeling I think we can all relate to.) The weak nuclear force is an “SU(2)” force. So there should be two “colors” of particles that interact with the weak nuclear force…which includes Higgs bosons. For each, there should also be an anti-color, just like the quarks had anti-red, anti-green, and anti-blue. So we need two “colors” of Higgs bosons, and two “anti-colors”, for a total of four!
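In slightly more mathematical terms, the Higgs field is a doublet of two complex numbers, and each complex number is built from two real numbers, so the count comes out to four real fields (the labels \phi^+ and \phi^0 are the conventional names for the doublet’s components):

\Phi = \begin{pmatrix} \phi^+ \\ \phi^0 \end{pmatrix}, \qquad \phi^+ = \phi_1 + i \phi_2, \quad \phi^0 = \phi_3 + i \phi_4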

But the Higgs boson discovered at the LHC was a neutral particle. It didn’t have any electric charge, or any color. There was only one, not four. So what happened to the other three Higgs bosons?

The real answer is subtle, one of those physics things that’s tricky to concisely explain. But a partial answer is that they’re indistinguishable from the W and Z bosons.

Normally, the fundamental forces have transverse waves, with two polarizations. Light can wiggle along its path back and forth, or up and down, but it can’t wiggle forward and backward. A fundamental force with massive particles is different, because they can have longitudinal waves: they have an extra direction in which they can wiggle. There are two W bosons (plus and minus) and one Z boson, and they all get one more polarization when they become massive due to the Higgs.

That’s three new ways the W and Z bosons can wiggle. That’s the same number as the number of Higgs bosons that went away, and that’s no coincidence. We physicists like to say that the W and Z bosons “ate” the extra Higgs, which is evocative but may sound mysterious. Instead, you can think of it as the two wiggles being secretly the same, mixing together in a way that makes them impossible to tell apart.

The “count”, of how many wiggles exist, stays the same. You start with four Higgs wiggles, and two wiggles each for the precursors of the W+, W-, and Z bosons, giving ten. You end up with one Higgs wiggle, and three wiggles each for the W+, W-, and Z bosons, which still adds up to ten. But which fields match with which wiggles, and thus which particles we can detect, changes. It takes some thought to look at the whole system and figure out, for each field, what kind of particle you might find.
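The count, as an equation:

4\ (\text{Higgs}) + 2 \times 3\ (\text{massless } W^+, W^-, Z) = 10 = 1\ (\text{Higgs boson}) + 3 \times 3\ (\text{massive } W^+, W^-, Z)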

Higgs did that work. And now, we call it the Higgs boson.

What Are Particles? The Gentle Introduction

On this blog, I write about particle physics for the general public. I try to make things as simple as possible, but I do have to assume some things. In particular, I usually assume you know what particles are!

This time, I won’t do that. I know some people out there don’t know what a particle is, or what particle physicists do. If you’re a person like that, this post is for you! I’m going to give a gentle introduction to what particle physics is all about.

Let’s start with atoms.

Every object and substance around you, everything you can touch or lift or walk on, the water you drink and the air you breathe, all of these are made up of atoms. Some are simple: an iron bar is made of Iron atoms, aluminum foil is mostly Aluminum atoms. Some are made of combinations of atoms into molecules, like water’s famous H2O: each molecule has two Hydrogen atoms and one Oxygen atom. Some are made of more complicated mixtures: air is mostly pairs of Nitrogen atoms, with a healthy amount of pairs of Oxygen, some Carbon Dioxide (CO2), and many other things, while the concrete sidewalks you walk on have Calcium, Silicon, Aluminum, Iron, and Oxygen, all combined in various ways.

There is a dizzying array of different types of atoms, called chemical elements. Most occur in nature, but some are man-made, created by cutting-edge nuclear physics. They can all be organized in the periodic table of elements, which you’ve probably seen on a classroom wall.

The periodic table

The periodic table is called the periodic table because it repeats, periodically. Each element is different, but their properties resemble each other. Oxygen is a gas, Sulfur a yellow powder, Polonium an extremely radioactive metal…but just as you can find H2O, you can make H2S, and even H2Po. The elements get heavier as you go down the table, and more metal-like, but their chemical properties, the kinds of molecules you can make with them, repeat.

Around 1900, physicists started figuring out why the elements repeat. What they discovered is that each atom is made of smaller building-blocks, called sub-atomic particles. (“Sub-atomic” because they’re smaller than atoms!) Each atom has electrons on the outside, and on the inside has a nucleus made of protons and neutrons. Atoms of different elements have different numbers of protons and electrons, which explains their different properties.

Different atoms with different numbers of protons, neutrons, and electrons

Around the same time, other physicists studied electricity, magnetism, and light. These things aren’t made up of atoms, but it was discovered that they are all aspects of the same force, the electromagnetic force. And starting with Einstein, physicists figured out that this force has particles too. A beam of light is made up of another type of sub-atomic particle, called a photon.

For a little while then, it seemed that the universe was beautifully simple. All of matter was made of electrons, protons, and neutrons, while light was made of photons.

(There’s also gravity, of course. That’s more complicated, in this post I’ll leave it out.)

Soon, though, nuclear physicists started noticing stranger things. In the 1930’s, as they tried to understand the physics behind radioactivity and mapped out rays from outer space, they found particles that didn’t fit the recipe. Over the next forty years, theoretical physicists puzzled over their equations, while experimental physicists built machines to slam protons and electrons together, all trying to figure out how they work.

Finally, in the 1970’s, physicists had a theory they thought they could trust. They called this theory the Standard Model. It organized their discoveries, and gave them equations that could predict what future experiments would see.

In the Standard Model, there are two new forces, the weak nuclear force and the strong nuclear force. Just like photons for the electromagnetic force, each of these new forces has a particle. The general word for these particles is bosons, named after Satyendra Nath Bose, a collaborator of Einstein who figured out the right equations for this type of particle. The weak force has bosons called W and Z, while the strong force has bosons called gluons. A final type of boson, called the Higgs boson after a theorist who suggested it, rounds out the picture.

The Standard Model also has new types of matter particles. Neutrinos interact with the weak nuclear force, and are so light and hard to catch that they pass through nearly everything. Quarks are inside protons and neutrons: a proton contains one down quark and two up quarks, while a neutron contains two down quarks and one up quark. The quarks explained all of the other strange particles found in nuclear physics.

Finally, the Standard Model, like the periodic table, repeats. There are three generations of particles. The first, with electrons, up quarks, down quarks, and one type of neutrino, shows up in ordinary matter. The other generations are heavier, and not usually found in nature except in extreme conditions. The second generation has muons (similar to electrons), strange quarks, charm quarks, and a new type of neutrino called a muon-neutrino. The third generation has tauons, bottom quarks, top quarks, and tau-neutrinos.

(You can call these last quarks “truth quarks” and “beauty quarks” instead, if you like.)

Physicists had the equations, but the equations still had some unknowns. They didn’t know how heavy the new particles were, for example. Finding those unknowns took more experiments, over the next forty years. Finally, in 2012, the last unknown was found when a massive machine called the Large Hadron Collider was used to measure the Higgs boson.

The Standard Model

We think that these particles are all elementary particles. Unlike protons and neutrons, which are both made of up quarks and down quarks, we think that the particles of the Standard Model are not made up of anything else, that they really are elementary building-blocks of the universe.

We have the equations, and we’ve found all the unknowns, but there is still more to discover. We haven’t seen everything the Standard Model can do: to see some properties of the particles and check they match, we’d need a new machine, one even bigger than the Large Hadron Collider. We also know that the Standard Model is incomplete. There is at least one new particle, called dark matter, that can’t be any of the known particles. Mysteries involving the neutrinos imply another type of unknown particle. We’re also missing deeper things. There are patterns in the table, like the generations, that we can’t explain.

We don’t know if any one experiment will work, or if any one theory will prove true. So particle physicists keep working, trying to find new tricks and make new discoveries.

Neu-tree-no Detector

I’ve written before about physicists’ ideas for gigantic particle accelerators, proposals for machines far bigger than the Large Hadron Collider or even plans for a Future Circular Collider. The ideas ranged from wacky but not obviously impossible (a particle collider under the ocean) to pure science fiction (a beam of neutrinos that can blow up nukes across the globe).

But what if you don’t want to accelerate particles? What if, instead, you want to detect particles from the depths of space? Can you still propose ridiculously huge things?

Neutrinos are extremely hard to detect. Immune to the strongest forces of nature, they only interact via the weak nuclear force and gravity. The weakness of these forces means they can pass through huge amounts of material without disturbing a single atom. The Sudbury Neutrino Observatory used a tank of 1000 tonnes of water in order to stop enough neutrinos to study them. The IceCube experiment is bigger yet, and getting even bigger: their planned expansion will fill eight cubic kilometers of Antarctic ice with neutrino detectors, letting them measure around a million neutrinos every year.

But if you want to detect the highest-energy neutrinos, you may have to get even bigger than that. With so few of them to study, you need to cover a huge area with antennas to spot a decent number of them.

Or, maybe you can just use trees.

Pictured: a physics experiment?

That’s the proposal of Steven Prohira, a MacArthur Genius Grant winner who works as a professor at the University of Kansas. He suggests that, instead of setting up a giant array of antennas to detect high-energy neutrinos, trees could be used, with a coil of wire around the tree to measure electrical signals. Prohira even suggests that “A forest detector could also motivate the large-scale reforesting of land, to grow a neutrino detector for future generations”.

Despite sounding wacky, tree antennas have actually been used before. Militaries have looked into them as a way to set up antennas in remote locations, and later studies indicate they work surprisingly well. So the idea is not completely impossible, much like the “collider-under-the-sea”.

Like the “collider-under-the-sea”, though, some wackiness still remains. Prohira admits he hasn’t yet done all the work needed to test the idea’s feasibility, and comparing to mature experiments like IceCube makes it clear there is a lot more work to be done. Chatting with neutrino experts, one problem a few of them pointed out is that unlike devices sunk into Antarctic ice, trees are not uniformly spaced, and that might pose a problem if you want to measure neutrinos carefully.

What stands out to me, though, is that those questions are answerable. If the idea sounds promising, physicists can follow up. They can make more careful estimates, or do smaller-scale experiments. They won’t be stuck arguing over interpretations, or just building the full experiment and seeing if it works.

That’s the great benefit of a quantitative picture of the world. We can estimate some things very accurately, with theories that give very precise numbers for how neutrinos behave. Other things we can estimate less accurately, but still can work on: how tall trees are, how widely they are spaced, how much they vary. We have statistical tools and biological data. We can find numbers, and even better, we can know how uncertain we should be about those numbers. Because of that picture, we don’t need to argue fruitlessly about ideas like this. We can work out numbers, and check!

LHC Black Hole Reassurance: The Professional Version

A while back I wrote a post trying to reassure you that the Large Hadron Collider cannot create a black hole that could destroy the Earth. If you’re the kind of person who is worried about this kind of thing, you’ve probably heard a variety of arguments: that it hasn’t happened yet, despite the LHC running for quite some time, that it didn’t happen before the LHC with cosmic rays of comparable energy, and that a black hole that small would quickly decay due to Hawking radiation. I thought it would be nice to give a different sort of argument, a back-of-the-envelope calculation you can try out yourself, showing that even if a black hole was produced using all of the LHC’s energy and fell directly into the center of the Earth, and even if Hawking radiation didn’t exist, it would still take longer than the lifetime of the universe to cause any detectable damage. Modeling the black hole as falling through the Earth and just slurping up everything that falls into its event horizon, it wouldn’t even double in size before the stars burn out.
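If you want to try that back-of-the-envelope estimate yourself, here is a minimal sketch in Python. To be clear, this is my illustration of the same spirit of calculation, not the exact numbers from the original post: the inputs (a black hole made from all ~14 TeV of the LHC’s collision energy, average-Earth density, a rough 10 km/s speed through the interior) are round-number assumptions, and the model is the crudest one possible, a black hole sweeping up a cylinder of matter as wide as its event horizon.

import math

G = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

E_lhc = 14e12 * eV        # assume all ~14 TeV of collision energy forms the black hole
m = E_lhc / c**2          # its mass, via E = mc^2
r_s = 2 * G * m / c**2    # its Schwarzschild radius (event horizon size)

rho = 5500.0              # assumed average density of the Earth, kg/m^3
v = 1e4                   # assumed rough speed through the Earth's interior, m/s

# Mass swept up per second: density x cross-sectional area x speed
rate = rho * math.pi * r_s**2 * v

doubling_time_years = (m / rate) / 3.15e7   # ~3.15e7 seconds per year

print(f"black hole mass:      {m:.1e} kg")
print(f"event horizon radius: {r_s:.1e} m")
print(f"time to double:       {doubling_time_years:.1e} years")

With these assumptions the doubling time comes out around 10^{60} years, dwarfing the 10^{14} or so years it takes the longest-lived stars to burn out, in line with the conclusion above.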

That calculation was extremely simple by physics standards. As it turns out, it was too simple. A friend of mine started thinking harder about the problem, and dug up this paper from 2008: Astrophysical implications of hypothetical stable TeV-scale black holes.

Before the LHC even turned on, the experts were hard at work studying precisely this question. The paper has two authors, Steve Giddings and Michelangelo Mangano. Giddings is an expert on the problem of quantum gravity, while Mangano is an expert on LHC physics, so the two are exactly the dream team you’d ask for to answer this question. Like me, they pretend that black holes don’t decay due to Hawking radiation, and pretend that one falls straight from the LHC to the center of the Earth, for the most pessimistic possible scenario.

Unlike me, but like my friend, they point out that the Earth is not actually a uniform sphere of matter. It’s made up of particles: quarks arranged into nucleons arranged into nuclei arranged into atoms. And a black hole that hits a nucleus will probably not just slurp up an event horizon-sized chunk of the nucleus: it will slurp up the whole nucleus.

This in turn means that the black hole starts out growing much faster. Eventually, it slows down again: once it’s bigger than an atom, it starts gobbling up atoms a few at a time, until it is back to slurping up a cylinder of the Earth’s material as it passes through.

But an atom-sized black hole will grow faster than an LHC-energy-sized black hole. How much faster is estimated in the Giddings and Mangano paper, and it depends on the number of dimensions. For eight dimensions, we’re safe. For fewer, they need new arguments.

Wait a minute, you might ask, aren’t there only four dimensions? Is this some string theory nonsense?

Kind of, yes. In order for the LHC to produce black holes, gravity would need to have a much stronger effect than we expect on subatomic particles. That requires something weird, and the most plausible such weirdness people considered at the time was extra dimensions. With extra dimensions of the right size, the LHC might have produced black holes. It’s that kind of scenario that Giddings and Mangano are checking: they don’t know of a plausible way for black holes to be produced at the LHC if there are just four dimensions.

For fewer than eight dimensions, though, they have a problem: the back-of-the-envelope calculation suggests black holes could actually grow fast enough to cause real damage. Here, they fall back on the other type of argument: if this could happen, would it have happened already? They argue that, if the LHC could produce black holes in this way, then cosmic rays could produce black holes when they hit super-dense astronomical objects, such as white dwarfs and neutron stars. Those black holes would eat up the white dwarfs and neutron stars, in the same way one might be worried they could eat up the Earth. But we can observe that white dwarfs and neutron stars do in fact exist, and typically live much longer than they would if they were constantly being eaten by miniature black holes. So we can conclude that any black holes like this don’t exist, and we’re safe.

If you’ve got a smattering of physics knowledge, I encourage you to read through the paper. They consider a lot of different scenarios, much more than I can summarize in a post. I don’t know if you’ll find it reassuring, since they may not cover whatever you happen to be worried about. But it’s a lot of fun seeing how the experts handle the problem.