
Next Week, Amplitudes 2021!

I calculate things called scattering amplitudes, the building blocks of predictions in particle physics. I’m part of a community of “amplitudeologists” who try to find better ways to compute these things, aiming for more efficiency and deeper understanding. We meet once a year for our big conference, called Amplitudes. And this year, I’m one of the organizers.

This year also happens to be the 100th anniversary of the founding of the Niels Bohr Institute, so we wanted to do something special. We found a group of artists working on a rendering of Niels Bohr. The original idea was to do one of those celebrity holograms, but after the conference went online we decided to make a few short clips instead. I wrote a Bohr-esque script, and we got help from one of Bohr’s descendants to get the voice just-so. Now, you can see the result, as our digital Bohr invites you to the conference.

We’ll be livestreaming the conference on the same YouTube channel, and posting videos of the talks each day. If you’re curious about the latest developments in scattering amplitudes, I encourage you to tune in. And if you’re an amplitudeologist yourself, registration is still open!

Of Cows and Razors

Last week’s post came up on Reddit, where a commenter made a good point. I said that one of the mysteries of neutrinos is that they might not get their mass from the Higgs boson. This is true, but the commenter rightly pointed out that it’s true of other particles too: electrons might not get their mass from the Higgs. We aren’t sure. The lighter quarks might not get their mass from the Higgs either.

When talking physics with the public, we usually say that electrons and quarks all get their mass from the Higgs. That’s how it works in our Standard Model, after all. But even though we’ve found the Higgs boson, we can’t be 100% sure that it functions the way our model says. That’s because there are aspects of the Higgs we haven’t been able to measure directly. We’ve measured how it affects the heaviest quark, the top quark, but measuring its interactions with other particles will require a bigger collider. Until we have those measurements, the possibility remains open that electrons and quarks get their mass another way. It would be a more complicated way: we know the Higgs does a lot of what the model says, so if it deviates in another way we’d have to add more details, maybe even more undiscovered particles. But it’s possible.

If I wanted to defend the idea that neutrinos are special here, I would point out that neutrino masses, unlike electron masses, are not part of the Standard Model. For electrons, we have a clear “default” way for them to get mass, and that default is in a meaningful way simpler than the alternatives. For neutrinos, every alternative is complicated in some fashion: either adding undiscovered particles, or unusual properties. If we were to invoke Occam’s Razor, the principle that we should always choose the simplest explanation, then for electrons and quarks there is a clear winner. Not so for neutrinos.

I’m not actually going to make this argument. That’s because I’m a bit wary of using Occam’s Razor when it comes to questions of fundamental physics. Occam’s Razor is a good principle to use, if you have a good idea of what’s “normal”. In physics, you don’t.

To illustrate, I’ll tell an old joke about cows and trains. Here’s the version from The Curious Incident of the Dog in the Night-Time:

There are three men on a train. One of them is an economist and one of them is a logician and one of them is a mathematician. And they have just crossed the border into Scotland (I don’t know why they are going to Scotland) and they see a brown cow standing in a field from the window of the train (and the cow is standing parallel to the train). And the economist says, ‘Look, the cows in Scotland are brown.’ And the logician says, ‘No. There are cows in Scotland of which at least one is brown.’ And the mathematician says, ‘No. There is at least one cow in Scotland, of which one side appears to be brown.’

One side of this cow appears to be very fluffy.

If we want to be as careful as possible, the mathematician’s answer is best. But we expect not to have to be so careful. Maybe the economist’s answer, that Scottish cows are brown, is too broad. But we could imagine an agronomist who states “There is a breed of cows in Scotland that is brown”. And I suggest we should find that pretty reasonable. Essentially, we’re using Occam’s Razor: if we want to explain seeing a brown half-cow from a train, the simplest explanation would be that it’s a member of a breed of cows that are brown. It would be less simple if the cow were unique, a brown mutant in a breed of black and white cows. It would be even less simple if only one side of the cow were brown, and the other were another color.

When we use Occam’s Razor in this way, we’re drawing from our experience of cows. Most of the cows we meet are members of some breed or other, with similar characteristics. We don’t meet many mutant cows, or half-colored cows, so we think of those options as less simple, and less likely.

But what kind of experience tells us which option is simpler for electrons, or neutrinos?

The Standard Model is a type of theory called a Quantum Field Theory. We have experience with other Quantum Field Theories: we use them to describe materials, metals and fluids and so forth. Still, it seems a bit odd to say that if something is typical of these materials, it should also be typical of the universe. As another physicist in my sub-field, Nima Arkani-Hamed, likes to say, “the universe is not a crappy metal!”

We could also draw on our experience from other theories in physics. This is a bit more productive, but it has other problems. Our other theories are invariably incomplete; that’s why we come up with new theories in the first place. And with so few theories, compared to breeds of cows, it’s unclear that we really have a good basis for experience.

Physicists like to brag that we study the most fundamental laws of nature. Ordinarily, this doesn’t matter as much as we pretend: there’s a lot to discover in the rest of science too, after all. But here, it really makes a difference. Unlike other fields, we don’t know what’s “normal”, so we can’t really tell which theories are “simpler” than others. We can make aesthetic judgements, on the simplicity of the math or the number of fields or the quality of the stories we can tell. If we want to be principled and forgo all of that, then we’re left staring into an abyss, a world of bare observations and parameter soup.

If a physicist looks out a train window, will they say that all the electrons they see get their mass from the Higgs? Maybe, still. But they should be careful about it.

Lessons From Neutrinos, Part II

Last week I talked about the history of neutrinos. Neutrinos come in three types, or “flavors”. Electron neutrinos are the easiest: they’re produced alongside electrons and positrons in the different types of beta decay. Electrons have more massive cousins, called muon and tau particles. As it turns out, each of these cousins has a corresponding flavor of neutrino: muon neutrinos, and tau neutrinos.

For quite some time, physicists thought that all of these neutrinos had zero mass.

(If the idea of a particle with zero mass confuses you, think about photons. A particle with zero mass travels, like a photon, at the speed of light. This doesn’t make them immune to gravity: just as no light can escape a black hole, neither can any other massless particle. It turns out that once you take into account Einstein’s general theory of relativity, gravity cares about energy, not just mass.)

Eventually, physicists started to realize they were wrong, and that neutrinos had a small non-zero mass after all. The reason might seem a bit strange, though. Physicists didn’t weigh the neutrinos, or measure their speed. Instead, they observed that different flavors of neutrinos transform into each other. We say that they oscillate: electron neutrinos oscillate into muon or tau neutrinos, which oscillate into the other flavors, and so on. Over time, a beam of electron neutrinos will become a beam of mostly tau and muon neutrinos, before becoming a beam of electron neutrinos again.

That might not sound like it has much to do with mass. To understand why it does, you’ll need to learn this post’s lesson:

Lesson 2: Mass is just How Particles Move

Oscillating particles seem like a weird sort of evidence for mass. What would be a more normal kind of evidence?

Those of you who’ve taken physics classes might remember the equation F=ma. Apply a known force to something, see how much it accelerates, and you can calculate its mass. If you’ve had a bit more physics, you’ll know that this isn’t quite the right equation to use for particles close to the speed of light, but that there are other equations we can use in a similar way. In particular, using relativity, we have E^2=p^2 c^2 + m^2 c^4. (At rest, p=0, and we have the famous E=mc^2). This lets us do the same kind of thing: give something a kick and see how it moves.
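As a concrete (and entirely illustrative) sketch of that last point, here is how you could turn a measured energy and momentum into a mass using E^2=p^2 c^2 + m^2 c^4, working in GeV with c=1 as particle physicists usually do. The function name and the numbers below are mine, not anything standard:

```python
import math

def mass_from_kick(energy, momentum):
    """Invert E^2 = p^2 c^2 + m^2 c^4 for m, in units where c = 1,
    so that E, p, and m are all measured in GeV."""
    m_squared = energy**2 - momentum**2
    if m_squared < 0:
        raise ValueError("E^2 < p^2: not consistent with a real mass")
    return math.sqrt(m_squared)

# Example: E = 5.0 GeV and p = 4.99888 GeV give m of about 0.106 GeV,
# roughly the mass of a muon.
print(mass_from_kick(5.0, 4.99888))
```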

So let’s say we do that: we give a particle a kick, and measure it later. I’ll visualize this with a tool physicists use called a Feynman diagram. The line represents a particle traveling from one side to the other, from “kick” to “measurement”:

Because we only measure the particle at the end, we might miss if something happens in between. For example, it might interact with another particle or field, like this:

If we don’t know about this other field, then when we try to measure the particle’s mass we will include interactions like this. As it turns out, this is how the Higgs boson works: the Higgs field interacts with particles like electrons and quarks, changing how they move, so that they appear to have mass.

Quantum particles can do other things too. You might have heard people talk about one particle turning into a pair of temporary “virtual particles”. When people say that, they usually have a diagram in mind like this:

In particle physics, we need to take into account every diagram of this kind, every possible thing that could happen in between “kick” and measurement. The final result isn’t one path or another, but a sum of all the different things that could have happened in between. So when we measure the mass of a particle, we’re including every diagram that’s allowed: everything that starts with our “kick” and ends with our measurement.
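For readers who have seen a bit of quantum field theory, here is a hedged sketch of how that sum works out (the post doesn’t spell this out): lump everything that can happen in between into a self-energy Σ(p^2), and summing the chain of insertions is a geometric series that shifts where the particle’s propagator blows up, which is where the measured mass actually sits:

$$\frac{1}{p^2 - m_0^2} \;+\; \frac{1}{p^2 - m_0^2}\,\Sigma(p^2)\,\frac{1}{p^2 - m_0^2} \;+\; \cdots \;=\; \frac{1}{p^2 - m_0^2 - \Sigma(p^2)}$$

The physical mass then satisfies p^2 = m_0^2 + Σ(p^2), rather than the “bare” p^2 = m_0^2.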

Now what if our particle can transform, from one flavor to another?

Now we have a new type of thing that can happen in between “kick” and measurement. And if it can happen once, it can happen more than once:

Remember that, when we measure mass, we’re measuring a sum of all the things that can happen in between. That means our particle could oscillate back and forth between different flavors many many times, and we need to take every possibility into account. Because of that, it doesn’t actually make sense to ask what the mass is for one flavor, for just electron neutrinos or just muon neutrinos. Instead, mass is for the thing that actually moves: an average (actually, a quantum superposition) over all the different flavors, oscillating back and forth any number of times.

When a process like beta decay produces an electron neutrino, the thing that actually moves is a mix (again, a superposition) of particles with these different masses. Because each of these masses responds to the initial “kick” in a different way, you see different proportions of them over time. Try to measure different flavors at the end, and you’ll find different ones depending on when and where you measure. That’s the oscillation effect, and that’s why it means that neutrinos have mass.

It’s a bit more complicated to work out the math behind this, but not unreasonably so: it’s simpler than a lot of other physics calculations. Working through the math, we find that by measuring how long it takes neutrinos to oscillate we can calculate the differences between (squares of) neutrino masses. What we can’t calculate are the masses themselves. We know they’re small: neutrinos travel at almost the speed of light, and our cosmological models of the universe have surprisingly little room for massive neutrinos: too much mass, and our universe would look very different than it does today. But we don’t know much more than that. We don’t even know the order of the masses: you might assume electron neutrinos are on average lighter than muon neutrinos, which are lighter than tau neutrinos…but it could easily be the other way around! We also don’t know whether neutrinos get their mass from the Higgs like other particles do, or if they work in a completely different way.
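To give a flavor of that math, here is a rough sketch of the standard two-flavor approximation (a simplification of the full three-flavor story, not anything specific from this post): the probability of one flavor turning up as another depends only on a mixing angle and on the difference of squared masses, which is exactly why oscillation experiments pin down mass differences but not the masses themselves. The numbers below are illustrative, loosely in the ballpark of measured values:

```python
import math

def oscillation_probability(delta_m2_ev2, theta, length_km, energy_gev):
    """Two-flavor appearance probability,
    P = sin^2(2*theta) * sin^2(1.267 * delta_m2 * L / E),
    with delta_m2 in eV^2, L in km, E in GeV (1.267 absorbs the unit factors)."""
    phase = 1.267 * delta_m2_ev2 * length_km / energy_gev
    return math.sin(2 * theta)**2 * math.sin(phase)**2

# A 1 GeV beam with delta_m2 ~ 2.5e-3 eV^2 and near-maximal mixing,
# observed at a few different baselines: the probability oscillates with distance.
for baseline in (100, 300, 500, 1000):
    print(baseline, round(oscillation_probability(2.5e-3, math.radians(45), baseline, 1.0), 3))
```

Note that only the difference of squared masses appears: shift both masses by the same amount and nothing in this formula changes.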

Unlike other mysteries of physics, we’ll likely have the answer to some of these questions soon. People are already picking through the data from current experiments, seeing if they hint towards one order of masses or the other, or to one or the other way for neutrinos to get their mass. More experiments will start taking data this year, and others are expected to start later this decade. At some point, the textbooks may well have more “normal” mass numbers for each of the neutrinos. But until then, they serve as a nice illustration of what mass actually means in particle physics.

Lessons From Neutrinos, Part I

Some of the particles of the Standard Model are more familiar than others. Electrons and photons, of course, everyone has heard of, and most, though not all, have heard of quarks. Many of the rest, like the W and Z bosons, only appear briefly in high-energy colliders. But one Standard Model particle is much less exotic, and nevertheless leads to all manner of confusion. That particle is the neutrino.

Neutrinos are very light, much lighter than even an electron. (Until relatively recently, we thought they were completely massless!) They have no electric charge and they don’t respond to the strong nuclear force, so aside from gravity (negligible since they’re so light), the only force that affects them is the weak nuclear force. This force is, well, weak. It means neutrinos can be produced via the relatively ordinary process of radioactive beta decay, but it also means they almost never interact with anything else. Vast numbers of neutrinos pass through you every moment, with no noticeable effect. We need enormous tanks of liquid or chunks of ice to have a chance of catching neutrinos in action.

Because neutrinos are both ordinary and unfamiliar, they tend to confuse people. I’d like to take advantage of this confusion to teach some physics. Neutrinos turn out to be a handy theme for conveying a couple of blog posts’ worth of lessons about why physics works the way it does.

I’ll start on the historical side. There’s a lesson that physicists themselves learned in the early days:

Lesson 1: Don’t Throw out a Well-Justified Conservation Law

In the early 20th century, physicists were just beginning to understand radioactivity. They could tell there were a few different types: gamma decay released photons in the form of gamma rays, alpha decay shot out heavy, positively charged particles, and beta decay made “beta particles”, or electrons. For each of these, physicists could track each particle and measure its energy and momentum. Everything made sense for gamma and alpha decay…but not for beta decay. Somehow, they could add up the energy of each of the particles they could track, and find less at the end than they did at the beginning. It was as if energy was not conserved.

These were the heady early days of quantum mechanics, so people were confused enough that many thought this was the end of the story. Maybe energy just isn’t conserved? Wolfgang Pauli, though, thought differently. He proposed that there had to be another particle, one that no-one could detect, that made energy balance out. It had to be neutral, so he called it the neutron…until two years later when James Chadwick discovered the particle we call the neutron. This was much too heavy to be Pauli’s neutron, so Edoardo Amaldi joked that Pauli’s particle was a “neutrino” instead. The name stuck, and Pauli kept insisting his neutrino would turn up somewhere. It wasn’t until 1956 that neutrinos were finally detected, so for quite a while people made fun of Pauli for his quixotic quest.

Including a Faust parody with Gretchen as the neutrino

In retrospect, people should probably have known better. Conservation of energy isn’t one of those rules that come out of nowhere. It’s deeply connected to time, and to the idea that one can perform the same experiment at any time in history and find the same result. While rules like that sometimes do turn out wrong, our first expectation should be that they won’t. Nowadays, we’re confident enough in energy conservation that we plan to use it to detect other particles: it was the main way the Large Hadron Collider planned to try to detect dark matter.

As we came to our more modern understanding, physicists started writing up the Standard Model. Neutrinos were thought of as massless, like photons, traveling at the speed of light. Now, we know that neutrinos have mass…but we don’t know how much mass they have. How do we know they have mass then? To understand that, you’ll need to understand what mass actually means in physics. We’ll talk about that next week!

The Winding Path of a Physics Conversation

In my line of work, I spend a lot of time explaining physics. I write posts here of course, and give the occasional public lecture. I also explain physics when I supervise Master’s students, and in a broader sense whenever I chat with my collaborators or write papers. I’ll explain physics even more when I start teaching. But of all the ways to explain physics, there’s one that has always been my favorite: the one-on-one conversation.

Talking science one-on-one is validating in a uniquely satisfying way. You get instant feedback, questions when you’re unclear and comprehension when you’re close. There’s a kind of puzzle to it, discovering what you need to fill in the gaps in one particular person’s understanding. As a kid, I’d chase this feeling with imaginary conversations: I’d plot out a chat with Democritus or Newton, trying to explain physics or evolution or democracy. It was a game, seeing how I could ground our modern understanding in concepts someone from history already knew.

Way better than Parcheesi

I’ll never get a chance in real life to explain physics to a Democritus or a Newton, to bridge a gap quite that large. But, as I’ve discovered over the years, everyone has bits and pieces they don’t yet understand. Even focused on the most popular topics, like black holes or elementary particles, everyone has gaps in what they’ve managed to pick up. I do too! So any conversation can be its own kind of adventure, discovering what that one person knows, what they don’t, and how to connect the two.

Of course, there’s fun in writing and public speaking too (not to mention, of course, research). Still, I sometimes wonder if there’s a career out there in just the part I like best: just one conversation after another, delving deep into one person’s understanding, making real progress, then moving on to the next. It wouldn’t be efficient by any means, but it sure sounds fun.

The Big Bang: What We Know and How We Know It

When most people think of the Big Bang, they imagine a single moment: a whole universe emerging from nothing. That’s not really how it worked, though. The Big Bang refers not to one event, but to a whole scientific theory. Using Einstein’s equations and some simplifying assumptions, we physicists can lay out a timeline for the universe’s earliest history. Different parts of this timeline have different evidence: some are meticulously tested, others we even expect to be wrong! It’s worth talking through this timeline and discussing what we know about each piece, and how we know it.

We can see surprisingly far back in time. As we look out into the universe, we see each star as it was when the light we see left it: longer ago the further the star is from us. Looking back, we see changes in the types of stars and galaxies: stars formed without the metals that later stars produced, galaxies made of those early stars. We see the universe become denser and hotter, until eventually we reach the last thing we can see: the cosmic microwave background, a faint light that fills our view in every direction. This light represents a change in the universe, the emergence of the first atoms. Before this, there were ions: free nuclei and electrons, forming a hot plasma. That plasma constantly emitted and absorbed light. As the universe cooled, the ions merged into atoms, and light was free to travel. Because of this, we cannot see back beyond this point. Our model gives detailed predictions for this curtain of light: its temperature, and even the ways it varies in intensity from place to place, which in turn let us hone our model further.

In principle, we could “see” a bit further. Light isn’t the only thing that travels freely through the universe. Neutrinos are almost massless, and pass through almost everything. Like the cosmic microwave background, the universe should have a cosmic neutrino background. This would come from much earlier, from an era when the universe was so dense that neutrinos regularly interacted with other matter. We haven’t detected this neutrino background yet, but future experiments might. Gravitational waves, meanwhile, can also pass through almost any obstacle. There should be gravitational wave backgrounds as well, from a variety of eras in the early universe. Once again these haven’t been detected yet, but more powerful gravitational wave telescopes may yet see them.

We have indirect evidence a bit further back than we can see things directly. In the heat of the early universe the first protons and neutrons were merged via nuclear fusion, becoming the first atomic nuclei: isotopes of hydrogen, helium, and lithium. Our model lets us predict the proportions of these, how much helium and lithium per hydrogen atom. We can then compare this to the oldest stars we see, and see that the proportions are right. In this way, we know something about the universe from before we can “see” it.

We get surprised when we look at the universe on large scales, and compare widely separated regions. We find those regions are surprisingly similar, more than we would expect from randomness and the physics we know. Physicists have proposed different explanations for this. The most popular, cosmic inflation, suggests that the universe expanded very rapidly, accelerating so that a small region of similar matter was blown up much larger than the ordinary Big Bang model would have, projecting those similarities across the sky. While many think this proposal fits the data best, we still aren’t sure it’s the right one: there are alternate proposals, and it’s even controversial whether we should be surprised by the large-scale similarity in the first place.

We understand, in principle, how matter can come from “nothing”. This is sometimes presented as the most mysterious part of the Big Bang, the idea that matter could spontaneously emerge from an “empty” universe. But to a physicist, this isn’t very mysterious. Matter isn’t actually conserved: mass is just energy you haven’t met yet. Deep down, the universe is just a bunch of rippling quantum fields, with different ones more or less active at different times. Space-time itself is just another field, the gravitational field. When people say that in the Big Bang matter emerged from nothing, all they mean is that energy moved from the gravitational field to fields like the electron and quark, giving rise to particles. As we wind the model back, we can pretty well understand how this could happen.

If we extrapolate, winding Einstein’s equations back all the way, we reach a singularity: the whole universe, according to those equations, would have emerged from a single point, a time when everything was zero distance from everything else. This assumes, though, that Einstein’s equations keep working all the way back that far. That’s probably wrong. Einstein’s equations don’t include the effect of quantum mechanics, which should be much more important when the universe is at its hottest and densest. We don’t have a complete theory of quantum gravity yet (at least, not one that can model this), so we can’t be certain how to correct these equations. But in general, quantum theories tend to “fuzz out” singularities, spreading out a single point over a wider area. So it’s likely that the universe didn’t actually come from just a single point, and our various incomplete theories of quantum gravity tend to back this up.

So, starting from what we can see, we extrapolate back to what we can’t. We’re quite confident in some parts of the Big Bang theory: the emergence of the first galaxies, the first stars, the first atoms, and the first elements. Back far enough and things get more mysterious: we have proposals but no definite answers. And if you try to wind back up to the beginning, you find we still don’t have the right kind of theory to answer the question. That’s a task for the future.

Digging for Buried Insight

The scientific method, as we usually learn it, starts with a hypothesis. The scientist begins with a guess, and asks a question with a clear answer: true, or false? That guess lets them design an experiment, observe the consequences, and improve our knowledge of the world.

But where did the scientist get the hypothesis in the first place? Often, through some form of exploratory research.

Exploratory research is research done not to answer a precise question, but to find interesting questions to ask. Each field has its own approach to exploration. A psychologist might start with interviews, asking broad questions to find narrower questions for a future survey. An ecologist might film an animal, looking for changes in its behavior. A chemist might measure many properties of a new material, seeing if any stand out. Each approach is like digging for treasure: you’re never sure exactly what you’ll find.

Mathematicians and theoretical physicists don’t do experiments, but we still need hypotheses. We need an idea of what we plan to prove, or what kind of theory we want to build: like other scientists, we want to ask a question with a clear, true/false answer. And to find those questions, we still do exploratory research.

What does exploratory research look like, in the theoretical world? Often, it begins with examples and calculations. We can start with a known method, or a guess at a new one, a recipe for doing some specific kind of calculation. Recipe in hand, we proceed to do the same kind of calculation for a few different examples, covering different sorts of situation. Along the way, we notice patterns: maybe the same steps happen over and over, or the result always has some feature.

We can then ask, do those same steps always happen? Does the result really always have that feature? We have our guess, our hypothesis, and our attempt to prove it is much like an experiment. If we find a proof, our hypothesis was true. On the other hand, we might not be able to find a proof. Instead, exploring, we might find a counterexample – one where the steps don’t occur, the feature doesn’t show up. That’s one way to learn that our hypothesis was false.
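As a toy illustration of this workflow (my example, not one from the post), here is the classic case of Euler’s polynomial n^2 + n + 41: checking small cases suggests a hypothesis (“this always gives primes”), and a slightly wider exploration turns up the counterexample that settles it:

```python
def is_prime(n):
    """Simple trial-division primality test, fine for small numbers."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n**0.5) + 1))

# Exploratory phase: try a handful of cases and notice a pattern.
print(all(is_prime(n * n + n + 41) for n in range(30)))  # True -- looks promising!

# Test the hypothesis more widely: hunt for a counterexample.
first_failure = next(n for n in range(1000) if not is_prime(n * n + n + 41))
print(first_failure)  # 40, since 40^2 + 40 + 41 = 41^2 -- the hypothesis is false
```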

This kind of exploration is essential to discovery. As scientists, we all have to eventually ask clear yes/no questions, to submit our beliefs to clear tests. But we can’t start with those questions. We have to dig around first, to observe the world without a clear plan, to get to a point where we have a good question to ask.

A Few Advertisements

A couple different things that some of you might like to know about:

Are you an amateur with an idea you think might revolutionize all of physics? If so, absolutely do not contact me about it. Instead, you can talk to these people. Sabine Hossenfelder runs a service that will hook you up with a scientist who will patiently listen to your idea and help you learn what you need to develop it further. They do charge for that service, and they aren’t cheap, so only do this if you can comfortably afford it. If you can’t, then I have some advice in a post here. Try to contact people who are experts in the specific topic you’re working on, ask concrete questions that you expect to have useful answers, and be prepared to do some background reading.

Are you an undergraduate student planning for a career in theoretical physics? If so, consider the Perimeter Scholars International (PSI) master’s program. Located at the Perimeter Institute in Waterloo, Canada, PSI is an intense one-year boot-camp in theoretical physics, teaching the foundational ideas you’ll need for the rest of your career. It’s something I wish I had been aware of when I was applying for schools at that age. Theoretical physics is a hard field, and a big part of what makes it hard is all the background knowledge one needs to take part in it. Starting work on a PhD with that background knowledge already in place can be a tremendous advantage. There are other programs with similar concepts, but I’ve gotten a really good impression of PSI specifically, so it’s them I would recommend. Note that applications for the new year aren’t open yet: I always plan to advertise them when they open, and I always forget. So consider this an extremely early warning.

Are you an amplitudeologist? Registration for Amplitudes 2021 is now live! We’re doing an online conference this year, co-hosted by the Niels Bohr Institute and Penn State. We’ll be doing a virtual poster session, so if you want to contribute to that please include a title and abstract when you register. We also plan to stream on YouTube, and will have a fun online surprise closer to the conference date.

Light and Lens, Collider and Detector

Why do particle physicists need those enormous colliders? Why does it take a big, expensive, atom-smashing machine to discover what happens on the smallest scales?

A machine like the Large Hadron Collider seems pretty complicated. But at its heart, it’s basically just a huge microscope.

Familiar, right?

If you’ve ever used a microscope in school, you probably had one with a light switch. Forget to turn on the light, and you spend a while confused about why you can’t see anything before you finally remember to flick the switch. Just like seeing something normally, seeing something with a microscope means that light is bouncing off that thing and hitting your eyes. Because of this, microscopes are limited by the wavelength of the light that they use. Try to look at something much smaller than that wavelength and the image will be too blurry to understand.

To see smaller details then, people use light with smaller wavelengths. Using massive X-ray producing machines called synchrotrons, scientists can study matter on the sub-nanometer scale. To go further, scientists can take advantage of wave-particle duality, and use electrons instead of light. The higher the energy of the electrons, the smaller their wavelength. The best electron microscopes can see objects measured in angstroms, not just nanometers.

Less familiar?

A particle collider pushes this even further. The Large Hadron Collider accelerates protons until they have 6.5 Tera-electron-Volts of energy. That might be an unfamiliar type of unit, but if you’ve seen it before you can run the numbers, and estimate that this means the LHC can see details below the attometer scale. That’s a quintillionth of a meter, or a hundred million times smaller than an atom.
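If you’d like to run those numbers yourself, here is a back-of-the-envelope sketch (mine, using the rough estimate wavelength ≈ hc/E for an ultra-relativistic probe and ignoring factors of order one):

```python
HC_EV_M = 1.23984e-6  # Planck's constant times the speed of light, in eV * m

def resolvable_scale(energy_ev):
    """Rough wavelength hc/E for a fast-moving probe of the given energy,
    which sets the smallest details it can resolve (up to order-one factors)."""
    return HC_EV_M / energy_ev

print(resolvable_scale(2.0))      # visible light, ~2 eV: about 6e-7 m
print(resolvable_scale(1.0e4))    # synchrotron X-rays, ~10 keV: about 1e-10 m
print(resolvable_scale(6.5e12))   # an LHC proton, 6.5 TeV: about 2e-19 m, below an attometer
```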

A microscope isn’t just light, though, and a collider isn’t just high-energy protons. If it were, we could just wait and look at the sky. So-called cosmic rays are protons and other particles that travel to us from outer space. These can have very high energy: protons with similar energy to those in the LHC hit our atmosphere every day, and rays have been detected that were millions of times more powerful.

People sometimes ask why we can’t just use these cosmic rays to study particle physics. While we can certainly learn some things from cosmic rays, they have a big limitation. They have the “light” part of a microscope, but not the “lens”!

A microscope lens magnifies what you see. Starting from a tiny image, the lens blows it up until it’s big enough that you can see it with your own eyes. Particle colliders have similar technology, using their particle detectors. When two protons collide inside the LHC, they emit a flurry of other particles: photons and electrons, muons and mesons. Each of these particles is too small to see, let alone distinguish with the naked eye. But close to the collision there are detector machines that absorb these particles and magnify their signal. A single electron hitting one of these machines triggers a cascade of more and more electrons, in proportion to the energy of the electron that entered the machine. In the end, you get a strong electrical signal, which you can record with a computer. There are two big machines that do this at the Large Hadron Collider, each with its own independent scientific collaboration to run it. They’re called ATLAS and CMS.

The different layers of the CMS detector, magnifying signals from different types of particles.

So studying small scales needs two things: the right kind of “probe”, like light or protons, and a way to magnify the signal, like a lens or a particle detector. That’s hard to do without a big expensive machine…unless nature is unusually convenient. One interesting possibility is to try to learn about particle physics via astronomy. In the Big Bang particles collided with very high energy, and as the universe has expanded since then those details have been magnified across the sky. That kind of “cosmological collider” has the potential to teach us about physics at much smaller scales than any normal collider could reach. A downside is that, unlike in a collider, we can’t run the experiment over and over again: our “cosmological collider” only ran once. Still, if we want to learn about the very smallest scales, some day that may be our best option.

Who Is, and Isn’t, Counting Angels on a Pinhead

How many angels can dance on the head of a pin?

It’s a question famous for its sheer pointlessness. While probably no-one ever had that exact debate, “how many angels fit on a pin” has become a metaphor, first for a host of old theology debates that went nowhere, and later for any academic study that seems like a waste of time. Occasionally, physicists get accused of doing this: typically string theorists, but also people who debate interpretations of quantum mechanics.

Are those accusations fair? Sometimes yes, sometimes no. In order to tell the difference, we should think about what’s wrong, exactly, with counting angels on the head of a pin.

One obvious answer is that knowing the number of angels that fit on a needle’s point is useless. Wikipedia suggests that was the origin of the metaphor in the first place, a pun on “needle’s point” and “needless point”. But this answer is a little too simple, because this would still be a useful debate if angels were real and we could interact with them. “How many angels fit on the head of a pin” is really a question about whether angels take up space, whether two angels can be at the same place at the same time. Asking that question about particles led physicists to bosons and fermions, which among other things led us to invent the laser. If angelology worked, perhaps we would have angel lasers as well.

Be not afraid of my angel laser

“If angelology worked” is key here, though. Angelology didn’t work; it didn’t lead to angel-based technology. And while Medieval people couldn’t have known that for certain, maybe they could have guessed. When people accuse academics of “counting angels on the head of a pin”, they’re saying those academics should be able to guess that their work is destined for uselessness.

How do you guess something like that?

Well, one problem with counting angels is that nobody doing the counting had ever seen an angel. Counting angels on the head of a pin implies debating something you can’t test or observe. That can steer you off-course pretty easily, into conclusions that are either useless or just plain wrong.

This can’t be the whole of the problem though, because of mathematics. We rarely accuse mathematicians of counting angels on the head of a pin, but the whole point of math is to proceed by pure logic, without an experiment in sight. Mathematical conclusions can sometimes be useless (though we can never be sure; some ideas are just ahead of their time), but we don’t expect them to be wrong.

The key difference is that mathematics has clear rules. When two mathematicians disagree, they can look at the details of their arguments, make sure every definition is as clear as possible, and discover which one made a mistake. Working this way, what they build is reliable. Even if it isn’t useful yet, the result is still true, and so may well be useful later.

In contrast, when you imagine Medieval monks debating angels, you probably don’t imagine them with clear rules. They might quote contradictory bible passages, argue everyday meanings of words, and win based more on who was poetic and authoritative than on who had the stronger argument. Picturing a debate over how many angels can fit on the head of a pin, it seems more like Calvinball than like mathematics.

This then, is the heart of the accusation. Saying someone is just debating how many angels can dance on a pin isn’t merely saying they’re debating the invisible. It’s saying they’re debating in a way that won’t go anywhere, a debate without solid basis or reliable conclusions. It’s saying, not just that the debate is useless now, but that it will likely always be useless.

As an outsider, you can’t just dismiss a field because it can’t do experiments. What you can and should do, is dismiss a field that can’t produce reliable knowledge. This can be hard to judge, but a key sign is to look for these kinds of Calvinball-style debates. Do people in the field seem to argue the same things with each other, over and over? Or do they make progress and open up new questions? Do the people talking seem to be just the famous ones? Or are there cases of young and unknown researchers who happen upon something important enough to make an impact? Do people just list prior work in order to state their counter-arguments? Or do they build on it, finding consequences of others’ trusted conclusions?

A few corners of string theory do have this Calvinball feel, as do a few of the debates about the fundamentals of quantum mechanics. But if you look past the headlines and blogs, most of each of these fields seems more reliable. Rather than interminable back-and-forth about angels and pinheads, these fields are quietly accumulating results that, one way or another, will give people something to build on.