Monthly Archives: May 2016

Mass Is Just Energy You Haven’t Met Yet

How can colliding two protons give rise to more massive particles? Why do vibrations of a string have mass? And how does the Higgs work anyway?

There is one central misunderstanding that makes each of these topics confusing. It’s something I’ve brought up before, but it really deserves its own post. It’s people not realizing that mass is just energy you haven’t met yet.

It’s quite intuitive to think of mass as some sort of “stuff” that things can be made out of. In our everyday experience, that’s how it works: combine this mass of flour and this mass of sugar, and get this mass of cake. Historically, it was the dominant view in physics for quite some time. However, once you get to particle physics it starts to break down.

It’s probably most obvious for protons. A proton has a mass of 938 MeV/c², or 1.6×10⁻²⁷ kg in less physicist-specific units. Protons are each made of three quarks, two up quarks and a down quark. Naively, you’d think that the quarks would have to be around 300 MeV/c². They’re not, though: up and down quarks both have masses less than 10 MeV/c². Those three quarks account for less than a fiftieth of a proton’s mass.
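The arithmetic here is easy to check. A quick sketch, using approximate current-quark masses (roughly the standard PDG values; the exact numbers don't matter for the point):

```python
# Rough check: how little of a proton's mass comes from its quarks.
proton_mass = 938.3   # MeV/c^2
m_up = 2.2            # MeV/c^2, approximate current-quark mass
m_down = 4.7          # MeV/c^2, approximate current-quark mass

quark_total = 2 * m_up + m_down   # two up quarks and a down quark
fraction = quark_total / proton_mass
print(f"Quarks supply about {fraction:.1%} of the proton's mass")  # ~1%
```

Around one percent, well under a fiftieth. The rest has to come from somewhere else.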

The “extra” mass is because a proton is not just three quarks. It’s three quarks interacting. The force between those quarks, the strong nuclear force that binds them together, involves a heck of a lot of energy. And from a distance, that energy ends up looking like mass.

This isn’t unique to protons. In some sense, it’s just what mass is.

The quarks themselves get their mass from the Higgs field. Far enough away, this looks like the quarks having a mass. However, zoom in and it’s energy again, the energy of interaction between quarks and the Higgs. In string theory, mass comes from the energy of vibrating strings. And so on. Every time we run into something that looks like a fundamental mass, it ends up being just another energy of interaction.

If mass is just energy, what about gravity?

When you’re taught about gravity, the story is all about mass. Mass attracts mass. Mass bends space-time. What gets left out, until you actually learn the details of General Relativity, is that energy gravitates too.

Normally you don’t notice this, because mass contributes so much more to energy than anything else. That’s really what E=mc² is about: it’s a unit conversion formula. It tells you that if you want to know how much energy a given mass “really is”, you multiply it by the speed of light squared. And that’s a large enough number that most of the time, when you notice energy gravitating, it’s because that energy looks like a big chunk of mass. (It’s also why physicists like silly units like MeV/c² for mass: we can just multiply by c² and get an energy!)

It’s really tempting to think about mass as a substance, of mass as always conserved, of mass as fundamental. But in physics we often have to toss aside our everyday intuitions, and this is no exception. Mass really is just energy. It’s just energy that we’ve “zoomed out” enough not to notice.

What Does It Mean to Know the Answer?

My sub-field isn’t big on philosophical debates. We don’t tend to get hung up on how to measure an infinite universe, or argue about how to interpret quantum mechanics. Instead, we develop new calculation techniques, which tends to nicely sidestep all of that.

If there’s anything we do get philosophical about, though, any question with a little bit of ambiguity, it’s this: What counts as an analytic result?

“Analytic” here is in contrast to “numerical”. If all we need is a number and we don’t care if it’s slightly off, we can use numerical methods. We have a computer use some estimation trick, repeating steps over and over again until we have approximately the right answer.
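A concrete example of that "repeat until close enough" flavor, sketched with the trapezoid rule (one of the simplest numerical integration tricks; the function and tolerance here are just for illustration):

```python
# Numerical integration by the trapezoid rule: keep doubling the
# number of steps until the estimate stops changing much.
import math

def trapezoid(f, a, b, n):
    """Trapezoid-rule estimate of the integral of f from a to b with n steps."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return total * h

n = 2
estimate = trapezoid(math.sin, 0.0, math.pi, n)
while True:
    n *= 2
    refined = trapezoid(math.sin, 0.0, math.pi, n)
    if abs(refined - estimate) < 1e-10:
        break
    estimate = refined
print(refined)  # ≈ 2, the exact value of the integral of sin(x) from 0 to π
```

The answer is never exactly 2, just as close to 2 as we asked for.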

“Analytic”, then, refers to everything else. When you want an analytic result, you want something exact. Most of the time, you don’t just want a single number: you want a function, one that can give you numbers for whichever situation you’re interested in.

It might sound like there’s no ambiguity there. If it’s a function, with sines and cosines and the like, then it’s clearly analytic. If you can only get numbers out through some approximation, it’s numerical. But as the following example shows, things can get a bit more complicated.

Suppose you’re trying to calculate something, and you find the answer is some messy integral. Still, you’ve simplified the integral enough that you can do numerical integration and get some approximate numbers out. What’s more, you can express the integral as an infinite series, so that any finite number of terms will get close to the correct result. Maybe you even know a few special cases, situations where you plug specific numbers in and you do get an exact answer.

It might sound like you only know the answer numerically. As it turns out, though, this is roughly how your computer handles sines and cosines.

When your computer tries to calculate a sine or a cosine, it doesn’t have access to the exact solution all of the time. It does have some special cases, but the rest of the time it’s using an infinite series, or some other numerical trick. Type a random sine into your calculator and it will be just as approximate as if you did a numerical integration.
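You can see this in miniature by summing the Taylor series for sine yourself, cutting it off when the terms become negligible (real math libraries use more refined tricks, but the spirit is the same):

```python
# Evaluating sine with no "exact" formula: sum the Taylor series
# sin(x) = x - x^3/3! + x^5/5! - ... until the terms are negligible.
import math

def sine_by_series(x, tol=1e-15):
    term = x       # current term of the series, starting with x
    total = 0.0
    n = 1          # power of x in the current term
    while abs(term) > tol:
        total += term
        # each term is the previous one times -x^2 / ((n+1)(n+2))
        term *= -x * x / ((n + 1) * (n + 2))
        n += 2
    return total

print(sine_by_series(1.2), math.sin(1.2))  # agree to ~15 digits
```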

So what’s the real difference?

Rather than how we get numbers out, think about what else we know. We know how to take derivatives of sines, and how to integrate them. We know how to take limits, and series expansions. And we know their relations to other functions, including how to express them in terms of other things.

If you can do that with your integral, then you’ve probably got an analytic result. If you can’t, then you don’t.
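This is the kind of manipulation a computer algebra system automates. A quick illustration, assuming the (third-party) sympy library is available:

```python
# "Analytic" knowledge of sine: derivatives, integrals, and series
# expansions, not just numerical values.
import sympy as sp

x = sp.symbols('x')
print(sp.diff(sp.sin(x), x))          # cos(x)
print(sp.integrate(sp.sin(x), x))     # -cos(x)
print(sp.series(sp.sin(x), x, 0, 6))  # x - x**3/6 + x**5/120 + O(x**6)
```

Being able to do all of this with a new function, not just evaluate it, is what makes it feel like a known, analytic object.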

What if you have only some of the requirements, but not the others? What if you can take derivatives, but don’t know all of the identities between your functions? What if you can do series expansions, but only in some limits? What if you can do all the above, but can’t get numbers out without a supercomputer?

That’s where the ambiguity sets in.

In the end, whether or not we have the full analytic answer is a matter of degree. The closer we can get to functions that mathematicians have studied and understood, the better grasp we have of our answer and the more “analytic” it is. In practice, we end up with a very pragmatic approach to knowledge: whether we know the answer depends entirely on what we can do with it.

Particles Aren’t Vibrations (at Least, Not the Ones You Think)

You’ve probably heard this story before, likely from Brian Greene.

In string theory, the fundamental particles of nature are actually short lengths of string. These strings can vibrate, and like a string on a violin, that vibration is arranged into harmonics. The more energy in the string, the more complex the vibration. In string theory, each of these vibrations corresponds to a different particle, explaining how the zoo of particles we observe can come out of a single type of fundamental string.


Particles. Probably.

It’s a nice story. It’s even partly true. But it gives a completely wrong idea of where the particles we’re used to come from.

Making a string vibrate takes energy, and that energy is determined by the tension of the string. It’s a lot harder to wiggle a thick rubber band than a thin one, if you’re holding both tightly.

String theory’s strings are under a lot of tension, so it takes a lot of energy to make them vibrate. From our perspective, that energy looks like mass, so the more complicated harmonics on a string correspond to extremely massive particles, close to the Planck mass!
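In textbook terms (a rough sketch, suppressing the intercept and numerical factors): for a string whose tension is set by the Regge slope α′, the n-th level of the tower has mass roughly

```latex
M_n^2 \;\sim\; \frac{n}{\alpha'}, \qquad n = 0, 1, 2, \ldots
```

with the string scale 1/√α′ conventionally taken near the Planck scale. Each step up the tower costs a Planck-scale chunk of mass, while the n = 0 states stay light.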

Those aren’t the particles you’re used to. They’re not electrons, they’re not dark matter. They’re particles we haven’t observed, and may never observe. They’re not how string theory explains the fundamental particles of nature.

So how does string theory go from one fundamental type of string to all of the particles in the universe, if not through these vibrations? As it turns out, there are several different ways it can happen, tricks that allow the lightest and simplest vibrations to give us all the particles we’ve observed.* I’ll describe a few.

The first and most important trick here is supersymmetry. Supersymmetry relates different types of particles to each other. In string theory, it means that along with vibrations that go higher and higher, there are also low-energy vibrations that behave like different sorts of particles. In a sense, string theory sticks a quantum field theory inside another quantum field theory, in a way that would make Xzibit proud.

Even with supersymmetry, string theory doesn’t give rise to all of the right sorts of particles. You need something else, like compactifications or branes.

The strings of string theory live in ten dimensions: it’s the only place they’re mathematically consistent. Since our world looks four-dimensional, something has to happen to the other six dimensions. They have to be curled up, in a process called compactification. There are lots and lots (and lots) of ways to do this compactification, and different ways of curling up the extra dimensions give different places for strings to move. These new options make the strings look different in our four-dimensional world: a string curled around a donut hole looks very different from one that moves freely. Each new way the string can move or vibrate can give rise to a new particle.

Another option to introduce diversity in particles is to use branes. Branes (short for membranes) are surfaces that strings can end on. If two strings end on the same brane, those ends can meet up and interact. If they end on different branes though, then they can’t. By cleverly arranging branes, then, you can have different sets of strings that interact with each other in different ways, reproducing the different interactions of the particles we’re familiar with.

In string theory, the particles we’re used to aren’t just higher harmonics, or vibrations with more and more energy. They come from supersymmetry, from compactifications and from branes. The higher harmonics are still important: there are theorems that you can’t fix quantum gravity with a finite number of extra particles, so the infinite tower of vibrations allows string theory to exploit a key loophole. They just don’t happen to be how string theory gets the particles of the Standard Model. The idea that every particle is just a higher vibration is a common misconception, and I hope I’ve given you a better idea of how string theory actually works.

 

*But aren’t these lightest vibrations still close to the Planck mass? Nope! See the discussion with TE in the comments for details.

Those Wacky 60’s Physicists

The 60’s were a weird time in academia. Psychologists were busy experimenting with LSD, seeing if they could convince people to electrocute each other, and otherwise doing the sorts of shenanigans that ended up saddling them with Institutional Review Boards so that nowadays they can’t even hand out surveys without a ten-page form attesting that it won’t have adverse effects on pregnant women.

We don’t have IRBs in theoretical physics. We didn’t get quite as wacky as the psychologists did. But the 60’s were still a time of utopian dreams and experimentation, even in physics. We may not have done unethical experiments on people…but we did have the Analytic S-Matrix Program.

The Analytic S-Matrix Program was an attempt to rebuild quantum field theory from the ground up. The “S” in S-Matrix stands for “scattering”: the S-Matrix is an enormous matrix that tells you, for each set of incoming particles, the probability that they scatter into some new set of outgoing particles. Normally, this gets calculated piece by piece with what are called Feynman diagrams. The goal of the Analytic S-Matrix Program was a loftier one: to derive the S-Matrix from first principles, without building it out of quantum field theory pieces. Without Feynman diagrams’ reliance on space and time, people like Geoffrey Chew, Stanley Mandelstam, Tullio Regge, and Lev Landau hoped to reach a deeper understanding of fundamental physics.
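A toy illustration of the idea (not real physics, just the bookkeeping): take a two-state "S-matrix" as a unitary matrix, where the squared magnitude of each entry gives the probability of one incoming state scattering into one outgoing state, and unitarity guarantees the probabilities add up to one.

```python
# Toy 2x2 "S-matrix": |S[f][i]|^2 is the probability that incoming
# state i scatters into outgoing state f. The matrix is unitary,
# so the probabilities for each incoming state sum to 1.
import cmath
import math

theta = 0.3  # arbitrary mixing angle for the toy model
S = [[cmath.exp(0.5j) * math.cos(theta), 1j * math.sin(theta)],
     [1j * math.sin(theta), cmath.exp(-0.5j) * math.cos(theta)]]

for i in range(2):
    total = sum(abs(S[f][i]) ** 2 for f in range(2))
    print(f"incoming state {i}: total probability {total:.3f}")  # 1.000
```

The real S-matrix has a row and column for every possible collection of particles, which is what made deriving it from first principles such an ambitious goal.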

If this sounds familiar, it should. Amplitudeologists like me view the physicists of the Analytic S-Matrix Program as our spiritual ancestors. Like us, they tried to skip the mess of Feynman diagrams, looking for mathematical tricks and unexpected symmetries to show them the way forward.

Unfortunately, they didn’t have the tools we do now. They didn’t understand the mathematical functions they needed, nor did they have novel ways of writing down their results like the amplituhedron. Instead, they had to work with what they knew, which in practice usually meant going back to Feynman diagrams.

Paradoxically then, much of the lasting impact of the Analytic S-Matrix Program has been on how we understand the results of Feynman diagram calculations. Just as psychologists learn about the Milgram experiment in school, we learn about Mandelstam variables and Regge trajectories. Recently, we’ve been digging up old concepts from those days and finding new applications, like the recent work on Landau singularities, or some as-yet unpublished work I’ve been doing.

Of course, this post wouldn’t be complete without mentioning the Analytic S-Matrix Program’s most illustrious child, String Theory. Some of the mathematics cooked up by the physicists of the 60’s, while dead ends for the problems they were trying to solve, ended up revealing a whole new world of potential.

The physicists of the 60’s were overly optimistic. Nevertheless, their work opened up questions that are still worth asking today. Much as psychologists can’t ignore what they got up to in the 60’s, it’s important for physicists to be aware of our history. You never know what you might dig up.


And as LeVar Burton would say, you don’t have to take my word for it.