Category Archives: General QFT

Made of Energy, or Made of Nonsense?

I did a few small modifications to the blog settings this week. Comments now support Markdown, reply-chains in the comments can go longer, and there are a few more sharing buttons on the posts. I’m gearing up to do a more major revamp of the blog in July for when the name changes over from 4 gravitons and a grad student to just 4 gravitons.

io9 did an article recently on scientific ideas that scientists wish the public would stop misusing. They’ve got a lot of good ones (Proof, Quantum, Organic), but they somehow managed to miss one of the big ones: Energy. Matt Strassler has a nice, precise article on this particular misconception, but nonetheless I think it’s high time I wrote my own.

There’s a whole host of misconceptions regarding energy. Some of them are simple misuses of language, like zero-calorie energy drinks:

Zero Purpose

Energy can be measured in several different units. You can use Joules, or electron-Volts, or ergs…or calories. Calories are a measure of energy, so zero calories quite literally means zero energy.

Now, that’s not to say the makers of zero calorie energy drinks are lying. They’re just using a different meaning of energy from the scientific one. Their drinks give you vim and vigor, the get-up-and-go required to make money playing computer games. For most of the public, that “get-up-and-go” is called energy, even if scientifically it’s not.

That’s not really a misconception, more of an amusing use of language. This next one though really makes my blood boil.

Raise your hand if you’ve seen a Sci-Fi movie or TV show where some creature is described as being made of “pure energy”. Whether they’re peaceful, ultra-advanced ascended beings, or genocidal maniacs from another dimension, the concept of creatures made of “pure energy” shows up again and again and again.

You can’t fight the Drej, they’re pure bullshit!

Even if you aren’t the type to take Sci-Fi technobabble seriously, you’ve probably heard that matter and antimatter annihilate to form energy, or that photons are made out of energy. These sound more reasonable, but they rest on the same fundamental misconception:

Nothing is “made out of energy”.

Rather,

Energy is a property that things have.

Energy isn’t a substance, it isn’t a fluid, it isn’t some kind of nebulous stuff you can make into an indestructible alien body. Things have energy, but nothing is energy.

What about light, then? And what happens when antimatter collides with matter?

Light, just like anything else, has energy. The difference between light and most other things is that light has no mass.

In everyday life, we like to think of mass as some sort of basic “stuff”. If things are “made out of mass” or “made out of matter”, and something like light doesn’t have mass, then it must be made out of some other “stuff”, right?

The thing is, mass isn’t really “stuff” any more than energy is. Just like energy, mass is a property that things have. In fact, as I’ve talked about some before, mass is really just a type of energy. Specifically, mass is the energy something has when left alone and at rest. That’s the meaning of Einstein’s famous equation, E equals m c squared: it tells you how to take a known mass and calculate the rest energy that it implies.

Lots of hype for a unit conversion formula, huh?
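
To see why that unit conversion still packs a punch, here's a quick back-of-the-envelope sketch, using standard textbook values for the constants (none of these specific numbers appear in the post itself):

```python
# E = m c^2 as a unit conversion, using standard values for the constants.
c = 299_792_458.0             # speed of light, meters per second
electron_mass_kg = 9.109e-31  # electron mass, kilograms
joules_per_ev = 1.602e-19     # joules in one electron-Volt

rest_energy_joules = electron_mass_kg * c**2
rest_energy_mev = rest_energy_joules / joules_per_ev / 1e6
print(f"Electron rest energy: {rest_energy_mev:.3f} MeV")   # about 0.511 MeV

# The same conversion for a single gram of anything:
print(f"One gram of rest mass: {0.001 * c**2:.2e} joules")  # about 9 x 10^13 J
```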

In the case of light, all of its energy can be thought of in terms of its (light-speed) motion, so it has no mass. That might tempt you to think of it as being “made of energy”, but really, you and light are not so different.

You are made of atoms, and atoms are made of protons, neutrons, and electrons. Let’s consider a proton. A proton’s mass, expressed in the esoteric units physicists favor, is 938 Mega-electron-Volts. That’s how much energy a proton has alone and at rest. A proton is made of three quarks, so you’d think that they would contribute most of its mass. In reality, though, the quarks in protons have masses of only a few Mega-electron-Volts. Most of a proton’s mass doesn’t come from the mass of the quarks.

Quarks interact with each other via the strong nuclear force, the strongest fundamental force in existence. That interaction has a lot of energy, and when viewed from a distance that energy contributes almost all of the proton’s mass. So if light is “made of energy”, so are you.
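
If you want to check that arithmetic yourself, here's a rough sketch using standard ballpark values for the quark masses (a couple of Mega-electron-Volts each, as mentioned above):

```python
# Rough sketch: how little of the proton's 938 MeV comes from quark rest masses.
proton_mass_mev = 938.0
up_quark_mev = 2.2      # approximate up-quark mass
down_quark_mev = 4.7    # approximate down-quark mass

quark_rest_mass_mev = 2 * up_quark_mev + down_quark_mev  # proton = two ups + one down
fraction = quark_rest_mass_mev / proton_mass_mev
print(f"Quark rest masses add up to ~{quark_rest_mass_mev:.1f} MeV")
print(f"That is only ~{100 * fraction:.0f}% of the proton's mass")  # roughly 1%
```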

So why do people say that matter and anti-matter annihilate to make energy?

A matter particle and its anti-matter partner are opposite in a lot of ways. In particular, they have opposite charges: not just electric charge, but other types of charge too.

Charge must be conserved, so if a particle collides with its anti-particle the result has a total charge of zero, as the opposite charges of the two cancel each other out. Light has zero charge, so it’s one of the most common results of a matter-antimatter collision. When people say that matter and antimatter produce “pure energy”, they really just mean that they produce light.
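
As a concrete, simplified example of that bookkeeping, here's the textbook case of an electron and a positron annihilating at rest; the 0.511 MeV figure is the standard electron rest energy, not a number from this post:

```python
# Electron-positron annihilation at rest: charges cancel, and the rest energy
# comes out as two photons.
electron = {"charge": -1, "rest_energy_mev": 0.511}
positron = {"charge": +1, "rest_energy_mev": 0.511}

total_charge = electron["charge"] + positron["charge"]
total_energy = electron["rest_energy_mev"] + positron["rest_energy_mev"]

assert total_charge == 0           # charge is conserved, so the result is neutral
photon_energy = total_energy / 2   # two back-to-back photons share the energy
print(f"Each photon carries {photon_energy:.3f} MeV")  # 0.511 MeV gamma rays
```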

So next time someone says something is “made of energy”, be wary. Chances are, they aren’t talking about something fully scientific.

Particles are not Species

It has been estimated that there are 7.5 million undiscovered species of animals, plants and fungi. Most of these species are insects. If someone wanted billions of dollars to search the Amazon rainforest with the goal of cataloging every species of insect, you’d want them to have a pretty good reason. Maybe they are searching for genes that could cure diseases, or trying to understand why an ecosystem is dying.

The primary goal of the Large Hadron Collider is to search for new subatomic particles. If we’re spending billions searching for these things, they must have some use, right? After all, it’s all well and good knowing about a bunch of different particles, but there must be a whole lot of sorts of particles out there, at least if you judge by science fiction (these two are also relevant). Surely we could just focus on finding the useful ones, and ignore the rest?

The thing is, particle physics isn’t like that. Particles aren’t like insects, you don’t find rare new types scattered in out-of-the-way locations. That’s because each type of particle isn’t like a species of animal. Instead, each particle is a fundamental law of nature.

Move over Linnaeus.

It wasn’t always like this. In the late 50’s and early 60’s, particle accelerators were producing a zoo of new particles with no clear rhyme or reason, and it looked like they would just keep producing more. That impression changed when Murray Gell-Mann proposed his Eightfold Way, which led to the development of the quark model. He explained the mess of new particles in terms of a few fundamental particles, the quarks, which made up the more complicated particles that were being discovered.

Nowadays, the particles that we’re trying to discover aren’t, for the most part, the zoo of particles of yesteryear. Instead, we’re looking for new fundamental particles.

What makes a particle fundamental?

The new particles of the early 60’s were a direct consequence of the existence of quarks. Once you understood how quarks worked, you could calculate the properties of all of the new particles, and even predict ones that hadn’t been found yet.

By contrast, fundamental particles aren’t based on any other particles, and you can’t predict everything about them. When we discover a new fundamental particle like the Higgs boson, we’re discovering a new, independent law of nature. Each fundamental particle is a law that states, across all of space and time, “if this happens, make this particle”. It’s a law that holds true always and everywhere, regardless of how often the particle is actually produced.

Think about the laws of physics like the cockpit of a plane. In front of the pilot is a whole mess of controls, dials and switches and buttons. Some of those controls are used every flight, some much more rarely. There are probably buttons on that plane that have never been used. But if a single button is out of order, the plane can’t take off.

Each fundamental particle is like a button on that plane. Some turn “on” all the time, while some only turn “on” in special circumstances. But each button is there all the same, and if you’re missing one, your theory is incomplete. It may agree with experiments now, but eventually you’re going to run into problems of one sort or another that make your theory inconsistent.

The point of discovering new particles isn’t just to find the one that will give us time travel or let us blow up Vulcan. Technological applications would be nice, but the real point is deeper: we want to know how reality works, and for every new fundamental particle we discover, we’ve found out a fact that’s true about the whole universe.

A Wild Infinity Appears! Or, Renormalization

Back when Numberphile’s silly video about the zeta function came up, I wrote a post explaining the process of regularization, where physicists take an incorrect infinite result and patch it over to get something finite. At the end of that post I mentioned a particular variant of regularization, called renormalization, which was especially important in quantum field theory.

Renormalization has to do with how we do calculations and make predictions in particle physics. If you haven’t read my post “What’s so hard about Quantum Field Theory anyway?” you should read it before trying to tackle this one. The important concepts there are that probabilities in particle physics are calculated using Feynman Diagrams, that those diagrams consist of lines representing particles and points representing the ways they interact, that each line and point in the diagram gives a number that must be plugged in to the calculation, and that to do the full calculation you have to add up all the possible diagrams you can draw.

Let’s say you’re interested in finding out the mass of a particle. How about the Higgs?

You can’t weigh it, or otherwise see how gravity affects it: it’s much too light, and decays into other particles much too fast. Luckily, there is another way. As I mentioned in this post, a particle’s mass and its kinetic energy (energy of motion) both contribute to its total energy, which in turn affects what particles it can turn into if it decays. So if you want to find a particle’s mass, you need the relationship between its motion and its energy.
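
That relationship, in natural units where c = 1, is E² = p² + m², and it's what lets you work backwards from decay products to the mass of whatever produced them. Here's a minimal sketch; the two "photons" below are made-up numbers, chosen purely so the answer comes out to a recognizable value:

```python
import math

# Reconstructing a parent particle's mass from its decay products,
# in natural units where E^2 = p^2 + m^2. The numbers are illustrative only.
decay_products = [
    # (E, px, py, pz) in GeV
    (62.5,  62.5, 0.0, 0.0),
    (62.5, -62.5, 0.0, 0.0),
]

E  = sum(p[0] for p in decay_products)
px = sum(p[1] for p in decay_products)
py = sum(p[2] for p in decay_products)
pz = sum(p[3] for p in decay_products)

mass = math.sqrt(E**2 - (px**2 + py**2 + pz**2))
print(f"Reconstructed mass: {mass:.1f} GeV")  # 125.0 GeV for these made-up numbers
```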

Suppose we’ve got a Higgs particle moving along. We know it was created out of some collision, and we know what it decays into at the end. With that, we can figure out its mass.

[Diagram: a Higgs boson created in a collision, traveling along, and then decaying]

There’s a problem here, though: we only know what happens at the beginning and the end of this diagram. We can’t be certain what happens in the middle. That means we need to add in all of the other diagrams, every possible diagram with that beginning and that end.

Just to look at one example, suppose the Higgs particle splits into a quark and an anti-quark (the antimatter version of the quark). If they come back together later into a Higgs, the process would look the same from the outside. Here’s the diagram for it:

[Diagram: the Higgs splitting into a quark and an anti-quark that later recombine into a Higgs]

When we’re “measuring the Higgs mass”, what we’re actually measuring is the sum of every single diagram that begins with the creation of a Higgs and ends with it decaying.

Surprisingly, that’s not the problem!

The problem comes when you try to calculate the number that comes out of that diagram, when the Higgs splits into a quark-antiquark pair. According to the rules of quantum field theory, those quarks don’t have to obey the normal relationship between total energy, kinetic energy, and mass. They can have any kinetic energy at all, from zero all the way up to infinity. And because it’s quantum field theory, you have to add up all of those possible kinetic energies, all the way up. In this case, the diagram actually gives you infinity.

(Note that not every diagram with unlimited kinetic energy is going to be infinite. The first time theorists calculated infinite diagrams, they were surprised.

For those of you who know calculus, the problem here comes after you integrate over momentum. The two quarks each give a factor of one over the momentum, and then you integrate the result four times (for three dimensions of space plus time), which gives an infinite result. If you had different particles arranged in a different way you might divide by more factors of momentum and get a finite value.)
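
For the same calculus-inclined readers, here is that divergence written out schematically, with masses, numerical factors, and the details of the quark propagators dropped; Λ stands for the largest momentum allowed in the sum:

$$\int^{\Lambda} \frac{d^4 k}{k^2} \;\sim\; \int^{\Lambda} \frac{k^3\, dk}{k^2} \;=\; \int^{\Lambda} k\, dk \;\sim\; \Lambda^2 \;\to\; \infty \quad \text{as } \Lambda \to \infty.$$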

The modern understanding of infinite results like this is that they arise from our ignorance. The mass of the Higgs isn’t actually infinity, because we can’t just add up every kinetic energy up to infinity. Instead, at some point before we get to infinity “something else” happens.

We don’t know what that “something else” is. It might be supersymmetry, it might be something else altogether. Whatever it is, we don’t know enough about it now to include it in the calculations as anything more than a cutoff, a point beyond which “something” happens. A theory with a cutoff like this, one that is only “effective” below a certain energy, is called an Effective Field Theory.

While we don’t know what happens at higher energies, we still need a way to complete our calculations if we want to use them in the real world. That’s where renormalization comes in.

When we use renormalization, we bring in experimental observations. We know that, no matter what is contributing to the Higgs particle’s mass, what we observe in the real world is finite. “Something” must be canceling the divergence, so we simply assume that “something” does, and that the final result agrees with the experiment!

"Something"

“Something”

In order to do this, we accepted the experimental result for the mass of the Higgs. That means that we’ve lost any ability to predict the mass from our theory. This is a general rule for renormalization: we trade ignorance (of the “something” that happens at high energy) for a loss of predictability.
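
Schematically (and this really is just a sketch of the bookkeeping, not the actual quantum field theory expressions), the trade looks like this:

$$m_{\text{observed}}^2 \;=\; m_{\text{bare}}^2 \;+\; \big(\text{loop contributions, which depend on the cutoff } \Lambda\big).$$

Renormalization amounts to choosing the unmeasurable “bare” parameter so that the observed mass matches experiment no matter what Λ is, hiding the cutoff dependence inside a quantity we never measure directly.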

If we had to do this for every calculation, we couldn’t predict anything at all. Luckily, for many theories (called renormalizable theories) there are theorems proving that you only need to do this a few times to fix the entire theory. You give up the ability to predict the results of a few experiments, but you gain the ability to predict the rest.

Luckily for us, the Standard Model is a renormalizable theory. Unfortunately, some important theories are not. In particular, quantum gravity is non-renormalizable. In order to fix the infinities in quantum gravity, you need to do the renormalization trick an infinite number of times, losing an infinite amount of predictability. Thus, while making a theory of quantum gravity is not difficult in principle, in practice the most obvious way to create the theory results in a “theory” that can never make any predictions.

One of the biggest virtues of string theory (some would say its greatest virtue) is that these infinities never appear. You never need to renormalize string theory in this way, which is what lets it work as a theory of quantum gravity. N=8 supergravity, the gravity cousin of N=4 super Yang-Mills, might also have this handy property, which is why many people are so eager to study it.

Elegance, Not So Mysterious

You’ll often hear theoretical physicists in the media referring to one theory or another as “elegant”. String theory in particular seems to get this moniker fairly frequently.

It may often seem like mathematical elegance is some sort of mysterious sixth sense theorists possess, as inexplicable to the average person as color to a blind person. What’s “elegant” about string theory, after all?

Before explaining elegance, I should take a bit of time to say what it’s not. Elegance isn’t Occam’s razor. It isn’t naturalness, either. Both of those concepts have their own technical definitions.

Elegance, by contrast, is a much hazier, and yet much simpler, notion. It’s hazy enough that any definition could provoke arguments, but I can at least give you an approximate idea by telling you that an elegant theory is simple to describe, if you know the right terms. Often, it is simpler than the phenomenon that it explains.

How does this apply to something like string theory? String theory seems to be incredibly complicated: ten dimensions, curled up in a truly vast number of different ways, giving rise to whole spectra of particles.

That said, the basic idea is quite simple. String theory asks the question: what if, in addition to fundamental point-particles (zero dimensional objects), there were fundamental objects of other dimensions? That idea leads to complicated consequences: if your theory is going to produce all the particles of the real world then you need the ten dimensions and the supersymmetry and yadda yadda. But the basic idea is simple to describe. An elegant theory can have very complicated consequences, but still be simple to describe.

This, broadly, is the sort of explanation theoretical physicists look for. Math is the kind of field where the same basic systems can describe very complex phenomena. Since theoretical physics is about describing the world in terms of math, the right explanation is usually the most elegant.

This can occasionally trip physicists up when they migrate to other careers. In biology, for example, the elegant solution is often not the right one, because evolution doesn’t care about elegance: evolution just grabs whatever is within reach. Financial systems and economics occasionally have similar problems. All this is to say that while elegance is an important thing for a physicist to strive for, sometimes we have to be careful about it.

High Energy? What does that mean?

I am a high energy physicist who uses the high energy and low energy limits of a theory that, while valid up to high energies, is also a low-energy description of what at high energies ends up being string theory (string theorists, of course, being high energy physicists as well).

If all of that makes no sense to you, congratulations, you’ve stumbled upon one of the worst-kept secrets of theoretical physics: we really could use a thesaurus.

“High energy” means different things in different parts of physics. In general, “high” versus “low” energy classifies what sort of physics you look at. “High” energy physics corresponds to the very small, while “low” energies encompass larger structures. Many people explain this via quantum mechanics: the uncertainty principle says that the more certain you are of a particle’s position, the less certain you can be of how fast it is going, which would imply that a particle that is highly restricted in location might have very high energy. You can also understand it without quantum mechanics, though: if two things are held close together, it generally has to be by a powerful force, so the bond between them will contain more energy.

Another perspective is in terms of light. Physicists will occasionally use “IR”, or infrared, to mean “low energy” and “UV”, or ultraviolet, to mean “high energy”. Infrared light has long wavelengths and low energy photons, while ultraviolet light has short wavelengths and high energy photons, so the analogy is apt. However, the analogy only goes so far, since “UV physics” is often at energies much greater than those of UV light (and the same sort of situation applies for IR).
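
To put rough numbers on the light analogy (using the standard conversion between wavelength and photon energy, nothing specific to this post):

```python
# Photon energy from wavelength: E = hc / wavelength, with hc ~ 1240 eV * nm.
HC_EV_NM = 1239.84  # Planck's constant times the speed of light, in eV * nm

for label, wavelength_nm in [("infrared (10 microns)", 10_000),
                             ("visible (500 nm)", 500),
                             ("ultraviolet (100 nm)", 100)]:
    energy_ev = HC_EV_NM / wavelength_nm
    print(f"{label:>22}: {energy_ev:6.2f} eV per photon")
```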

So what does “low energy” or “high energy” mean? Well…

The IR limit: Lowest of the “low energy” points, this refers to the limit of infinitely low energy. While you might compare it to “absolute zero”, really it just refers to energy that’s so low that compared to the other energies you’re calculating with it might as well be zero. This is the “low energy limit” I mentioned in the opening sentence.

Low energy physics: Not “high energy physics”. Low energy physics covers everything from absolute zero up to atoms. Once you get up to high enough energy to break up the nucleus of an atom, you enter…

High energy physics: Also known as “particle physics”, high energy physics refers to the study of the subatomic realm, which also includes objects which aren’t technically particles like strings and “branes”. If you exclude nuclear physics itself, high energy physics generally refers to energies of a mega-electron-volt and up. For comparison, the electrons in atoms are bound by energies of around an electron-volt, which is the characteristic energy of chemistry, so high energy physics is at least a million times more energetic. That said, high energy physicists are often interested in low energy consequences of their theories, including all the way down to the IR limit. Interestingly, by this point we’ve already passed both infrared light (from a thousandth of an electron-volt to a single electron volt) and ultraviolet light (several electron-volts to a hundred or so). Compared to UV light, mega-electron volt scale physics is quite high energy.

The TeV scale: If you’re operating a collider though, mega-electron-volts (or MeV) are low-energy physics. Often, calculations for colliders will assume that quarks, whose masses are around the MeV scale, actually have no mass at all! Instead, high energy for particle colliders means giga (billion) or tera (trillion) electron volt processes. The LHC, for example, operates at around 7 TeV now, with 14 TeV planned. This is the range of scales where many had hoped to see supersymmetry, but as time has gone on results have pushed speculation up to higher and higher energies. Of course, these are all still low energy from the perspective of…

The string scale: Strings are flexible, but under enormous tension that keeps them very very short. Typically, strings are posited to be of length close to the Planck length, the characteristic length at which quantum effects become relevant for gravity. This enormously small length corresponds to the enormously large Planck energy, which is on the order of 10²⁸ electron-volts. That’s about ten to the sixteen times the energies of the particles at the LHC, or ten to the twenty-two times the MeV scale that I called “high energy” earlier. For comparison, there are about ten to the twenty-two atoms in a milliliter of water. When extra dimensions in string theory are curled up, they’re usually curled up at this scale. This means that from a string theory perspective, going to the TeV scale means ignoring the high energy physics and focusing on low energy consequences, which is why even the highest mass supersymmetric particles are thought of as low energy physics when approached from string theory.

The UV limit: Much as the IR limit is that of infinitely low energy, the UV limit is the formal limit of infinitely high energy. Again, it’s not so much an actual destination, as a comparative point where the energy you’re considering is much higher than the energy of anything else in your calculation.
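
To put the scales in this list side by side, here's a quick numeric recap, rounded to the values quoted above:

```python
# The energy scales above, side by side, in electron-Volts (rounded values).
scales_ev = {
    "chemistry (atomic binding)": 1.0,
    "high energy physics (MeV and up)": 1.0e6,
    "LHC collisions (TeV scale)": 7.0e12,
    "Planck / string scale": 1.0e28,
}

for name, energy in scales_ev.items():
    print(f"{name:<34} {energy:>8.0e} eV"
          f"  (~{energy / scales_ev['chemistry (atomic binding)']:.0e} x chemistry)")
```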

These are the definitions of “high energy” and “low energy”, “UV” and “IR” that one encounters most often in theoretical particle physics and string theory. Other parts of physics have their own idea of what constitutes high or low energy, and I encourage you to ask people who study those parts of physics if you’re curious.

Model-Hypothesis-Experiment: Sure, Just Not All the Same Person!

At some point, we were all taught how science works.

The scientific method gets described differently in different contexts, but it goes something like this:

First, a scientist proposes a model, a potential explanation for how something out in the world works. They then create a hypothesis, predicting some unobserved behavior that their model implies should exist. Finally, they perform an experiment, testing the hypothesis in the real world. Depending on the results of the experiment, the model is either supported or rejected, and the scientist begins again.

It’s a handy picture. At the very least, it’s a good way to fill time in an introductory science course before teaching the actual science.

But science is a big area. And just as no two sports have the same league setup, no two areas of science use the same method. While the central principles behind the method still hold (the idea that predictions need to be made before experiments are performed, the idea that in order to test a model you need to know something it implies that other models don’t, the idea that the question of whether a model actually describes the real world should be answered by actual experiments…), the way they are applied varies depending on the science in question.

In particular, in high-energy particle physics, we do roughly follow the steps of the method: we propose models, we form hypotheses, and we test them out with experiments. We just don’t expect the same person to do each step!

In high energy physics, models are the domain of Theorists. Occasionally referred to as “pure theorists” to distinguish them from the next category, theorists manipulate theories (some intended to describe the real world, some not). “Manipulate” here can mean anything from modifying the principles of the theory to see what works, to attempting to use the theory to calculate some quantity or another, to proving that the theory has particular properties. There’s quite a lot to do, and most of it can happen without ever interacting with the other areas.

Hypotheses, meanwhile, are the province of Phenomenologists. While theorists often study theories that don’t describe the real world, phenomenologists focus on theories that can be tested. A phenomenologist’s job is to take a theory (either proposed by a theorist or another phenomenologist) and calculate its consequences for experiments. As new data comes in, phenomenologists work to revise their theories, computing just how plausible the old proposals are given the new information. While phenomenologists often work closely with those in the next category, they also do large amounts of work internally, honing calculation techniques and looking through models to find explanations for odd behavior in the data.

That data comes, ultimately, from Experimentalists. Experimentalists run the experiments. With experiments as large as the Large Hadron Collider, they don’t actually build the machines in question. Rather, experimentalists decide how the machines are to be run, then work to analyze the data that emerges. Data from a particle collider or a neutrino detector isn’t neatly labeled by particle. Rather, it involves a vast set of statistics, energies and charges observed in a variety of detectors. An experimentalist takes this data and figures out what particles the detectors actually observed, and from that what sorts of particles were likely produced. Like the other areas, much of this process is self-contained. Rather than being concerned with one theory or another, experimentalists will generally look for general signals that could support a variety of theories (for example, leptoquarks).

If experimentalists don’t build the colliders, who does? That’s actually the job of an entirely different class of scientists, the Accelerator Physicists. Accelerator physicists not only build particle accelerators, they study how to improve them, with research just as self-contained as the other groups.

So yes, we build models, form hypotheses, and construct and perform experiments to test them. And we’ve got very specialized, talented people who focus on each step. That means a lot of internal discussion, and many papers published that only belong to one step or another. For our subfield, it’s the best way we’ve found to get science done.

Nature Abhors a Constant

Why is a neutrino lighter than an electron? Why is the strong nuclear force so much stronger than the weak nuclear force, and why are both so much stronger than gravity? For that matter, why do any particles have the masses they do, or forces have the strengths they do?

To some people, these sorts of questions are meaningless. A scientist’s job is to find out the facts, to measure what the constants are. To ask why, though…why would you want to do that?

Maybe a sense of history?

See, physics has a history of taking what look like arbitrary facts (the orbits of the planets, the rate objects fall, the pattern of chemical elements) and finding out why they are that way. And there’s no reason not to expect this trend to continue.

The point can be made even more strongly: increasingly, it is becoming clear that nature abhors a constant.

To explain this, I first have to clarify what I mean by a constant. If you were asked to think of a constant, you’d probably think of the speed of light. The thing is, the speed of light is actually not the sort of constant I have in mind. The speed of light is three hundred million meters per second…but it’s also 671 million miles per hour, or one light year per year. Choose the right units, and the speed of light is just one. To go a bit further: the speed of light is merely an artifact of how we choose our units of distance and time, so it’s not a “real” constant at all!

So what would a “real” constant look like? Well, imagine if there were two fundamental speeds: a maximum, like the speed of light and a minimum, which nothing could go slower than. You could pick units so that one of the speeds was one, or so that the other was…but they couldn’t both be one at the same time. Their ratio stays the same, no matter what units you’re using. That’s the sign of a true constant. To say it another way: a “real” constant is dimensionless.
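
Here's the same point as a tiny sketch: the number attached to a single speed depends entirely on your units, while a ratio of two speeds does not (the “minimum speed” below is a made-up number, purely for illustration):

```python
# A single speed changes its numerical value with the units; a ratio does not.
M_PER_MILE = 1609.344
S_PER_HOUR = 3600.0

c_m_per_s = 299_792_458.0                    # speed of light in meters per second
c_mph = c_m_per_s / M_PER_MILE * S_PER_HOUR  # the same speed in miles per hour

v_min_m_per_s = 1.0e-3                       # hypothetical "minimum speed"
v_min_mph = v_min_m_per_s / M_PER_MILE * S_PER_HOUR

print(f"c = {c_m_per_s:.3e} m/s = {c_mph:.3e} mph")      # different numbers
print(f"ratio in m/s: {c_m_per_s / v_min_m_per_s:.3e}")  # same number either way
print(f"ratio in mph: {c_mph / v_min_mph:.3e}")
```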

It is these “real” constants that nature so abhors, because whenever such a “real” constant appears to exist, it is likely to be due to a scalar field.

To remind readers, a scalar field is a type of quantum field consisting of a number that can vary through space. Temperature is an iconic illustration of a scalar field: at any given point you can define temperature by a number, and that number changes as you move from place to place.

Now constants, being constant, are not known for changing from place to place. Just because we don’t see mass or charge being different in different places, though, doesn’t mean they aren’t scalar fields.

To illustrate, imagine that you live far in the past, far enough that no-one knows that air has weight. Through careful experimentation, though, you can observe air pressure: everything is pressed upon in all directions by some mysterious force. Even if you don’t have access to mountains and therefore can’t see that air pressure varies by height, maybe you have begun to guess that air pressure is related to the weight of the air. You have a possible explanation for your constant pressure, in terms of a scalar pressure field.

But how do you test your idea? Well, the big difference between a scalar and a constant is that a scalar can vary. Since there’s so much air above you, it’s hard to get air pressure to vary: you have to put enough energy into the air to make it happen. More specifically, you vibrate the air: you create sound waves! By measuring how fast the sound waves go, you can test out your proposed number for the mass of the air, and if everything lines up right, you have successfully replaced a mysterious constant with a logical explanation.
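
For the curious, here's roughly how the modern version of that measurement goes; the formula and the numbers are standard textbook values, well beyond what our imaginary ancient experimenter would have had, so treat this as a sketch:

```python
import math

# Speed of sound in a gas: v = sqrt(gamma * pressure / density).
# At fixed pressure, heavier (denser) air carries sound more slowly,
# so the speed of sound probes the mass of the air.
gamma = 1.4                # adiabatic index of air
pressure_pa = 101_325.0    # standard atmospheric pressure, pascals
density_kg_per_m3 = 1.225  # mass of a cubic meter of air at sea level

speed = math.sqrt(gamma * pressure_pa / density_kg_per_m3)
print(f"Predicted speed of sound: {speed:.0f} m/s")  # about 340 m/s
```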

This is almost exactly what happened with the Higgs. Scientists observed that particle masses seemed to be arbitrary numbers, and proposed a scalar field to explain them. (As a matter of fact, the masses involved actually cannot just be constants; the mathematics involved doesn’t allow it. They must be scalar fields). In order to test out the theory, we built the Large Hadron Collider, and used it to cause ripples in the seemingly constant masses, just like sound waves in air. In this case, those ripples were the Higgs particle, which served as evidence for the Higgs field just as sound waves serve as evidence for the mass of air.

And this sort of method keeps going. The Higgs explains mass in many cases, but it doesn’t explain the differences between particle masses, and it may be that new fields are needed to explain those. The same thing goes for the strengths of forces. Scalar fields are the most likely explanations for inflation, and in string theory scalars control the size and shape of the extra dimensions. So if you’ve got a mysterious constant, nature likely has a scalar field waiting in the wings to explain it.

What are colliders for, anyway?

Above is a thoroughly famous photo from ATLAS, one of six different particle detectors that sit around the ring of the Large Hadron Collider (or LHC for short). Forming a 27-kilometer ring spanning the border between France and Switzerland, the LHC is the biggest experiment of its kind, with the machine alone costing around 4 billion dollars.

But what is “its kind”? And why does it need to be so huge?

Aesthetics, clearly.

Explaining what a particle collider like the LHC does is actually fairly simple, if you’re prepared for some rather extreme mental images: using electric fields to accelerate them and incredibly strong magnets to keep them on track, the LHC brings protons up to 99.9999991% of the speed of light, then lets them smash into each other in the middle of sophisticated detectors designed to observe and track everything that comes out of the collision.
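
That 99.9999991% figure is easy to check from the proton's energy and rest mass; the sketch below assumes the LHC's design energy of 7 TeV per proton, which is the energy that number corresponds to:

```python
import math

# How fast is a 7 TeV proton?
proton_rest_energy_gev = 0.938272  # proton rest energy, about 938 MeV
beam_energy_gev = 7000.0           # 7 TeV per proton (LHC design energy)

gamma = beam_energy_gev / proton_rest_energy_gev  # Lorentz factor, E / mc^2
beta = math.sqrt(1.0 - 1.0 / gamma**2)            # speed as a fraction of c
print(f"gamma = {gamma:.0f}")
print(f"speed = {100 * beta:.7f}% of the speed of light")  # ~99.9999991%
```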

That’s all well and awesome, but why do the protons need to be moving so fast? Are they really really hard to crack open, or something?

This gets at a common misunderstanding of particle physics, which I’d like to correct here.

When most people imagine what a particle collider does, they picture it smashing particles together like hollow shells, revealing the smaller particles trapped inside. You may have even heard particle colliders referred to as “atom smashers”, and if you’re used to hearing about scientists “splitting the atom”, this all makes sense: with lots of energy, atoms can be broken apart into protons and neutrons, which is what they are made of. Protons are made of quarks, and quarks were discovered using particle colliders, so the story seems to check out, right?

The thing is, lots of things have been discovered using particle colliders that definitely aren’t part of protons and neutrons. Relatives of the electron like muons and tau particles, new varieties of neutrinos, heavier quarks…pretty much the only particles that are part of protons or neutrons are the three lightest quarks (and that’s leaving aside the fact that what is or is not “part of” a proton is a complicated question in its own right).

So where do the extra particles come from? How do you crash two protons together and get something out that wasn’t in either of them?

You…throw Einstein at them?

E equals m c squared. This equation, famous to the point of cliché, is often misinterpreted. One useful way to think about it is that it describes mass as a type of energy, and clarifies how to convert between units of mass and units of energy. Then E in the equation is merely the contribution to the energy of a particle from its mass, while the full energy also includes kinetic energy, the energy of motion.

Energy is conserved, that is, cannot be created or destroyed. Mass, on the other hand, being merely one type of energy, is not necessarily conserved. The reason why mass seems to be conserved in day to day life is because it takes a huge amount of energy to make any appreciable mass: the c in m c squared is the speed of light, after all. That’s why if you’ve got a radioactive atom it will decay into lighter elements, never heavier ones.

However, this changes with enough kinetic energy. If you get something like a proton accelerated to up near the speed of light, its kinetic energy will be comparable to (or even much higher than) its mass. With that much “spare” energy, energy can transform from one form into another: from kinetic energy into mass!

Of course, it’s not quite that simple. Energy isn’t the only thing that’s conserved: so is charge, and not just electric charge, but other sorts of charge too, like the colors of quarks.  All in all, the sorts of particles that are allowed to be created are governed by the ways particles can interact. So you need not just one high energy particle, but two high energy particles interacting in order to discover new particles.

And that, in essence, is what a particle collider is all about. By sending two particles hurtling towards each other at almost the speed of light you are allowing two high energy particles to interact. The bigger the machine, the faster those particles can go, and thus the more kinetic energy is free to transform into mass. Thus the more powerful you make your particle collider, the more likely you are to see rare, highly massive particles that if left alone in nature would transform unseen into less massive particles in order to release their copious energy. By producing these massive particles inside a particle collider we can make sure they are created inside of sophisticated particle detectors, letting us observe what they turn into with precision and extrapolate what the original particles were. That’s how we found the Higgs, and it’s how we’re trying to find superpartners. It’s one of the only ways we have to answer questions about the fundamental rules that govern the universe.

What’s so hard about Quantum Field Theory anyway?

As I have mentioned before, a theory in theoretical physics can be described as a list of quantum fields and the ways in which they interact. It turns out this is all you need to start drawing Feynman Diagrams.

Feynman Diagrams are tools physicists use to calculate the probability of things happening: radioactive particles decaying, protons colliding, electrons changing course in a magnetic field…basically anything small enough that quantum mechanics is important. Each Feynman Diagram depicts the paths that a group of particles take over time, interacting as they go. It’s important to remember, however, that Feynman Diagrams are not literally what’s going on: rather, they are tools for calculation.

To start making a Feynman Diagram, think about what you need present in order to start whatever process you’re investigating. For the examples given above, this means a radioactive particle, two protons, and an electron and a magnetic field, respectively. For each particle or field that you start out with, draw a line on the left of the diagram.

[Diagram: incoming particle lines drawn on the left]

If you’re making a Feynman Diagram you’re looking for a probability of some particular outcome. Draw lines corresponding to the particles and fields in that outcome on the right of the diagram. For example, if you were looking at a radioactive decay, you’d want the new particles the original particle decayed into. For an electron moving in a magnetic field, you want the electron’s new path.

[Diagram: outgoing particle lines drawn on the right]

Now come the interactions. Each way that the particles and fields can interact is a potential way that lines can come together. For example, electrons are affected by the photons that make up electric and magnetic fields. Specifically, an electron can absorb a photon, changing its path. This gives us an interaction: an electron and a photon go in, and an electron comes out.

[Diagram: an electron line and a photon line meeting at an interaction point]

You’ve got the basic building blocks: particles as lines, and interactions where the lines come together. Now, just link them all up! Something like this:

[Diagram: the simplest way to connect the incoming and outgoing lines]

Then again, you could also do it like this:

[Diagram: a more complicated way to connect the lines]

Or this:

[Diagram: a connection with two loops]

Or this:

[Diagram: a still more complicated connection]

You get the idea. To use these diagrams, a physicist assigns a number to each line and each interaction, depending on various traits of the particles involved including their energy and angles of travel. For each diagram, all these numbers are multiplied together. Then, because in quantum mechanics every possible event has to be included, you add up all the numbers from all of the diagrams. Every single one.

Not just the simple diagram on the top, but also the more complicated one below it, and the one below that, and every way you could possibly link up all of the particles going in and coming out, each more and more complicated. An infinite list of diagrams. Only by adding all of those diagrams together can a physicist find the true, complete probability of a quantum event.

Adding an infinite set of increasingly complicated diagrams is tricky. By tricky, I mean nearly absolutely impossible and so insane in principle that mathematicians aren’t even sure that it has any real meaning.

Because of this, everything that physicists calculate is an approximation. This approximation is possible because each interaction multiplies the total for a diagram by a “small” number, which gets smaller the weaker the force involved, from around 1/2 for the strong nuclear force to about 1/12 for electricity and magnetism. If you limit the number of points of interaction, you limit the number of possible diagrams. For our example, limiting things to one point of interaction gives only the first diagram. If you allow up to three points, you get the second diagram, and so on. Each time you add two more interactions, your diagram gets another loop, and the contribution to the total is smaller, so that even just four loops with a force as weak as electricity and magnetism gets you all but a billionth of the total, which is about as accurate as the experiments are anyway.
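
Taking the counting in that paragraph at face value (real coefficients differ diagram by diagram, so this is only a rough sketch), the arithmetic looks like this:

```python
# Rough counting: each extra loop multiplies a diagram's contribution by about
# (1/12)^2 for electricity and magnetism, as described above.
per_interaction = 1.0 / 12.0
per_loop = per_interaction**2  # two more interaction points per loop

included_loops = 4
first_omitted_term = per_loop**(included_loops + 1)

print(f"per-loop suppression: {per_loop:.1e}")            # about 7 x 10^-3
print(f"first omitted term:   {first_omitted_term:.1e}")  # about 2 x 10^-11
print(f"smaller than a billionth? {first_omitted_term < 1e-9}")
```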

What this means, though, is that we’re only at the very edge of a vast ocean of knowledge. We know the rules, the laws of physics if you will, but we can only tiptoe loop by loop towards the full formulas, sitting infinitely far away.

That, in essence, is what I work on. I look for patterns in the numbers, tricks in the calculation, ways to yank ourselves up by our bootstraps to higher and higher loops, and maybe, just maybe, for a shortcut up to infinity.

Because just because we know the rules, doesn’t mean we know how the game is played.

That’s Quantum Field Theory.

Supersymmetry, to the Rescue!

Part Three of a Series on N=4 Super Yang-Mills Theory

This is the third in a series of articles that will explain N=4 super Yang-Mills theory. In this series I take that phrase apart bit by bit, explaining as I go. Because I’m perverse and out to confuse you, I started with the last bit here, and now I’m working my way up.

N=4 Super Yang-Mills Theory

Ah, supersymmetry…trendy, sexy, mysterious…an excuse to put “super” in front of words…it’s a grand subject.

If I’m going to manage to explain supersymmetry at all, then I need to explain spin. Luckily, you don’t need to know much about spin for this to work. While I could start telling you about how particles literally spin around like tops despite having a radius of zero, and how quantum mechanics restricts how fast they spin to a few particular values measured by Planck’s constant…all you really need to know is the following:

Spin is a way to categorize particles.

In particular, there are:
Spin 1: Yang-Mills fields are spin 1, carrying forces with a direction and strength.
Spin ½: This spin covers pretty much all of the particles you encounter in everyday matter: electrons, neutrons, and protons, as well as more exotic stuff like neutrinos. If you want to make large-scale, interesting structures like rocks or lifeforms you pretty much need spin ½ particles.
Spin 0: A spin zero field (also called a scalar) is a number, like a temperature, that can vary from place to place. The Higgs field is an example of a spin zero field, where the number is part of the mass of other particles, and the Higgs boson is a ripple in that field, like a cold snap would be for temperature.

While they aren’t important for this post, you can also have higher numbers for spin: gravity has spin 2, for example.

With this definition in hand, we can start talking about supersymmetry, which is also pretty straightforward if you ignore all of the actual details.

Supersymmetry is a relationship (or symmetry) between particles with spin X, and particles with spin X-½.

For example, you could have a relationship between a spin 1 Yang-Mills field and a spin ½ matter particle, or between a spin ½ matter particle and a spin 0 scalar.

“Relationship” is a vague term here, much like it is in romance, and just like in romance you’d do well to clarify precisely what you mean by it. Here, it means something like the following: if you switch a particle for its “superpartner” (the other particle in the relationship) then the physics should remain the same. This has two important consequences: superpartners have the same mass as each other, and superpartners have the same interactions as each other.

The second consequence means that if a particle has electric charge -1, its superpartner also has electric charge -1. If you’ve got gluons, each with a color and an anti-color, then their superpartners will also have both a color and an anti-color. Astute readers will have remembered that quarks just have a color or an anti-color, and realized the implication: quarks cannot be the superpartners of gluons.
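
To make the pairing rules concrete, here's a small sketch using the standard names for the (still hypothetical) superpartners, names that don't appear in this post but are common in the literature:

```python
# Particles paired with their hypothetical superpartners: spins differ by 1/2,
# while electric charge (and color charge, not shown) stays the same.
pairs = [
    # (particle, spin, electric charge)   (superpartner, spin, electric charge)
    (("electron", 0.5, -1),               ("selectron", 0.0, -1)),
    (("photon",   1.0,  0),               ("photino",   0.5,  0)),
    (("gluon",    1.0,  0),               ("gluino",    0.5,  0)),
    (("up quark", 0.5,  2/3),             ("up squark", 0.0,  2/3)),
]

for (name, spin, charge), (sname, sspin, scharge) in pairs:
    assert abs(spin - sspin) == 0.5  # spins differ by exactly one half
    assert charge == scharge         # same charges, hence same interactions
    print(f"{name:>9} (spin {spin}) <-> {sname} (spin {sspin})")
```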

Other, even more well-informed readers will be wondering about the first consequence. Such readers might have heard that the LHC is looking for superpartners, or that superpartners could explain dark matter, and that in either case superpartners have very high mass. How can this be if superpartners have to have the same mass as their partners among the regular particles?

The important point to make here is that our real world is not supersymmetric, even if superpartners are discovered at the LHC, because supersymmetry is broken. In physics, when a symmetry of any sort is broken it’s like a broken mirror: it no longer is the same on each side, but the two sides are still related in a systematic way. Broken supersymmetry means that particles that would be superpartners can have different masses, but they will still have the same interactions.

When people look for supersymmetry at the LHC, they’re looking for new particles with the same interactions as the old particles, but generally much higher mass. When I talk about supersymmetry, though, I’m talking about unbroken supersymmetry: pairs of particles with the same interactions and the same mass. And N=4 super Yang-Mills is full of them.

How full? N=4 full. And that’s next week’s topic.