Tag Archives: particle physics

What’s an Amplitude? Just about everything.

I am an Amplitudeologist. In other words, I study scattering amplitudes. I’ve explained bits and pieces of what scattering amplitudes are in other posts, but I ought to give a short definition here so everyone’s on the same page:

A scattering amplitude is the formula used to calculate the probability that some collection of particles will “scatter”, emerging as some (possibly different) collection of particles.

Note that I’m using some weasel words here. The scattering amplitude is not a probability itself, but “the formula used to calculate the probability”. For those familiar with the mathematics of waves, the scattering amplitude gives the amplitude of a “probability wave” that must be squared to get the probability. (Those familiar with waves might also ask: “If this is the amplitude, what about the phase?” The truth is that because scattering amplitudes are calculated using complex numbers, what we call the “amplitude” also contains information about the wave’s “phase”. It may seem like an inconsistent way to name things from the perspective of a beginning student, but it is actually consistent with the terminology in a large chunk of physics.)
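For readers who like to see this concretely, here is a toy sketch in Python. The numbers are invented, not a real physics calculation; the point is only that an amplitude is a complex number whose squared magnitude is the probability, while its phase carries extra wave-like information the probability alone doesn't show.

```python
import cmath

# A made-up complex amplitude with magnitude 0.6 and phase 0.8 radians
amplitude = 0.6 * cmath.exp(1j * 0.8)

# Squaring the magnitude gives the probability: only the size matters
probability = abs(amplitude) ** 2   # about 0.36

# The phase is still there, invisible in the probability alone
phase = cmath.phase(amplitude)      # about 0.8

print(probability, phase)
```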

In some of the simplest scattering amplitudes particles literally “scatter”, with two particles “colliding” and emerging traveling in different directions.

A scattering amplitude can also describe a more complicated situation, though. At particle colliders like the Large Hadron Collider, two particles (a pair of protons for the LHC) are accelerated fast enough that when they collide they release a whole slew of new particles. Since it still fits the “some particles go in, some particles go out” template, this is still described by a scattering amplitude.

It goes even further than that, though, because “some particles” could also just be “one particle”. If you’re dealing with something unstable (the particle equivalent of radioactive, essentially) then one particle can decay into two or more particles. There’s a whole slew of questions that require that sort of calculation. For example, if unstable particles were produced in the early universe, how many of them would be left around today? If dark matter is unstable (and some possible candidates are), when it decays it might release particles we could detect. In general, this sort of scattering amplitude is often of interest to astrophysicists when they happen to get involved in particle physics.

You can even use scattering amplitudes to describe situations that, at first glance, don’t sound like collisions of particles at all. If you want to find the effect of a magnetic field on an electron to high accuracy, the calculation also involves a scattering amplitude. A magnetic field can be thought of in terms of photons, particles of light, because light is a vibration in the electro-magnetic field. This means that the effect of a magnetic field on an electron can be calculated by “scattering” an electron and a photon.

4gravanom

If this looks familiar, check the handbook section.

In fact, doing the calculation in this way leads to what is possibly the most accurately predicted number in all of science.

Scattering amplitudes show up all over the place, from particle physics at the Large Hadron Collider to astrophysics to delicate experiments on electrons in magnetic fields. That said, there are plenty of things people calculate in theoretical physics that don’t use scattering amplitudes, either because they involve questions that are difficult to answer from the scattering amplitude point of view, or because they invoke different formulas altogether. Still, scattering amplitudes are central to the work of a large number of physicists. They really do cover just about everything.

“China” plans super collider

When I saw the headline, I was excited.

“China plans super collider” says Nature News.

There’s been a lot of worry about what may happen if the Large Hadron Collider finishes its run without discovering anything truly new. If that happens, finding new particles might require a much bigger machine…and since even that machine has no guarantee of finding anything at all, world governments may be understandably reluctant to fund it.

As such, several prominent people in the physics community have put their hopes on China. The country’s somewhat autocratic nature means that getting funding for a collider is a matter of convincing a few powerful people, not a whole fractious gaggle of legislators. It’s a cynical choice, but if it keeps the field alive so be it.

If China was planning a super collider, then, that would be great news!

Too bad it’s not.

Buried eight paragraphs into Nature’s article, we find the following:

The Chinese government is yet to agree on any funding, but growing economic confidence in the country has led its scientists to believe that the political climate is ripe, says Nick Walker, an accelerator physicist at DESY, Germany’s high-energy physics laboratory in Hamburg. Although some technical issues remain, such as keeping down the power demands of an energy-hungry ring, none are major, he adds.

The Chinese government is yet to agree on any funding. China, if by “China” you mean the Chinese government, is not planning a super collider.

So who is?

Someone must have drawn these diagrams, after all.

Reading the article, the most obvious answer is Beijing’s Institute of High Energy Physics (IHEP). While this is true, the article leaves out any mention of a more recently founded site, the Center for Future High Energy Physics (CFHEP).

This is a bit odd, given that CFHEP’s whole purpose is to compose a plan for the next generation of colliders, and persuade China’s government to implement it. They were founded, with heavy involvement from non-Chinese physicists including their director Nima Arkani-Hamed, with that express purpose in mind. And since several of the quotes in the article come from Yifang Wang, director of IHEP and member of the advisory board of CFHEP, it’s highly unlikely that this isn’t CFHEP’s plan.

So what’s going on here? On one level, it could be a problem on the journalists’ side. News editors love to rewrite headlines to be more misleading and click-bait-y, and claiming that China is definitely going to build a collider draws much more attention than pointing out the plans of a specialized think tank. I hope that it’s just something like that, and not the sort of casual racism that likes to think of China as a single united will. Similarly, I hope that the journalists involved just didn’t dig deep enough to hear about CFHEP, or left it out to simplify things, because there is a somewhat darker alternative.

CFHEP’s goal is to convince the Chinese government to build a collider, and what better way to do that than to present them with a fait accompli? If the public thinks that this is “China’s” plan, that wheels are already in motion, wouldn’t it benefit the Chinese government to play along? Throw in a few sweet words about the merits of international collaboration (a big part of the strategy of CFHEP is to bring international scientists to China to show the sort of community a collider could attract) and you’ve got a winning argument, or at least enough plausibility to get US and European funding agencies in a competitive mood.

This…is probably more cynical than what’s actually going on. For one, I don’t even know whether this sort of tactic would work.

Do these guys look like devious manipulators?

Indeed, it might just be a journalistic omission, part of a wider tendency of science journalists to focus on big projects and ignore the interesting part, the nitty-gritty things that people do to push them forward. It’s a shame, because people are what drive the news forward, and as long as science is viewed as something apart from real human beings people are going to continue to mistrust and misunderstand it.

Either way, one thing is clear. The public deserves to hear a lot more about CFHEP.

Look what I made!

In a few weeks, I’ll be giving a talk for Stony Brook’s Graduate Awards Colloquium, to an audience of social science grad students and their parents.

One of the most useful tools when talking to people in other fields is a shared image. You want something from your field that they’ve seen, that they’re used to, that they’ll recognize. Building off of that kind of thing can be a great way to communicate.

If there’s one particle physics image that lots and lots of people have seen, it’s the Standard Model. Generally, it’s organized into charts like this:

Standard_Model_of_Elementary_Particles

I thought that if people saw a chart like that, but for N=4 super Yang-Mills, it might make the theory seem a bit more familiar. N=4 super Yang-Mills has a particle much like the Standard Model’s gluon with spin 1, paired with four gluinos, particles that are sort of but not really like quarks with spin 1/2, and six scalars, particles whose closest analogue in the Standard Model is the Higgs with spin 0.

In N=4 super Yang-Mills, none of these particles have any mass, since if supersymmetry isn’t “broken” all particles have the same mass. So where mass is written in the Standard Model table, I can just put zero. The table I linked also gives the electric charge of each particle. That doesn’t really mean anything for N=4 super Yang-Mills. It isn’t a theory that tries to describe the real world, so there’s no direct equivalent to a real-world force like electromagnetism. Since everything in the theory has to have the same charge, again due to supersymmetry, I can just list all of their “electric charges” as zero.

Putting it all together, I get the diagram below. The theory has eleven particles in total, so it won’t fit into a nice neat square. Still, this should be more familiar than most of the ways I could present things.

N4SYMParticleContent

Particles are not Species

It has been estimated that there are 7.5 million undiscovered species of animals, plants and fungi. Most of these species are insects. If someone wanted billions of dollars to search the Amazon rainforest with the goal of cataloging every species of insect, you’d want them to have a pretty good reason. Maybe they are searching for genes that could cure diseases, or trying to understand why an ecosystem is dying.

The primary goal of the Large Hadron Collider is to search for new subatomic particles. If we’re spending billions searching for these things, they must have some use, right? After all, it’s all well and good knowing about a bunch of different particles, but there must be a whole lot of sorts of particles out there, at least if you judge by science fiction (these two are also relevant). Surely we could just focus on finding the useful ones, and ignore the rest?

The thing is, particle physics isn’t like that. Particles aren’t like insects, you don’t find rare new types scattered in out-of-the-way locations. That’s because each type of particle isn’t like a species of animal. Instead, each particle is a fundamental law of nature.

Move over Linnaeus.

It wasn’t always like this. In the late 50’s and early 60’s, particle accelerators were producing a zoo of new particles with no clear rhyme or reason, and it looked like they would just keep producing more. That impression changed when Murray Gell-Mann proposed his Eightfold Way, which led to the development of the quark model. He explained the mess of new particles in terms of a few fundamental particles, the quarks, which made up the more complicated particles that were being discovered.

Nowadays, the particles that we’re trying to discover aren’t, for the most part, the zoo of particles of yesteryear. Instead, we’re looking for new fundamental particles.

What makes a particle fundamental?

The new particles of the early 60’s were a direct consequence of the existence of quarks. Once you understood how quarks worked, you could calculate the properties of all of the new particles, and even predict ones that hadn’t been found yet.

By contrast, fundamental particles aren’t based on any other particles, and you can’t predict everything about them. When we discover a new fundamental particle like the Higgs boson, we’re discovering a new, independent law of nature. Each fundamental particle is a law that states, across all of space and time, “if this happens, make this particle”. It’s a law that holds true always and everywhere, regardless of how often the particle is actually produced.

Think about the laws of physics like the cockpit of a plane. In front of the pilot is a whole mess of controls, dials and switches and buttons. Some of those controls are used every flight, some much more rarely. There are probably buttons on that plane that have never been used. But if a single button is out of order, the plane can’t take off.

Each fundamental particle is like a button on that plane. Some turn “on” all the time, while some only turn “on” in special circumstances. But each button is there all the same, and if you’re missing one, your theory is incomplete. It may agree with experiments now, but eventually you’re going to run into problems of one sort or another that make your theory inconsistent.

The point of discovering new particles isn’t just to find the one that will give us time travel or let us blow up Vulcan. Technological applications would be nice, but the real point is deeper: we want to know how reality works, and for every new fundamental particle we discover, we’ve found out a fact that’s true about the whole universe.

The Four Ways Physicists Name Things

If you’re a biologist and you discover a new animal, you’ve always got Latin to fall back on. If you’re an astronomer, you can describe what you see. But if you’re a physicist, your only option appears to involve falling back on one of a few terrible habits.

The most reasonable option is just to name it after a person. Yang-Mills and the Higgs Boson may sound silly at first, but once you know the stories of C. N. Yang, Robert Mills, Peter Higgs and Satyendra Nath Bose you start appreciating what the names mean. While this is usually the most elegant option, the increasingly collaborative nature of physics means that many things have to be named with a series of initials, like ABJM, BCJ and KKLT.

A bit worse is the tendency to just give it the laziest name possible. What do you call the particles that “glue” protons and neutrons together? Why gluons, of course, yuk yuk yuk!

This is particularly common when it comes to supersymmetry, where putting the word “super” in front of something almost always works. If that fails, it’s time to go for more specific conventions: to find the partner of an existing particle, if the new particle is a boson, just add “s-” (for “super”, or apparently “scalar”) to the name. This creates perfectly respectable names like stau, sneutrino, and selectron. If the new particle is a fermion, instead you add “-ino” to the end, getting something like a gluino if you start with a gluon. If you’ve heard of neutrinos, you may know that “neutrino” means “little neutral one”. You might perfectly rationally expect that “gluino” means “little gluon”, if you had any belief that physicists name things logically. We don’t. A gluino is called a gluino because it’s a fermion, and neutrinos are fermions, and the physicists who named it were too lazy to check what “neutrino” actually means.
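The convention is regular enough to put in code. Here is a tongue-in-cheek sketch; the rule for trimming a trailing “-on” is my own guess at the pattern, not an official prescription:

```python
def superpartner_name(particle: str, partner_is_boson: bool) -> str:
    """Name a superpartner: "s-" in front for bosonic partners,
    "-ino" on the end for fermionic partners."""
    if partner_is_boson:
        return "s" + particle                     # tau -> stau
    # Guessing the pattern: drop a trailing "on" before adding "-ino"
    stem = particle[:-2] if particle.endswith("on") else particle
    return stem + "ino"                           # gluon -> gluino

print(superpartner_name("electron", True))   # selectron
print(superpartner_name("neutrino", True))   # sneutrino
print(superpartner_name("gluon", False))     # gluino
print(superpartner_name("higgs", False))     # higgsino
```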

Pictured: the superpartner of Nidoran?

Worse still are names that are obscure references and bad jokes. These are mercifully rare, and at least memorable when they occur. In quantum mechanics, you write down probabilities using brackets of two quantum states, \langle a | b\rangle. What if you need to separate the two states, \langle a| and |b\rangle? Then you’ve got a “bra” and a “ket”!

Or have you heard the story of how quarks were named? Quarks, for those of you unfamiliar with them, are found in protons and neutrons in groups of three. Murray Gell-Mann, one of the two people who first proposed the existence of quarks, got their name from Finnegans Wake, a novel by James Joyce, which at one point calls for “Three quarks for Muster Mark!” While this may at first sound like a heartwarming tale of respect for the literary classics, it should be kept in mind that a) Finnegans Wake is a novel composed almost entirely of gibberish, read almost exclusively by people who pretend to understand it to seem intelligent and b) this isn’t exactly the most important or memorable line in the book. So Gell-Mann wasn’t so much paying homage to a timeless work of literature as he was referencing the most mind-numbingly obscure piece of nerd trivia before the invention of Mara Jade. Luckily these days we have better ways to remember the name.

Albeit wrinklier ways.

The final, worst category, though, doesn’t even have good stories going for it. These are the names that tell you absolutely nothing about the thing they are naming.

Probably the worst examples of this from my experience are the a-theorem and the c-theorem. In both cases, a theory happened to have a parameter in it labeled by a letter. When a theorem was proven about that parameter, rather than giving it a name that told you anything at all about what it was, people just called it by the name of the parameter. Mathematics is full of names like this too. Without checking Wikipedia, what’s the difference between a set, a group, and a category? What the heck is a scheme?

If you ever have to name something, be safe and name it after a person. If you don’t, just try to avoid falling into these bad habits of physics naming.

A Wild Infinity Appears! Or, Renormalization

Back when Numberphile’s silly video about the zeta function came up, I wrote a post explaining the process of regularization, where physicists take an incorrect infinite result and patch it over to get something finite. At the end of that post I mentioned a particular variant of regularization, called renormalization, which was especially important in quantum field theory.

Renormalization has to do with how we do calculations and make predictions in particle physics. If you haven’t read my post “What’s so hard about Quantum Field Theory anyway?” you should read it before trying to tackle this one. The important concepts there are that probabilities in particle physics are calculated using Feynman Diagrams, that those diagrams consist of lines representing particles and points representing the ways they interact, that each line and point in the diagram gives a number that must be plugged in to the calculation, and that to do the full calculation you have to add up all the possible diagrams you can draw.

Let’s say you’re interested in finding out the mass of a particle. How about the Higgs?

You can’t weigh it, or otherwise see how gravity affects it: it’s much too light, and decays into other particles much too fast. Luckily, there is another way. As I mentioned in this post, a particle’s mass and its kinetic energy (energy of motion) both contribute to its total energy, which in turn affects what particles it can turn into if it decays. So if you want to find a particle’s mass, you need the relationship between its motion and its energy.
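This relationship can be shown with a small sketch (units where the speed of light is 1, energies in GeV, and the numbers invented for illustration): energy and momentum are conserved, and a particle's mass squared is its total energy squared minus its total momentum squared, so summing over the decay products recovers the parent's mass.

```python
import math

# Two back-to-back decay products, each carrying 62.5 GeV of energy
E1, p1 = 62.5, (62.5, 0.0, 0.0)
E2, p2 = 62.5, (-62.5, 0.0, 0.0)

# Conservation: add up the energies and the momentum components
E_total = E1 + E2
p_total = tuple(a + b for a, b in zip(p1, p2))

# Invariant mass: m^2 = E^2 - |p|^2 (with c = 1)
mass = math.sqrt(E_total**2 - sum(x * x for x in p_total))

print(mass)  # 125.0, roughly the measured Higgs mass in GeV
```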

Suppose we’ve got a Higgs particle moving along. We know it was created out of some collision, and we know what it decays into at the end. With that, we can figure out its mass.

higgstree

There’s a problem here, though: we only know what happens at the beginning and the end of this diagram. We can’t be certain what happens in the middle. That means we need to add in all of the other diagrams, every possible diagram with that beginning and that end.

Just to look at one example, suppose the Higgs particle splits into a quark and an anti-quark (the antimatter version of the quark). If they come back together later into a Higgs, the process would look the same from the outside. Here’s the diagram for it:

higgsloop

When we’re “measuring the Higgs mass”, what we’re actually measuring is the sum of every single diagram that begins with the creation of a Higgs and ends with it decaying.

Surprisingly, that’s not the problem!

The problem comes when you try to calculate the number that comes out of that diagram, when the Higgs splits into a quark-antiquark pair. According to the rules of quantum field theory, those quarks don’t have to obey the normal relationship between total energy, kinetic energy, and mass. They can have any kinetic energy at all, from zero all the way up to infinity. And because it’s quantum field theory, you have to add up all of those possible kinetic energies, all the way up. In this case, the diagram actually gives you infinity.

(Note that not every diagram with unlimited kinetic energy is going to be infinite. The first time theorists calculated infinite diagrams, they were surprised.

For those of you who know calculus, the problem here comes when you integrate over the momentum. The two quark lines each give a factor of one over the momentum, and you then integrate over four dimensions of momentum, three of space plus one of time, which gives an infinite result. If you had different particles arranged in a different way, you might divide by more factors of momentum and get a finite value.)
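A one-dimensional cartoon of this kind of divergence (purely illustrative; real loop integrals are four-dimensional and the powers of momentum depend on the diagram): integrate one over the momentum up to some cutoff, and the answer grows without bound as the cutoff is raised.

```python
import math

def loop_contribution(cutoff: float, low: float = 1.0) -> float:
    # The integral of dp/p from low to cutoff is log(cutoff / low):
    # finite for any finite cutoff, infinite in the limit
    return math.log(cutoff / low)

for cutoff in (1e3, 1e6, 1e9):
    print(cutoff, loop_contribution(cutoff))
# No matter how high the cutoff, raising it further adds more
```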

The modern understanding of infinite results like this is that they arise from our ignorance. The mass of the Higgs isn’t actually infinity, because we can’t just add up every kinetic energy up to infinity. Instead, at some point before we get to infinity “something else” happens.

We don’t know what that “something else” is. It might be supersymmetry, it might be something else altogether. Whatever it is, we don’t know enough about it now to include it in the calculations as anything more than a cutoff, a point beyond which “something” happens. A theory with a cutoff like this, one that is only “effective” below a certain energy, is called an Effective Field Theory.

While we don’t know what happens at higher energies, we still need a way to complete our calculations if we want to use them in the real world. That’s where renormalization comes in.

When we use renormalization, we bring in experimental observations. We know that, no matter what is contributing to the Higgs particle’s mass, what we observe in the real world is finite. “Something” must be canceling the divergence, so we simply assume that “something” does, and that the final result agrees with the experiment!

“Something”

In order to do this, we accepted the experimental result for the mass of the Higgs. That means that we’ve lost any ability to predict the mass from our theory. This is a general rule for renormalization: we trade ignorance (of the “something” that happens at high energy) for a loss of predictability.

If we had to do this for every calculation, we couldn’t predict anything at all. Luckily, for many theories (called renormalizable theories) there are theorems proving that you only need to do this a few times to fix the entire theory. You give up the ability to predict the results of a few experiments, but you gain the ability to predict the rest.
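Here is a minimal cartoon of that trade, in an invented toy model (this is not the real Standard Model calculation): pretend the physical mass is a "bare" parameter plus a cutoff-dependent loop correction. We give up predicting the mass, taking it from experiment instead, and in exchange every choice of cutoff gives the same finite answer.

```python
import math

measured_mass = 125.0  # input from experiment, in GeV; not predicted

def bare_mass(cutoff: float) -> float:
    # Tune the bare parameter so the physical mass matches experiment
    return measured_mass - math.log(cutoff)

def physical_mass(cutoff: float) -> float:
    # Bare parameter plus toy "loop correction": the cutoff cancels
    return bare_mass(cutoff) + math.log(cutoff)

# Whatever cutoff we pick, the prediction is the same finite number
print(physical_mass(1e6), physical_mass(1e12))
```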

Luckily for us, the Standard Model is a renormalizable theory. Unfortunately, some important theories are not. In particular, quantum gravity is non-renormalizable. In order to fix the infinities in quantum gravity, you need to do the renormalization trick an infinite number of times, losing an infinite amount of predictability. Thus, while making a theory of quantum gravity is not difficult in principle, in practice the most obvious way to create the theory results in a “theory” that can never make any predictions.

One of the biggest virtues of string theory (some would say its greatest virtue) is that these infinities never appear. You never need to renormalize string theory in this way, which is what lets it work as a theory of quantum gravity. N=8 supergravity, the gravity cousin of N=4 super Yang-Mills, might also have this handy property, which is why many people are so eager to study it.

Amplitudes on Paperscape

Paperscape is a very cool tool developed by Damien George and Rob Knegjens. It analyzes papers from arXiv, the paper repository where almost all physics and math papers live these days. By putting papers that cite each other closer together and pushing papers that don’t cite each other further apart, Paperscape creates a map of all the papers on arXiv, arranged into “continents” based on the links between them. Papers with more citations are shown larger, newer papers are shown brighter, and subject categories are indicated by color-coding.

Here’s a zoomed-out view:

PaperscapeFullMap

Already you can see several distinct continents, corresponding to different arXiv categories like high energy theory and astrophysics.

If you want to find amplitudes on this map, just zoom in between the purple continent (high energy theory, much of which is string theory) and the green one (high energy lattice, nuclear experiment, high energy experiment, and high energy phenomenology, broadly speaking these are all particle physics).

PaperscapeAmplitudesMap

When you zoom in, Paperscape shows words that commonly appear in a given region of papers. Zoomed in this far, you can see amplitudes!

Amplitudeologists like me live on an island between particle physics and string theory. We’re connected on both sides by bridges of citations and shared terms, linking us to people who study quarks and gluons on one side and to people who study strings and geometry on the other. Think of us like Manhattan, an island between two shores, densely networked into its surroundings.

PaperscapeZoomedMap

Zoom in further, and you can see common keywords for individual papers. Exploring around here shows not only what is being talked about, but also what sorts of subfields it falls under. You can see by the color-coding that many papers in amplitudes are published as hep-th, or high energy theory, but there’s a fair number of papers from hep-ph (phenomenology) and from nuclear physics as well.

There are a lot of interesting things you can do with Paperscape. You can search for individuals, or look at individual papers, seeing who they cite and who cites them. Try it out!

High Energy? What does that mean?

I am a high energy physicist who uses the high energy and low energy limits of a theory that, while valid up to high energies, is also a low-energy description of what at high energies ends up being string theory (string theorists, of course, being high energy physicists as well).

If all of that makes no sense to you, congratulations, you’ve stumbled upon one of the worst-kept secrets of theoretical physics: we really could use a thesaurus.

“High energy” means different things in different parts of physics. In general, “high” versus “low” energy classifies what sort of physics you look at. “High” energy physics corresponds to the very small, while “low” energies encompass larger structures. Many people explain this via quantum mechanics: the uncertainty principle says that the more certain you are of a particle’s position, the less certain you can be of how fast it is going, which would imply that a particle that is highly restricted in location might have very high energy. You can also understand it without quantum mechanics, though: if two things are held close together, it generally has to be by a powerful force, so the bond between them will contain more energy. Another perspective is in terms of light. Physicists will occasionally use “IR”, or infrared, to mean “low energy” and “UV”, or ultraviolet, to mean “high energy”. Infrared light has long wavelengths and low energy photons, while ultraviolet light has short wavelengths and high energy photons, so the analogy is apt. However, the analogy only goes so far, since “UV physics” is often at energies much greater than those of UV light (and the same sort of situation applies for IR).
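Since the analogy leans on the link between wavelength and photon energy, here is the conversion it relies on: the photon energy is Planck's constant times the speed of light divided by the wavelength, and in convenient units that product is about 1240 electron-volt nanometers (a standard rounded figure).

```python
H_C_EV_NM = 1239.84  # h*c in electron-volt nanometers (rounded)

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a given wavelength in nanometers."""
    return H_C_EV_NM / wavelength_nm

print(photon_energy_ev(10_000))  # far infrared: about 0.12 eV
print(photon_energy_ev(500))     # visible light: about 2.5 eV
print(photon_energy_ev(100))     # ultraviolet: about 12.4 eV
```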

So what does “low energy” or “high energy” mean? Well…

The IR limit: Lowest of the “low energy” points, this refers to the limit of infinitely low energy. While you might compare it to “absolute zero”, really it just refers to energy that’s so low that compared to the other energies you’re calculating with it might as well be zero. This is the “low energy limit” I mentioned in the opening sentence.

Low energy physics: Not “high energy physics”. Low energy physics covers everything from absolute zero up to atoms. Once you get up to high enough energy to break up the nucleus of an atom, you enter…

High energy physics: Also known as “particle physics”, high energy physics refers to the study of the subatomic realm, which also includes objects which aren’t technically particles like strings and “branes”. If you exclude nuclear physics itself, high energy physics generally refers to energies of a mega-electron-volt and up. For comparison, the electrons in atoms are bound by energies of around an electron-volt, which is the characteristic energy of chemistry, so high energy physics is at least a million times more energetic. That said, high energy physicists are often interested in low energy consequences of their theories, including all the way down to the IR limit. Interestingly, by this point we’ve already passed both infrared light (from a thousandth of an electron-volt to a single electron volt) and ultraviolet light (several electron-volts to a hundred or so). Compared to UV light, mega-electron volt scale physics is quite high energy.

The TeV scale: If you’re operating a collider though, mega-electron-volts (or MeV) are low-energy physics. Often, calculations for colliders will assume that quarks, whose masses are around the MeV scale, actually have no mass at all! Instead, high energy for particle colliders means giga (billion) or tera (trillion) electron volt processes. The LHC, for example, operates at around 7 TeV now, with 14 TeV planned. This is the range of scales where many had hoped to see supersymmetry, but as time has gone on results have pushed speculation up to higher and higher energies. Of course, these are all still low energy from the perspective of…

The string scale: Strings are flexible, but under enormous tension that keeps them very very short. Typically, strings are posited to have lengths close to the Planck length, the characteristic length at which quantum effects become relevant for gravity. This enormously small length corresponds to the enormously large Planck energy, which is on the order of 10^28 electron-volts. That’s about ten to the sixteen times the energies of the particles at the LHC, or ten to the twenty-two times the MeV scale that I called “high energy” earlier. For comparison, there are about ten to the twenty-two atoms in a milliliter of water. When extra dimensions in string theory are curled up, they’re usually curled up at this scale. This means that from a string theory perspective, going to the TeV scale means ignoring the high energy physics and focusing on low energy consequences, which is why even the highest-mass supersymmetric particles are thought of as low energy physics when approached from string theory.
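To keep these scales straight, here is a quick order-of-magnitude table in code (the values are the rough figures quoted in this section, not precise measurements):

```python
import math

# Rough characteristic energies from this section, in electron-volts
scales_ev = {
    "chemistry (eV)": 1.0,
    "nuclear/particle physics (MeV)": 1e6,
    "LHC collisions (TeV)": 1e12,
    "Planck / string scale": 1e28,
}

planck = scales_ev["Planck / string scale"]
for name, energy in scales_ev.items():
    gap = round(math.log10(planck / energy))
    print(f"{name}: 10^{gap} below the Planck scale")
```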

The UV limit: Much as the IR limit is that of infinitely low energy, the UV limit is the formal limit of infinitely high energy. Again, it’s not so much an actual destination, as a comparative point where the energy you’re considering is much higher than the energy of anything else in your calculation.

These are the definitions of “high energy” and “low energy”, “UV” and “IR” that one encounters most often in theoretical particle physics and string theory. Other parts of physics have their own idea of what constitutes high or low energy, and I encourage you to ask people who study those parts of physics if you’re curious.

Planar vs. Non-Planar: A Colorful Story

Last week, I used two terms, planar theory and non-planar theory, without defining them. This week, I’m going to explain what they mean, and why they’re important.

Suppose you’re working with a Yang-Mills theory (not necessarily N=4 super Yang-Mills. To show you the difference between planar and non-planar, I’ll draw some two-loop Feynman diagrams for a process where two particles go in and two particles come out:

[Image: planarity1 — a planar two-loop diagram (left) and a non-planar two-loop diagram (right)]

The diagram on your left is planar, while the diagram on your right is non-planar. The diagram on the left can be written entirely on a flat page (or screen), with no tricks. By contrast, with the diagram on the right I have to cheat and make one of the particle lines jump over another one (that’s what the arrow is meant to show). Try as you might, you can’t twist that diagram so that it lies flat on a plane (at least not while keeping the same particles going in and out). That’s the difference between planar and non-planar.

Now, what does it mean for a theory to be planar or non-planar?

Let’s review some facts about Yang-Mills theories. (For a more detailed explanation, see here). In Yang-Mills there are a certain number of colors, where each one works a bit like a different kind of electric charge. The strong force, the force that holds protons and neutrons together, has three colors, usually referred to as red, blue, and green (this is of course just jargon, not the literal color of the particles).

Forces give rise to particles. In the case of the strong force, those particles are called gluons. Each gluon has a color and an anti-color, where you can think of the color like a positive charge and the anti-color like a negative charge. A given gluon might be red-antiblue, or green-antired, or even red-antired.

While the strong force has three colors, for this article it will be convenient to pretend that there are four: red, green, blue, and yellow.

An important principle of Yang-Mills theories is that color must be conserved. Since anti-colors are like negative colors, they can cancel normal colors out. So if you’ve got a red-antiblue gluon that collides with a blue-antigreen gluon, the blue and antiblue can cancel each other out, and you can end up with, for example, red-antiyellow and yellow-antigreen instead.
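As a toy illustration of that bookkeeping (my own representation, not standard physics notation): treat each gluon as a (color, anti-color) pair, count each color as +1 and each anti-color as −1, and check that the totals match before and after the collision:

```python
from collections import Counter

def net_color(gluons):
    """Net color charge: +1 for each color, -1 for each anti-color."""
    total = Counter()
    for color, anticolor in gluons:
        total[color] += 1
        total[anticolor] -= 1
    # Keep only the colors that don't cancel out completely.
    return {c: n for c, n in total.items() if n != 0}

# The process from the text:
# red-antiblue + blue-antigreen -> red-antiyellow + yellow-antigreen
incoming = [("red", "blue"), ("blue", "green")]
outgoing = [("red", "yellow"), ("yellow", "green")]

print(net_color(incoming))                         # {'red': 1, 'green': -1}
print(net_color(incoming) == net_color(outgoing))  # True: color is conserved
```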

Let’s consider that process in particular. There are lots of Feynman diagrams you can draw for it, let’s draw one of the simplest ones first:

[Image: planarity2 — the process drawn in terms of particles (left) and in terms of color lines (right)]

The diagram on the left just shows the process in terms of the particles involved: two gluons go in, two come out.

The other diagram takes into account conservation of colors. The red from the red-antiblue gluon becomes the red in the red-antiyellow gluon on the other side. The antiblue instead goes down and meets the blue from the blue-antigreen gluon, and both vanish in the middle, cancelling each other out. It’s as if the blue color entered the diagram, then turned around backwards and left it again. (If you’ve ever heard someone make the crazy-sounding claim that antimatter is normal matter going backwards in time, this is roughly what they mean.)

From this diagram, we can start observing a general principle: to make sure that color is conserved, each line must have only one color.

Now let’s try to apply this principle to the two-loop diagrams from the beginning of the article. If you draw double lines like we did in the last example, fill in the colors, and work things out, this is what you get:

[Image: planarity3 — the two-loop diagrams redrawn with color lines filled in]

What’s going on here?

In the diagram on the left, you see the same lines as the earlier diagram on the outside. On the inside, though, I’ve drawn two loops of color, purple and pink.

I drew the lines that way because, just based on the external lines, you don’t know what color they should be. They could be red, or yellow, or green, or blue. Nothing tells you which one is right, so all of them are possible.

Remember that for Feynman diagrams, we need to add up every diagram we can draw to get the final result. That means that there are actually four times four or sixteen copies of this diagram, each one with different colors in the loops.

Now let’s look at the other diagram. Like the first one, it’s a diagram with two loops. However, in this case, the inside of both loops is blue. If you like, you can try to trace out the lines in the loops. You’ll find that they’re all connected together. Because this diagram is non-planar, color conservation fixes the color in the loops.

So while there are sixteen copies of the first diagram, there is only one possible version of the second one. Since you add all the diagrams together, that means that the first diagram is sixteen times more important than the second diagram.

Now suppose we had more than four colors. Lots more.

More than that…

With ten colors, the planar diagrams are a hundred times more important. With a hundred colors, they are ten thousand times more important. Keep increasing the number of colors, and it gets to the point where you can honestly say that the non-planar diagrams don’t matter at all.
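The counting behind these numbers is simple enough to script (a sketch, where "free loops" means the color loops whose color isn't fixed by the external lines):

```python
def diagram_copies(n_colors, free_loops):
    # Each color loop not fixed by the external lines can carry any of the
    # n_colors colors, so each free loop contributes a factor of n_colors.
    return n_colors ** free_loops

for n in (4, 10, 100):
    planar = diagram_copies(n, free_loops=2)      # both loops free
    non_planar = diagram_copies(n, free_loops=0)  # colors fixed: one copy
    print(f"{n} colors: planar diagrams {planar // non_planar}x more important")
```

For four colors this reproduces the factor of sixteen from above; as the number of colors grows, the planar diagrams dominate completely.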

What, then, is a “planar theory”?

A planar theory is a theory with a very large (infinite) number of colors.

In a planar theory, you can ignore the non-planar diagrams and focus only on the planar ones.

Nima Arkani-Hamed’s Amplituhedron method applies to the planar version of N=4 super Yang-Mills. There is a lot of progress on the planar version of the theory, and it is because the restriction to planar diagrams makes things simpler.

However, sometimes you need to go beyond planar diagrams. There are relationships between planar and non-planar diagrams, based on the ways you can pair different colors together in the theory. Fully understanding these relationships is a powerful tool for understanding Yang-Mills theory, and, as it turns out, it's also the key to relating Yang-Mills theory to gravity! But that's a story for another post.

Model-Hypothesis-Experiment: Sure, Just Not All the Same Person!

At some point, we were all taught how science works.

The scientific method gets described differently in different contexts, but it goes something like this:

First, a scientist proposes a model, a potential explanation for how something out in the world works. They then create a hypothesis, predicting some unobserved behavior that their model implies should exist. Finally, they perform an experiment, testing the hypothesis in the real world. Depending on the results of the experiment, the model is either supported or rejected, and the scientist begins again.

It’s a handy picture. At the very least, it’s a good way to fill time in an introductory science course before teaching the actual science.

But science is a big area. And just as no two sports have the same league setup, no two areas of science use the same method. While the central principles behind the method still hold (the idea that predictions need to be made before experiments are performed, the idea that in order to test a model you need to know something it implies that other models don’t, the idea that the question of whether a model actually describes the real world should be answered by actual experiments…), the way they are applied varies depending on the science in question.

In particular, in high-energy particle physics, we do roughly follow the steps of the method: we propose models, we form hypotheses, and we test them out with experiments. We just don’t expect the same person to do each step!

In high energy physics, models are the domain of Theorists. Occasionally referred to as “pure theorists” to distinguish them from the next category, theorists manipulate theories (some intended to describe the real world, some not). “Manipulate” here can mean anything from modifying the principles of the theory to see what works, to attempting to use the theory to calculate some quantity or another, to proving that the theory has particular properties. There’s quite a lot to do, and most of it can happen without ever interacting with the other areas.

Hypotheses, meanwhile, are the province of Phenomenologists. While theorists often study theories that don’t describe the real world, phenomenologists focus on theories that can be tested. A phenomenologist’s job is to take a theory (either proposed by a theorist or another phenomenologist) and calculate its consequences for experiments. As new data comes in, phenomenologists work to revise their theories, computing just how plausible the old proposals are given the new information. While phenomenologists often work closely with those in the next category, they also do large amounts of work internally, honing calculation techniques and looking through models to find explanations for odd behavior in the data.

That data comes, ultimately, from Experimentalists. Experimentalists run the experiments. With experiments as large as the Large Hadron Collider, they don’t actually build the machines in question. Rather, experimentalists decide how the machines are to be run, then work to analyze the data that emerges. Data from a particle collider or a neutrino detector isn’t neatly labeled by particle. Rather, it involves a vast set of statistics, energies and charges observed in a variety of detectors. An experimentalist takes this data and figures out what particles the detectors actually observed, and from that what sorts of particles were likely produced. Like the other areas, much of this process is self-contained. Rather than being concerned with one theory or another, experimentalists will generally look for general signals that could support a variety of theories (for example, leptoquarks).

If experimentalists don’t build the colliders, who does? That’s actually the job of an entirely different class of scientists, the Accelerator Physicists. Accelerator physicists not only build particle accelerators, they study how to improve them, with research just as self-contained as the other groups.

So yes, we build models, form hypotheses, and construct and perform experiments to test them. And we’ve got very specialized, talented people who focus on each step. That means a lot of internal discussion, and many papers published that only belong to one step or another. For our subfield, it’s the best way we’ve found to get science done.