Author Archives: 4gravitons

Where are the Amplitudeologists?

As I’ve mentioned a couple of times before, I’m part of a sub-field of theoretical physics called Amplitudeology.

Amplitudeology in its modern incarnation is relatively new, and concentrated in a few specific centers. I thought it might be interesting to visualize which universities have amplitudeologists, so I took a look at the attendee lists of two recent conferences and put their affiliations into Google Maps. In an attempt to balance things, one of the conferences is in North America and the other is in Europe. Here is the result:

The West Coast of the US has two major centers, Stanford/SLAC and UCLA, focused around Lance Dixon and Zvi Bern respectively. The Northeast has a fair assortment, including places that have essentially everything like the Perimeter Institute and the Institute for Advanced Study and places known especially for their amplitudes work like Brown.

Europe has quite a large number of places. There are many universities in Europe with a long history of technical research into quantum field theory. When amplitudes began to become more prominent as its own sub-field, many of these places slotted right in. In particular, there are many locations in Germany, a decent number in the UK, a few in the vicinity of CERN, and a variety of places of some importance elsewhere.

Outside of Europe and North America, there’s much less amplitudes research going on. Physics in general is a very international enterprise, and many sub-fields have a lot of participation from researchers in China, India, Japan, and Korea. Amplitudes, for the most part, hasn’t caught on in those places yet.

This map is just the result of looking at two conferences. More data would reveal many places this setup left out, including a longstanding community in Russia. Still, it gives you a rough idea of where to find amplitudeologists, should you have need of one.

High Energy? What does that mean?

I am a high energy physicist who uses the high energy and low energy limits of a theory that, while valid up to high energies, is also a low-energy description of what at high energies ends up being string theory (string theorists, of course, being high energy physicists as well).

If all of that makes no sense to you, congratulations, you’ve stumbled upon one of the worst-kept secrets of theoretical physics: we really could use a thesaurus.

“High energy” means different things in different parts of physics. In general, “high” versus “low” energy classifies what sort of physics you look at: “high” energy physics corresponds to the very small, while “low” energies encompass larger structures.

Many people explain this via quantum mechanics: the uncertainty principle says that the more certain you are of a particle’s position, the less certain you can be of how fast it is going, which implies that a particle highly restricted in location might have very high energy. You can also understand it without quantum mechanics, though: if two things are held close together, it generally has to be by a powerful force, so the bond between them will contain more energy.

Another perspective is in terms of light. Physicists will occasionally use “IR”, or infrared, to mean “low energy” and “UV”, or ultraviolet, to mean “high energy”. Infrared light has long wavelengths and low-energy photons, while ultraviolet light has short wavelengths and high-energy photons, so the analogy is apt. However, the analogy only goes so far, since “UV physics” is often at energies much greater than those of UV light (and the same sort of situation applies for IR).
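To make the light analogy concrete, here is a minimal sketch (my own illustration) converting a photon’s wavelength to its energy via E = hc/λ, using the common shorthand hc ≈ 1240 eV·nm; the wavelengths picked are just representative, not precise band edges:

```python
# Photon energy from wavelength: E = h*c / wavelength.
# hc ~ 1239.84 eV*nm is the usual shorthand for Planck's constant
# times the speed of light, in convenient units.
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm):
    """Energy in electron-volts of a photon with the given wavelength in nanometers."""
    return HC_EV_NM / wavelength_nm

print(photon_energy_ev(1000))  # near-infrared: about 1.24 eV
print(photon_energy_ev(100))   # ultraviolet: about 12.4 eV
```

Longer wavelengths mean lower-energy photons, which is exactly the IR/UV shorthand at work.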

So what does “low energy” or “high energy” mean? Well…

The IR limit: Lowest of the “low energy” points, this refers to the limit of infinitely low energy. While you might compare it to “absolute zero”, really it just refers to energy that’s so low that compared to the other energies you’re calculating with it might as well be zero. This is the “low energy limit” I mentioned in the opening sentence.

Low energy physics: Not “high energy physics”. Low energy physics covers everything from absolute zero up to atoms. Once you get up to high enough energy to break up the nucleus of an atom, you enter…

High energy physics: Also known as “particle physics”, high energy physics refers to the study of the subatomic realm, which also includes objects that aren’t technically particles, like strings and “branes”. If you exclude nuclear physics itself, high energy physics generally refers to energies of a mega-electron-volt and up. For comparison, the electrons in atoms are bound by energies of around an electron-volt, which is the characteristic energy of chemistry, so high energy physics is at least a million times more energetic. That said, high energy physicists are often interested in low energy consequences of their theories, all the way down to the IR limit. Interestingly, by this point we’ve already passed both infrared light (from a thousandth of an electron-volt to a single electron-volt) and ultraviolet light (several electron-volts to a hundred or so). Compared to UV light, mega-electron-volt scale physics is quite high energy.

The TeV scale: If you’re operating a collider though, mega-electron-volts (or MeV) are low-energy physics. Often, calculations for colliders will assume that quarks, whose masses are around the MeV scale, actually have no mass at all! Instead, high energy for particle colliders means giga (billion) or tera (trillion) electron volt processes. The LHC, for example, operates at around 7 TeV now, with 14 TeV planned. This is the range of scales where many had hoped to see supersymmetry, but as time has gone on results have pushed speculation up to higher and higher energies. Of course, these are all still low energy from the perspective of…

The string scale: Strings are flexible, but under enormous tension that keeps them very, very short. Typically, strings are posited to have a length close to the Planck length, the characteristic length at which quantum effects become relevant for gravity. This enormously small length corresponds to the enormously large Planck energy, which is on the order of 10^28 electron-volts. That’s about 10^16 times the energies of the particles at the LHC, or 10^22 times the MeV scale that I called “high energy” earlier. For comparison, there are about 10^22 atoms in a milliliter of water. When extra dimensions in string theory are curled up, they’re usually curled up at this scale. This means that from a string theory perspective, going to the TeV scale means ignoring the high energy physics and focusing on low energy consequences, which is why even the highest-mass supersymmetric particles are thought of as low energy physics when approached from string theory.
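The ladder of scales in this post can be sketched numerically; the values below are round order-of-magnitude figures matching the comparisons in the text, not precise constants:

```python
import math

# Round order-of-magnitude energies from the text, in electron-volts.
scales_ev = {
    "chemistry (atomic binding)": 1e0,
    "start of high energy physics (MeV)": 1e6,
    "LHC collisions (TeV scale)": 1e12,
    "Planck / string scale": 1e28,
}

planck = scales_ev["Planck / string scale"]
for name, energy in scales_ev.items():
    # how many powers of ten each scale sits below the Planck scale
    exponent = round(math.log10(planck / energy))
    print(f"{name}: 10^{exponent} below the Planck scale")
```

Running this reproduces the factors above: the TeV scale sits 10^16 below the Planck scale, and the MeV scale sits 10^22 below it.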

The UV limit: Much as the IR limit is that of infinitely low energy, the UV limit is the formal limit of infinitely high energy. Again, it’s not so much an actual destination, as a comparative point where the energy you’re considering is much higher than the energy of anything else in your calculation.

These are the definitions of “high energy” and “low energy”, “UV” and “IR” that one encounters most often in theoretical particle physics and string theory. Other parts of physics have their own idea of what constitutes high or low energy, and I encourage you to ask people who study those parts of physics if you’re curious.

What’s up with arXiv?

First of all, I wanted to take a moment to say that this is the one-year anniversary of this blog. I’ve been posting every week, (almost always) on Friday, since I first was motivated to start blogging back in November 2012. It’s been a fun ride, through ups and downs, Ars Technica and Amplituhedra, and I hope it’s been fun for you, the reader, as well!

I’ve been giving links to arXiv since my very first post, but I haven’t gone into detail about what arXiv is. Since arXiv is a unique phenomenon, it could use a fuller description.


The word arXiv is pronounced much like the normal word “archive”: just think of the capital X as the Greek letter chi.

Much as the name would suggest, arXiv is an archive, specifically a preprint archive. A preprint is, in a sense, a paper before it becomes a paper; more precisely, it is a scientific paper that has not yet been published in a journal. In the past, such preprints would be kept by individual universities, or passed between interested individuals. Now arXiv, for an increasing range of fields (first physics and mathematics, now also computer science, quantitative biology, quantitative finance, and statistics), puts all of the preprints in one easily accessible, free place.

Different fields have different conventions when it comes to using arXiv. As a theoretical physicist, I can only really speak to how we use the system.

When theoretical physicists write a paper, it is often not immediately clear which journal we should submit it to. Different journals have different standards, and a paper that will gather more interest can be published in a more prestigious journal. In order to gauge how much interest a paper will raise, most theoretical physicists will put their papers up on arXiv as preprints first, letting them sit there for a few months to drum up attention and get feedback before formally submitting the paper to a journal.

The arXiv isn’t just for preprints, though. Once a paper is published in a journal, a copy of the paper remains on arXiv. Often, the copy on arXiv will be updated when the paper is updated, changed to the journal’s preferred format and labeled with the correct journal reference. So arXiv, ultimately, contains almost all of the papers published in theoretical physics in the last decade or two, all free to read.

But it’s not just papers! The digital format of arXiv makes it much easier to post other files alongside a paper, so that many people upload not just their results, but the computer code they used to generate them, or their raw data in long files. You can also post papers too long or unwieldy to publish in a journal, making arXiv an excellent dropping-off point for information in whatever format you think is best.

We stand at the edge of a new age of freely accessible science. As more and more disciplines start to use arXiv and similar services, we’ll have more flexibility to get more information to more people, while still keeping the advantage of peer review for publication in actual journals. It’s going to be very interesting to see where things go from here.

Braaains…Boltzmann Braaaains…

In honor of Halloween yesterday, let me tell you a scary physics story:

Sarah was an ordinary college student, in an ordinary dorm room, ordinary bean bag chairs strewn around an ordinary bed with ordinary pink sheets. If she concentrated, she could imagine her ordinary parents back home in ordinary Minnesota. In her ordinary physics textbook on her ordinary desk, ordinary laws of physics were written, described as the result of centuries of experimentation.

Unbeknownst to Sarah, the universe was much more chaotic and random than she realized, and also much more vast. Arbitrary collections of matter formed and dissipated, and over the universe’s long history, any imaginable combination might come to be.

Combinations like Sarah.

You see, Sarah too was a random combination, a chance arrangement of particles formed only a bare few moments ago. In truth, she had no ordinary parents, nor was she surrounded by an ordinary college, and the laws of physics that her textbook asserted were discovered through centuries of experimentation were just a moment’s distribution of ink on a page.

And as she got up to open the door into the vast dark of the outside, her world dissipated, and she ceased to exist.

That’s the life of a Boltzmann Brain. If a universe is random and old enough, it is inevitable that such minds exist. They might have memories of an extended, orderly world, but these would just be illusions, chance arrangements of their momentary neurons. What’s more, they may think they know the laws of physics through careful experiment and reasoning, but such knowledge would be illusory as well. And most frightening of all, if the universe is truly ancient and unimaginably vast, there would be many orders of magnitude more Boltzmann Brains than real humans…so many, that it would almost certainly be the case that you are in fact a Boltzmann Brain right now!

This is legitimately worrying to some physicists. The situation gets a bit more interesting when you remember that, as a Boltzmann Brain, anything you know about physics may well be a lie, since the history of research you think exists might not have. The problem is, if you manage to prove that you are probably a Boltzmann Brain, you had to use physics to do it. But your physics is probably wrong!

This, as Sean Carroll argues, is why the concept of a Boltzmann Brain is self-defeating. It is, in a way, a logical impossibility. And if a universe of Boltzmann Brains is logically impossible, then any physics that makes Boltzmann Brains more likely than normal humans must similarly be wrong. That’s Carroll’s argument, one that he uses to argue for specific physical conclusions about the real world, namely a proposal about the properties of the Higgs boson.

It might seem philosophically illegitimate to use such a paradox to argue about the real world. However, philosophers have a similar argument when it comes to such “reality is a lie” scenarios. In general, modern philosophers point out that any argument that proves that all of our knowledge is false or meaningless by necessity also proves itself false or meaningless. This is what allows analytical philosophy to carry forward and make progress, even if it can’t reject the idea that reality is an illusion by more objective means.

With that said, there seems to be a difference between simply rejecting arguments that “show” that the world is an illusion or that we are all Boltzmann Brains, and using those arguments to draw conclusions about other parts of the world. I would be curious if there are similar arguments to Carroll’s in philosophy, arguments that draw conclusions more specific than “we exist and can know things”. Any philosopher readers should feel welcome to chime in in the comments!

And for the rest of you, you probably aren’t a Boltzmann Brain. But if the outside world looks a little too dark tonight…

Visiting Brandeis

I gave a talk at Brandeis’s High Energy Theory Seminar this week. Brandeis is much easier to park at than Brown, but it’s proportionately easier to get lost in. While getting lost, I happened to run into this:

Many campuses have buildings that look like castles. Usen Castle is the only one I’ve seen that officially calls itself a castle. It’s a dorm, and the students there can honestly say that they live in a castle.

If I were a responsible, mature blogger, I’d tell you about the history of the place. I’d tell you who built it, and why they felt it was appropriate to make a literal castle the center of their college.

As I’m not a responsible, mature blogger, I’ll just leave you with this thought: they have a castle!

Blackboards, Again

Recently I had the opportunity to give a blackboard talk. I’ve talked before about the value of blackboards, how they facilitate collaboration and can even be used to get work done. What I didn’t feel the need to explain was their advantages when giving a talk.

No, the blackboard behind me isn’t my talk.

When I mentioned I was giving a blackboard talk, some of my friends in other fields were incredulous.

“Why aren’t you using PowerPoint? Do you people hate technology?”

So why do theorists (and mathematicians) do blackboard talks, when many other fields don’t?

Typically, a chemist can’t bring chemicals to a talk. A biologist can’t bring a tank of fruit flies or zebrafish, and a psychologist probably shouldn’t bring in a passel of college student test subjects. As a theorist though, our test subjects are equations, and we can absolutely bring them into the room.

In the ideal case, a talk by a theorist walks you through their calculation, reproducing it on the blackboard in enough detail that you can not only follow along, but potentially do the calculation yourself. While it’s possible to set up a calculation step by step in PowerPoint, you don’t have the same flexibility to erase and add to your equations, which becomes especially important if you need to clarify a point in response to a question.

Blackboards also often give you more space than a single slide. While your audience still only pays attention to a slide-sized area of the board at one time, you can put equations up in one area, move away, and then come back to them later. If you leave important equations up, people can remind themselves of them on their own time, without having to hold everybody up while you scroll back through the slides to the one they want to see.

Using a blackboard well is a fine art, and one I’m only beginning to learn. You have to know what to erase and what to leave up, when to pause to allow time to write or ask questions, and what to say while you’re erasing the board. You need to use all the quirks of the medium to your advantage, to show people not just what you did, but how and why you did it.

That’s why we use blackboards. And if you ask why we can’t do the same things with whiteboards, it’s because whiteboards are terrible. Everybody knows that.

What are Vacua? (A Point about the String Landscape)

A couple weeks back, there was a bit of a scuffle between Matt Strassler and Peter Woit on the subject of predictions in string theory (or more properly, the question of whether any predictions can be made at all). As a result, Strassler has begun a series on the subject of quantum field theory, string theory, and predictions.

Strassler hasn’t gotten to the topic of string vacua yet, but he’s probably going to cover the subject in a future post. While his take on the subject is likely to be more expansive and precise than mine, I think my perspective on the problem might still be of interest.

Let’s start with the basics: one of the problems often cited with string theory is the landscape problem, the idea that string theory has a metaphorical landscape of around 10^500 vacua.

What are vacua?

Vacua is the plural of vacuum.

Ok, and?

A vacuum is empty space.

That’s what you thought, right? That’s the normal meaning of vacuum. But if a vacuum is empty, how can there be more than one of them, let alone 10^500?

“Empty” is subjective.

Now we’re getting somewhere. The problem with defining a concept like “empty space” in string theory or field theory is that it’s unclear what precisely it should be empty of. Naively, such a space should be empty of “stuff”, or “matter”, but our naive notions of “matter” don’t apply to field theory or string theory. In fact, there is plenty of “stuff” that can be present in “empty” space.

Think about two pieces of construction paper. One is white, the other is yellow. Which is empty? Neither has anything drawn on it, so while one has a color and the other does not, both are empty.

“Empty space” doesn’t come in multiple colors like construction paper, but there are equivalent parameters that can vary. In quantum field theory, one option is for scalar fields to take different values. In string theory, different dimensions can be curled up in different ways (as an aside, when string theory leads to a quantum field theory often these different curling-up shapes correspond to different values for scalar fields, so the two ideas are related).

So if space can have “stuff” in it and still count as empty, are there any limits on what can be in it?

As it turns out, there is a quite straightforward limit. But to explain it, I need to talk a bit about why physicists care about vacua in the first place.

Why do physicists care about vacua?

In physics, there is a standard modus operandi for solving problems. If you’ve taken even a high school physics course, you’ve probably encountered it in some form. It’s not the only way to solve problems, but it’s one of the easiest. The idea, broadly, is the following:

First get the initial conditions, and then use the laws of physics to see what happens next.

In high school physics, this is how almost every problem works: your teacher tells you what the situation is, and you use what you know to figure out what happens next.

In quantum field theory, things are a bit more subtle, but there is a strong resemblance. You start with a default state, and then find the perturbations, or small changes, around that state.

In high school, your teacher told you what the initial conditions were. In quantum field theory, you need another source for the “default state”. Sometimes, you get that from observations of the real world. Sometimes, though, you want to make a prediction that goes beyond what your observations tell you. In that case, one trick often proves useful:

To find the default state, find which state is stable.

If your system starts out in a state that is unstable, it will change. It will keep changing until eventually it changes into a stable state, where it will stop changing. So if you’re looking for a default state, that state should be one in which the system is stable, where it won’t change.

(I’m oversimplifying things a bit here to make them easier to understand. In particular, I’m making it sound like these things change over time, which is a bit of a tricky subject when talking about different “default” states for the whole of space and time. There’s also a cool story connected to this about why tachyons don’t exist, which I’d love to go into for another post.)
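As a toy picture (my own illustration, not actual field theory): imagine a state rolling “downhill” in a double-well potential V(x) = (x² − 1)², which has an unstable point at x = 0 and two stable minima at x = ±1. The potential and step size here are made up for the example:

```python
# Toy illustration only: a state follows the downhill direction of
# V(x) = (x**2 - 1)**2 until it settles at a stable minimum.
def v_prime(x):
    # derivative of V(x) = (x**2 - 1)**2
    return 4 * x * (x**2 - 1)

def relax(x, steps=10000, rate=0.01):
    """Step downhill repeatedly until the state stops changing."""
    for _ in range(steps):
        x -= rate * v_prime(x)
    return x

print(relax(0.1))   # settles near the stable minimum x = +1
print(relax(-2.0))  # settles near the stable minimum x = -1
```

The point of the sketch: wherever you start (except exactly at the unstable point), the system ends up in a stable state, which is why stability is the criterion for a “default”.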

Since we know that the “default” state has to be stable, if there is only one stable state, we’ve found the default!

Because of this, we can lay down a somewhat better definition:

A vacuum is a stable state.

There’s more to the definition than this, but this should be enough to give you the feel for what’s going on. If we want to know the “default” state of the world, the state which everything else is just a small perturbation on top of, we need to find a vacuum. If there is only one plausible vacuum, then our work is done.

When there are many plausible vacua, though, we have a problem. When there are 10^500 vacua, we have a huge problem.

That, in essence, is why many people despair of string theory ever making any testable predictions. String theory has around 10^500 plausible vacua (for a given, technical, meaning of plausible).

It’s important to remember a few things here.

First, the reason we care about vacuum states is because we want a “default” to make predictions around. That is, in a sense, a technical problem, in that it is an artifact of our method. It’s a result of the fact that we are choosing a default state and perturbing around it, rather than proving things that don’t depend on our choice of default state. That said, this isn’t as useful an insight as it might appear, and as it turns out there is generally very little that can be predicted without choosing a vacuum.

Second, the reason that the large number of vacua is a problem is that if there was only one vacuum, we would know which state was the default state for our world. Instead, we need some other method to pick, out of the many possible vacua, which one to use to make predictions. That is, in a sense, a philosophical problem, in that it asks what seems ostensibly to be a philosophical question: what is the basic, default state of the universe?

This happens to be a slightly more useful insight than the first one, and it leads to a number of different approaches. The most intuitive solution is to just shrug and say that we will see which vacuum we’re in by observing the world around us. That’s a little glib, since many different vacua could lead to very similar observations. A better tactic might be to try to make predictions on general grounds by trying to see what the world we can already observe implies about which vacua are possible, but this is also quite controversial. And there are some people who try another approach, attempting to pick a vacuum not based on observations, but rather on statistics, choosing a vacuum that appears to be “typical” in some sense, or that satisfies anthropic constraints. All of these, again, are controversial, and I make no commentary here about which approaches are viable and which aren’t. It’s a complicated situation and there are a fair number of people working on it. Perhaps, in the end, string theory will be ruled un-testable. Perhaps the relevant solution is right under peoples’ noses. We just don’t know.

Brown, Blue, and Birds

I gave a talk at Brown this week, so this post may be shorter than usual. On the topic of Brown I don’t have much original to say: the people were friendly, the buildings were brownish-colored, and bringing a car there was definitely a bad idea. Don’t park at Brown. Not even then.

There’s a quote from Werner Heisenberg that has been making the rounds of the internet. It comes out of a 1976 article by Felix Bloch where he describes taking a walk with Heisenberg, when the discussion turned to the subject of space and time:

I had just read Weyl’s book Space, Time and Matter, and under its influence was proud to declare that space was simply the field of linear operations.

“Nonsense,” said Heisenberg, “space is blue and birds fly through it.”

Heisenberg’s point is that sometimes in physics you need to ask what your abstractions are really describing. You need to make sure that you haven’t stretched your definitions too badly away from their original inspiration.

When people first hear that string theory requires eleven dimensions, many wonder if this point applies. In mathematics, it’s well known that a problem can be described in many dimensions more than the physical dimensions of space. There’s a lovely example in the book Flatterland (a sequel to Flatland, a book which any math-y person should read at least once) of the dimensions of a bike. The bike’s motion through space gives three dimensions: up/down, backward/forward, and left/right. However, the bike can move in other ways: its gears can each be in a different position, as can its handlebars, as can the wheels…in the end, a bike can be envisioned as having many more “dimensions” than our normal three-dimensional space, each corresponding to some internal position.

Is string theory like this? No.

The first hint of the answer comes from something called F theory. String theory is part of something larger called M theory, and since M theory has eleven dimensions this is usually the number of dimensions given. But F theory contains string theory in a certain sense as well, only F theory contains twelve dimensions.

So why don’t string theorists say that the world has twelve dimensions?

As it turns out, the extra dimension added by F theory isn’t “really” a dimension. It’s much more like the mathematical dimensions of a bike’s gears and wheels than it is like the other eleven dimensions of M theory.

What’s the difference? What, according to a string theorist, is the definition of a dimension of space?

It’s simple: Space is “blue” (or colorless, I suppose). Birds (and particles, and strings, and membranes) fly in it.

We’re using the same age-old distinction that Heisenberg was, in a way. What is space? Space is just a place where things can move, in the same way they move in our usual three dimensions. Space is where you have momentum, where that momentum can change your position. Space is where forces act, the set of directions in which something can be pulled or pushed in a symmetric way. Space can’t be reduced, at least not without a lot of tricks: a bird flying isn’t just another description of a lizard crawling, not in the way a bicycle’s gears moving can be thought of as turning through our normal three dimensions without any extra ones. And while F theory doesn’t fit this criterion, M theory really does. The membranes of M theory fly around in eleven dimensional space-time, just like a bird moves through three space and one time dimensions.

Space for a string theorist isn’t any crazier or more abstract than it is for you. It’s just a place where things can move.

Planar vs. Non-Planar: A Colorful Story

Last week, I used two terms, planar theory and non-planar theory, without defining them. This week, I’m going to explain what they mean, and why they’re important.

Suppose you’re working with a Yang-Mills theory (not necessarily N=4 super Yang-Mills). To show you the difference between planar and non-planar, I’ll draw some two-loop Feynman diagrams for a process where two particles go in and two particles come out:

[Figure: a planar (left) and a non-planar (right) two-loop Feynman diagram]

The diagram on your left is planar, while the diagram on your right is non-planar. The diagram on the left can be written entirely on a flat page (or screen), with no tricks. By contrast, with the diagram on the right I have to cheat and make one of the particle lines jump over another one (that’s what the arrow is meant to show). Try as you might, you can’t twist that diagram so that it lies flat on a plane (at least not while keeping the same particles going in and out). That’s the difference between planar and non-planar.

Now, what does it mean for a theory to be planar or non-planar?

Let’s review some facts about Yang-Mills theories. (For a more detailed explanation, see here). In Yang-Mills there are a certain number of colors, where each one works a bit like a different kind of electric charge. The strong force, the force that holds protons and neutrons together, has three colors, usually referred to as red, blue, and green (this is of course just jargon, not the literal color of the particles).

Forces give rise to particles. In the case of the strong force, those particles are called gluons. Each gluon has a color and an anti-color, where you can think of the color like a positive charge and the anti-color like a negative charge. A given gluon might be red-antiblue, or green-antired, or even red-antired.

While the strong force has three colors, for this article it will be convenient to pretend that there are four: red, green, blue, and yellow.

An important principle of Yang-Mills theories is that color must be conserved. Since anti-colors are like negative colors, they can cancel normal colors out. So if you’ve got a red-antiblue gluon that collides with a blue-antigreen gluon, the blue and antiblue can cancel each other out, and you can end up with, for example, red-antiyellow and yellow-antigreen instead.
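The bookkeeping in that example can be sketched in code. This is just my own illustration of counting colors with anticolors as negative charges, not any standard physics library:

```python
from collections import Counter

def net_color(gluons):
    """Net color charge of a list of (color, anticolor) gluons,
    counting each color as +1 and each anticolor as -1."""
    count = Counter()
    for color, anticolor in gluons:
        count[color] += 1
        count[anticolor] -= 1
    # keep only colors with a nonzero net charge
    return {c: n for c, n in count.items() if n != 0}

# red-antiblue + blue-antigreen in; red-antiyellow + yellow-antigreen out
incoming = [("red", "blue"), ("blue", "green")]
outgoing = [("red", "yellow"), ("yellow", "green")]

print(net_color(incoming) == net_color(outgoing))  # True: color is conserved
```

The blue and antiblue cancel on the incoming side, just as the yellow and antiyellow do on the outgoing side, so the net charge (red, minus green) matches.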

Let’s consider that process in particular. There are lots of Feynman diagrams you can draw for it; let’s draw one of the simplest ones first:

[Figure: a simple gluon-scattering diagram, drawn with single particle lines (left) and with color-flow double lines (right)]

The diagram on the left just shows the process in terms of the particles involved: two gluons go in, two come out.

The other diagram takes into account conservation of colors. The red from the red-antiblue gluon becomes the red in the red-antiyellow gluon on the other side. The antiblue instead goes down and meets the blue from the blue-antigreen gluon, and both vanish in the middle, cancelling each other out. It’s as if the blue color entered the diagram, then turned around backwards and left it again. (If you’ve ever heard someone make the crazy-sounding claim that antimatter is normal matter going backwards in time, this is roughly what they mean.)

From this diagram, we can start observing a general principle: to make sure that color is conserved, each line must have only one color.

Now let’s try to apply this principle to the two-loop diagrams from the beginning of the article. If you draw double lines like we did in the last example, fill in the colors, and work things out, this is what you get:

[Image: planarity3]

What’s going on here?

In the diagram on the left, you see the same lines as the earlier diagram on the outside. On the inside, though, I’ve drawn two loops of color, purple and pink.

I drew the lines that way because, just based on the external lines, you don’t know what color they should be. They could be red, or yellow, or green, or blue. Nothing tells you which one is right, so all of them are possible.

Remember that for Feynman diagrams, we need to add up every diagram we can draw to get the final result. That means there are actually four times four, or sixteen, copies of this diagram, each one with different colors in the loops.

Now let’s look at the other diagram. Like the first one, it’s a diagram with two loops. However, in this case, the inside of both loops is blue. If you like, you can try to trace out the lines in the loops. You’ll find that they’re all connected together. Because this diagram is non-planar, color conservation fixes the color in the loops.

So while there are sixteen copies of the first diagram, there is only one possible version of the second one. Since you add all the diagrams together, that means that the first diagram is sixteen times more important than the second diagram.

Now suppose we had more than four colors. Lots more.

More than that…

With ten colors, the planar diagrams are a hundred times more important. With a hundred colors, they are ten thousand times more important. Keep increasing the number of colors, and it gets to the point where you can honestly say that the non-planar diagrams don’t matter at all.
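The counting behind this can be sketched in a few lines (a toy calculation, assuming one free color choice per planar loop and a single fixed color assignment for the non-planar diagram, as described above):

```python
def planar_copies(n_colors, n_loops=2):
    """Number of color assignments for a planar diagram whose
    n_loops internal loops can each take any of n_colors colors."""
    return n_colors ** n_loops

# The non-planar two-loop diagram has a single fixed color assignment,
# so the planar diagram outweighs it by planar_copies(n) to 1.
for n in (4, 10, 100, 1000):
    print(n, "colors:", planar_copies(n), "planar copies per non-planar diagram")
```

The ratio grows like the number of colors squared, which is why the non-planar diagrams stop mattering entirely in the infinite-color limit.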

What, then, is a “planar theory”?

A planar theory is a theory with a very large (infinite) number of colors.

In a planar theory, you can ignore the non-planar diagrams and focus only on the planar ones.

Nima Arkani-Hamed’s Amplituhedron method applies to the planar version of N=4 super Yang-Mills. Much of the progress on the theory has come in its planar version, precisely because the restriction to planar diagrams makes things simpler.

However, sometimes you need to go beyond planar diagrams. There are relationships between planar and non-planar diagrams, based on the ways that you can pair different colors together in the theory. Fully understanding these relationships is powerful for understanding Yang-Mills theory, and, as it turns out, it’s also the key to relating Yang-Mills theory to gravity! But that’s a story for another post.

The Amplituhedron and Other Excellently Silly Words

Nima Arkani-Hamed recently gave a talk at the Simons Center on the topic of what he and Jaroslav Trnka are calling the Amplituhedron.

There’s an article on it in Quanta Magazine. The article starts out a bit hype-y for my taste (too much language of importance, essentially), but it has several very solid descriptions of the history of the situation. I particularly like how the author concisely describes the Feynman diagram picture in the space of a single paragraph, and I would recommend reading that part even if you don’t have time for the whole article. In general, it’s worth reading to get a picture of what’s going on.

That said, I obviously think I can clear a few things up, otherwise I wouldn’t be writing about it, so here I go!

“The” Amplituhedron

Nima’s new construction, the Amplituhedron, encodes amplitudes (building blocks of probabilities in particle physics) in N=4 super Yang-Mills as the “area” of a multi-dimensional analog of a polyhedron (hence, Amplitu-hedron).

Now, I’m a big supporter of silly-sounding words with amplitu- at the beginning (amplitudeologist, anyone?), and this is no exception. Anyway, the word Amplitu-hedron isn’t what’s confusing people. What’s confusing people is the word the.

When the Quanta article says that Nima has found “the” Amplituhedron, it makes it sound like he has discovered one central formula that somehow contains the whole universe. If you read the comments, many readers went away with that impression.

In case you needed me to say it, that’s not what is going on. The problem is in the use of the word “the”.

Suppose it was 1886, and I told you that a fellow named Carl Benz had invented “the Automobile”, a marvelous machine that can get everyone to work on time (as well as become the dominant form of life on Long Island).

My use of “the” might make you imagine that Benz invented some single, giant machine that would roam across the country, picking people up and somehow transporting everyone to work. You’d be skeptical of this, of course, expecting that long queues to use this gigantic, wondrous machine would swiftly ruin any speed advantage it might possess…

The Automobile, here to take you to work.

Or, you could view “the” in another light, as indicating a type of thing.

Much like “the Automobile” is a concept, manifested in many different cars and trucks across the country, “the Amplituhedron” is a concept, manifested in many different amplituhedra, each corresponding to a particular calculation that we might attempt.

Advantages…

Each amplituhedron has to do with an amplitude involving a specific number of particles, with a particular number of internal loops. (The Quanta article has a pretty good explanation of loops; here’s mine if you’d rather read that.) Based on the problem you’re trying to solve, there is a set of rules that you use to construct the particular amplituhedron you need. The “area” of this amplituhedron (in quotation marks because I mean the area in an abstract, mathematical sense) is the amplitude for the process, which lets you calculate the probability that whatever particle physics situation you’re describing will happen.

Now, we already have many methods to calculate these probabilities. The amplituhedron’s advantage is that it makes these calculations much simpler. Nima claims that what was once quite a laborious and complicated four-loop calculation can now be done by hand using amplituhedra. I didn’t get a chance to ask whether the same efficiency improvement holds at six loops, but Nima’s description made it sound like it would at least speed things up.

[Edit: Some of my fellow amplitudeologists have reminded me of two things. First, that paper I linked above paved the way to more modern methods for calculating these things, which also let you do the four-loop calculation by hand. (You need only six or so diagrams.) Second, even back then the calculation wasn’t exactly “laborious”; there were some pretty slick tricks that sped things up. With that in mind, I’m not sure Nima’s method is faster per se. But it is a fast method that has the other advantages described below.]

The amplituhedron has another, more sociological advantage. By describing the amplitude in terms of a geometrical object rather than in terms of our usual terminology, we phrase things in a way that mathematicians are more likely to understand. By making things more accessible to mathematicians (and the more math-headed physicists), we invite them to help us solve our problems, so that together we can come up with more powerful methods of calculation.

Nima and the Quanta article both make a big deal about how the amplituhedron gets rid of locality and unitarity, two foundational principles of quantum field theory. I’m a bit more impressed by this than Woit is. The fine distinction that needs to be made here is that the amplituhedron isn’t simply “throwing out” locality and unitarity. Rather, it’s written in such a way that it doesn’t need locality and unitarity to function. In the end, the formulas it computes still obey both principles. Nima’s hope is that, now that we are able to write amplitudes without needing locality and unitarity, if we end up having to throw out either of those principles to make a new theory, we will be able to do so. That’s legitimately quite a handy advantage to have; it just doesn’t mean that locality and unitarity must be thrown out right now.

…and Disadvantages

It’s important to remember that this whole story is limited to N=4 super Yang-Mills. Nima doesn’t know how to apply it to other theories, and nobody else seems to have any good ideas either. In addition, this only applies to the planar part of the theory. I’m not going to explain what that term means here; for now just be aware that while there are tricks that let you “square” a calculation in super Yang-Mills to get a similar calculation in quantum gravity, those tricks rely on having non-planar data, or information beyond the planar part of the theory. So at this point, this doesn’t give us any new hints about quantum gravity. It’s conceivable that physicists will find ways around both of these limits, but for now this result, though impressive, is quite limited.

Nima hasn’t found some sort of singular “jewel at the heart of physics”. Rather, he’s found a very slick, very elegant, quite efficient way to make calculations within one particular theory. This is profound, because it expresses things in terms that mathematicians can address, and because it shows that we can write down formulas without relying on what are traditionally some of the most fundamental principles of quantum field theory. Only time will tell whether Nima or others can generalize this picture, taking it beyond planar N=4 super Yang-Mills and into the tougher theories that still await this sort of understanding.