Tag Archives: amplitudes

Caltech Amplitudes Workshop, and Valentines Poem 2014

This week’s post will be a short one. I’m at a small workshop for young amplitudes-folks at Caltech, so I’m somewhat busy.

(What we call a workshop is a small conference focused on fostering discussion and collaboration. While there are a few talks to give the workshop structure, most of the time is spent in more informal discussions between the participants.)

There have been a lot of great talks, and a lot of great opportunities to bond with fellow young amplitudeologists. Also, great workshop swag!

Yes, that is a Hot Wheels Mars Rover

Unrelatedly, to continue a tradition from last year, and since it’s Valentine’s Day, allow me to present a short physics-themed poem I wrote a long time ago, this one about the sometimes counter-intuitive laws of thermodynamics:

Thermodynamic Hypothesis

A cold object, like a hot one, must be insulated

Cut off from interaction

Immerse the subject in a bath of warmth

And I reach equilibrium

Update on the Amplituhedron

A while back I wrote a post on the Amplituhedron, a type of mathematical object found by Nima Arkani-Hamed and Jaroslav Trnka that can be used to do calculations of scattering amplitudes in planar N=4 super Yang-Mills theory. (Scattering amplitudes are formulas used to calculate probabilities in particle physics, from the probability that an unstable particle will decay to the probability that a new particle could be produced by a collider.) Since then, they have published two papers on the topic, the most recent of which came out the day before New Year’s Eve. These papers laid out the amplituhedron concept in some detail, and answered a few lingering questions. The latest paper focused on one particular formula, the probability that two particles bounce off each other. In discussing this case, the paper serves two purposes:

1. Demonstrating that Arkani-Hamed and Trnka did their homework.

2. Showing some advantages of the amplituhedron setup.

Let’s talk about them one at a time.

Doing their homework

There’s already a lot known about N=4 super Yang-Mills theory. In order to propose a new framework like the amplituhedron, Arkani-Hamed and Trnka need to show that the new framework can reproduce the old knowledge. Most of the paper is dedicated to doing just that. In several sections Arkani-Hamed and Trnka show that the amplituhedron reproduces known properties of the amplitude, like the behavior of its logarithm, its collinear limit (the situation when two momenta in the calculation become parallel), and, of course, unitarity.

What, you heard the amplituhedron “removes” unitarity? How did unitarity get back in here?

This is something that has confused several commenters, both here and on Ars Technica, so it bears some explanation.

Unitarity is the principle that enforces the laws of probability. In its simplest form, unitarity requires that all probabilities for all possible events add up to one. If this seems like a pretty basic and essential principle, it is! However, it and locality (the idea that there is no true “action at a distance”, that particles must meet to interact) can be problematic, causing paradoxes for some approaches to quantum gravity. Paradoxes like these inspired Arkani-Hamed to look for ways to calculate scattering amplitudes that don’t rely on locality and unitarity, and with the amplituhedron he succeeded.
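To make this concrete, here’s a quick toy sketch in Python (my own illustration, nothing to do with the amplituhedron itself): a rotation matrix plays the role of a unitary “S-matrix”, and unitarity guarantees that the probabilities of all possible outcomes add up to one.

```python
import math

# Toy two-state "S-matrix": a rotation, which is unitary (illustrative only)
theta = 0.3
S = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def outcome_probabilities(S, initial):
    # |amplitude|^2 for each possible final state
    amps = [sum(S[f][i] * initial[i] for i in range(len(initial)))
            for f in range(len(S))]
    return [abs(a) ** 2 for a in amps]

# Unitarity in its simplest form: the probabilities sum to one
probs = outcome_probabilities(S, [1.0, 0.0])
```

No matter what angle you pick, the two probabilities always total one: that’s the law of probability that unitarity enforces.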

However, just because the amplituhedron doesn’t rely on unitarity and locality, doesn’t mean it violates them. The amplituhedron, for all its novelty, still calculates quantities in N=4 super Yang-Mills. N=4 super Yang-Mills is well understood, it’s well-behaved and cuddly, and it obeys locality and unitarity.

This is why the amplituhedron is not nearly as exciting as a non-physicist might think. The amplituhedron, unlike most older methods, isn’t based on unitarity and locality. However, the final product still has to obey unitarity and locality, because it’s the same final product that others calculate through other means. So it’s not as if we’ve completely given up on basic principles of physics.

Not relying on unitarity and locality is valuable. For those who research scattering amplitudes, it has often been useful to try to “eliminate” one principle or another from our calculations. Twenty years ago, avoiding Feynman diagrams was the key to finding dramatic simplifications. Now, many different approaches try to sidestep different principles. (For example, while the amplituhedron calculates an integrand and leaves a final integral to be done, I’m working on approaches that never employ an integrand.)

If we can avoid relying on some “basic” principle, that’s usually good evidence that the principle might be a consequence of something even more basic. By showing how unitarity can arise from the amplituhedron, Arkani-Hamed and Trnka have shown that a seemingly basic principle can come out of a theory that doesn’t impose it.

Advantages of the Amplituhedron

Not all of the paper compares to old results and principles, though. A few sections instead investigate novel territory, and in doing so show some of the advantages and disadvantages of the amplituhedron.

Last time I wrote on this topic, I was unclear on whether the amplituhedron was more efficient than existing methods. At this point, it appears that it is not. While the formula that the amplituhedron computes has been found by other methods up to seven loops, the amplituhedron itself can only get up to three loops or so in practical cases. (Loops are a way that calculations are classified in particle physics. More loops means a more complex calculation, and a more precise final result.)

The amplituhedron’s primary advantage is not in efficiency, but rather in the fact that its mathematical setup makes it straightforward to derive interesting properties for any number of loops desired. As Trnka occasionally puts it, the central accomplishment of the amplituhedron is to find “the question to which the amplitude is the answer”. Phrasing this “question” mathematically lets one work in great generality, revealing several properties that should hold no matter how complex the rest of the calculation becomes. It also has another implication: if this mathematical question has a complete mathematical answer, that answer could calculate the amplitude for any number of loops. So while the amplituhedron is not more efficient than other methods now, it has the potential to be dramatically more efficient if it can be fully understood.

All that said, it’s important to remember that the amplituhedron is still limited in scope. Currently, it applies to a particular theory, one that doesn’t (and isn’t meant to) describe the real world. It’s still too early to tell whether similar concepts can be defined for more realistic theories. If they can, though, it won’t depend on supersymmetry or string theory. One of the most powerful techniques for making predictions for the Large Hadron Collider, the technique of generalized unitarity, was first applied to N=4 super Yang-Mills. While the amplituhedron is limited now, I would not be surprised if it (and its competitors) give rise to practical techniques ten or twenty years down the line. It’s happened before, after all.

The Amplitudes Revolution Will Not Be Televised (But It Will Be Streamed)

I’ve been at the Simons Center’s workshop on the Geometry and Physics of Scattering Amplitudes all week, so I don’t have time for a long post. There have been a lot of great talks from a lot of great amplitudes-folks (including one on Tuesday by Lance Dixon discussing this work, and one on the same day explaining the much-hyped amplituhedron). Curious folks can follow the conference link above to find videos and slides for each of the talks, arranged by the talk schedule.

I’ve made some great contacts, picked up a couple running jokes (check out Rutger Boels’s talk on Monday and Lance’s talk on Tuesday), heard the phrase “only seven loops” stated in relative seriousness, and heard the story of why the conference ended up choosing an artist’s conception of the amplituhedron for the workshop poster, which I can relate if folks are especially curious.

The Parke-Taylor Amplitudes: Why Quantum Field Theory Might Not Be So Hard, After All

If you’ve been following my blog for a while, you know that Quantum Field Theory is hard work. To calculate anything, you have to draw an ever-increasing number of diagrams, translate them into formulas involving the momentum and energy of your particles, and add all those formulas up to get your final result, the amplitude of the process you’re interested in.

As I said in that post, my area of research involves trying to find patterns in the results of these calculations, patterns that make doing the calculation simpler. With that in mind, you might wonder why we expect to find any patterns in the first place. If Quantum Field Theory is so complicated, what insurance do we have that it can be made simpler? Where does the motivation come from?

Our motivation comes from a series of discoveries that show that things really do simplify, often in unexpected ways. I won’t go through all of these discoveries here, but I want to tell you about one of the first discoveries that showed amplitudes researchers that they were on the right track.

Let’s try to calculate a comparatively simple process. Say that we’ve got two gluons (force carrying bosons for the strong force, an example of a Yang-Mills field). Suppose the two gluons collide, and some number of gluons emerge. It could be two again, or it could be three, or more.

For now, let’s just think about diagrams at tree level, that is, diagrams with no loops. The particles can travel from place to place in the diagram, but they can’t form closed loops on the inside.

Gluons have two types of interactions, places where particle lines can come together. You can either have three lines meeting at one point, or four.

If two gluons come in and two come out, we have four possible diagrams:

[Figure: 4ptMHV — the four tree-level diagrams for two gluons going to two gluons]

Note that while the last diagram looks like it has a loop in it (in the form of the triangle in the middle), actually that triangle just represents that two particles are passing each other without colliding, so that their lines cross.

The number of diagrams increases substantially as you increase the number of outgoing particles. With two particles going to three particles, you get fifteen diagrams. Here are three examples:

[Figure: 5ptMHV — three of the fifteen tree-level diagrams for two gluons going to three]

Since the number of diagrams just keeps increasing, you’d expect the final amplitude to become more and more complicated as well. However, Stephen Parke and Tomasz Taylor found in 1986 that for a particular arrangement of the spins of the particles (for technical people: this is the Maximally Helicity Violating configuration, or two particles with negative helicity and all the rest with positive helicity) the answer simplifies dramatically. In the sort of variables we use these days, the result can be expressed in an incredibly simple form:

\frac{\langle 1 | 2 \rangle^4}{ \langle 1 | 2 \rangle\langle 2 | 3 \rangle\langle 3 | 4 \rangle \ldots \langle n-1 | n \rangle\langle n | 1 \rangle}

Here the angle brackets represent momenta of the incoming (for 1 and 2) and outgoing (all the other numbers) particles, with n being the total number of particles (two going in, and however many going out). (Technically, these are spinor-helicity variables, and those interested in the technical details should check out chapter 3 of this or chapter 2 of this.)
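To see how compact this really is, here’s a small numerical sketch in Python (the setup and names are my own, purely for illustration): treat each particle’s spinor as a pair of complex numbers, define the angle bracket as their antisymmetric product, and the whole formula becomes a one-line ratio.

```python
import random

def angle(li, lj):
    # Antisymmetric spinor product <i j>; note <i j> = -<j i>
    return li[0] * lj[1] - li[1] * lj[0]

def parke_taylor(spinors):
    # <1 2>^4 / (<1 2><2 3>...<n-1 n><n 1>), the Parke-Taylor MHV form
    n = len(spinors)
    numerator = angle(spinors[0], spinors[1]) ** 4
    denominator = 1
    for i in range(n):
        denominator *= angle(spinors[i], spinors[(i + 1) % n])
    return numerator / denominator

# Random spinors standing in for the kinematics of a five-particle process
rng = random.Random(0)
spinors = [(complex(rng.uniform(-1, 1), rng.uniform(-1, 1)),
            complex(rng.uniform(-1, 1), rng.uniform(-1, 1)))
           for _ in range(5)]
amplitude = parke_taylor(spinors)
```

A handful of multiplications stand in for what would otherwise be a large pile of diagrams.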

Nowadays, we know why this amplitude looks so simple, in terms of something called BCFW recursion. At the time though, it was quite extraordinary.

This is the sort of simplification we keep running into when studying amplitudes. Almost always, it means that there is some deeper principle that we don’t yet understand, something that would let us do our calculations much faster and more efficiently. It indicates that Quantum Field Theory might not be so hard after all.

Where are the Amplitudeologists?

As I’ve mentioned a couple of times before, I’m part of a sub-field of theoretical physics called Amplitudeology.

Amplitudeology in its modern incarnation is relatively new, and concentrated in a few specific centers. I thought it might be interesting to visualize which universities have amplitudeologists, so I took a look at the attendee lists of two recent conferences and put their affiliations into Google Maps. In an attempt to balance things, one of the conferences is in North America and the other is in Europe. Here is the result:

The West Coast of the US has two major centers, Stanford/SLAC and UCLA, focused around Lance Dixon and Zvi Bern respectively. The Northeast has a fair assortment, including places that have essentially everything like the Perimeter Institute and the Institute for Advanced Study and places known especially for their amplitudes work like Brown.

Europe has quite a large number of places. There are many universities in Europe with a long history of technical research into quantum field theory. When amplitudes began to become more prominent as its own sub-field, many of these places slotted right in. In particular, there are many locations in Germany, a decent number in the UK, a few in the vicinity of CERN, and a variety of places of some importance elsewhere.

Outside of Europe and North America, there’s much less amplitudes research going on. Physics in general is a very international enterprise, and many sub-fields have a lot of participation from researchers in China, India, Japan, and Korea. Amplitudes, for the most part, hasn’t caught on in those places yet.

This map is just the result of looking at two conferences. More data would reveal many places that were left out, including a longstanding community in Russia. Still, it gives you a rough idea of where to find amplitudeologists, should you have need of one.

The Amplituhedron and Other Excellently Silly Words

Nima Arkani-Hamed recently gave a talk at the Simons Center on the topic of what he and Jaroslav Trnka are calling the Amplituhedron.

There’s an article on it in Quanta Magazine. The article starts out a bit hype-y for my taste (too much language of importance, essentially), but it has several very solid descriptions of the history of the situation. I particularly like how the author concisely describes the Feynman diagram picture in the space of a single paragraph, and I would recommend reading that part even if you don’t have time to read the whole article. In general it’s worth it to get a picture of what’s going on.

That said, I obviously think I can clear a few things up, otherwise I wouldn’t be writing about it, so here I go!

“The” Amplituhedron

Nima’s new construction, the Amplituhedron, encodes amplitudes (building blocks of probabilities in particle physics) in N=4 super Yang-Mills as the “area” of a multi-dimensional analog of a polyhedron (hence, Amplitu-hedron).

Now, I’m a big supporter of silly-sounding words with amplitu- at the beginning (amplitudeologist, anyone?), and this is no exception. Anyway, the word Amplitu-hedron isn’t what’s confusing people. What’s confusing people is the word the.

When the Quanta article says that Nima has found “the” Amplituhedron, it makes it sound like he has discovered one central formula that somehow contains the whole universe. If you read the comments, many readers went away with that impression.

In case you needed me to say it, that’s not what is going on. The problem is in the use of the word “the”.

Suppose it was 1886, and I told you that a fellow named Carl Benz had invented “the Automobile”, a marvelous machine that can get everyone to work on time (as well as become the dominant form of life on Long Island).

My use of “the” might make you imagine that Benz invented some single, giant machine that would roam across the country, picking people up and somehow transporting everyone to work. You’d be skeptical of this, of course, expecting that long queues to use this gigantic, wondrous machine would swiftly ruin any speed advantage it might possess…

The Automobile, here to take you to work.

Or, you could view “the” in another light, as indicating a type of thing.

Much like “the Automobile” is a concept, manifested in many different cars and trucks across the country, “the Amplituhedron” is a concept, manifested in many different amplituhedra, each corresponding to a particular calculation that we might attempt.

Advantages…

Each amplituhedron has to do with an amplitude involving a specific number of particles, with a particular number of internal loops. (The Quanta article has a pretty good explanation of loops; here’s mine if you’d rather read that.) Based on the problem you’re trying to solve, there is a set of rules that you use to construct the particular amplituhedron you need. The “area” of this amplituhedron (in quotation marks because I mean the area in an abstract, mathematical sense) is the amplitude for the process, which lets you calculate the probability that whatever particle physics situation you’re describing will happen.

Now, we already have many methods to calculate these probabilities. The amplituhedron’s advantage is that it makes these calculations much simpler. Nima claims that what was once quite a laborious and complicated four-loop calculation can be done by hand using amplituhedra. I didn’t get a chance to ask whether the same efficiency improvement holds true at six loops, but Nima’s description made it sound like it would at least speed things up.

[Edit: Some of my fellow amplitudeologists have reminded me of two things. First, that paper I linked above paved the way to more modern methods for calculating these things, which also let you do the four-loop calculation by hand. (You need only six or so diagrams). Second, even back then the calculation wasn’t exactly “laborious”, there were some pretty slick tricks that sped things up. With that in mind, I’m not sure Nima’s method is faster per se. But it is a fast method that has the other advantages described below.]

The amplituhedron has another, more sociological advantage. By describing the amplitude in terms of a geometrical object rather than in terms of our usual terminology, we phrase things in a way that mathematicians are more likely to understand. By making things more accessible to mathematicians (and the more math-headed physicists), we invite them to help us solve our problems, so that together we can come up with more powerful methods of calculation.

Nima and the Quanta article both make a big deal about how the amplituhedron gets rid of the principles of locality and unitarity, two foundational principles of quantum field theory. I’m a bit more impressed by this than Woit is. The fine distinction that needs to be made here is that the amplituhedron isn’t simply “throwing out” locality and unitarity. Rather, it’s written in such a way that it doesn’t need locality and unitarity to function. In the end, the formulas it computes still obey both principles. Nima’s hope is that, now that we are able to write amplitudes without needing locality and unitarity, we will be able to throw out either of those principles if building a new theory someday requires it. That’s legitimately quite a handy advantage to have; it just doesn’t mean that locality and unitarity must be thrown out right now.

…and Disadvantages

It’s important to remember that this whole story is limited to N=4 super Yang-Mills. Nima doesn’t know how to apply it to other theories, and nobody else seems to have any good ideas either. In addition, this only applies to the planar part of the theory. I’m not going to explain what that term means here; for now just be aware that while there are tricks that let you “square” a calculation in super Yang-Mills to get a similar calculation in quantum gravity, those tricks rely on having non-planar data, or information beyond the planar part of the theory. So at this point, this doesn’t give us any new hints about quantum gravity. It’s conceivable that physicists will find ways around both of these limits, but for now this result, though impressive, is quite limited.

Nima hasn’t found some sort of singular “jewel at the heart of physics”. Rather, he’s found a very slick, very elegant, quite efficient way to make calculations within one particular theory. This is profound, because it expresses things in terms that mathematicians can address, and because it shows that we can write down formulas without relying on what are traditionally some of the most fundamental principles of quantum field theory. Only time will tell whether Nima or others can generalize this picture, taking it beyond planar N=4 super Yang-Mills and into the tougher theories that still await this sort of understanding.

Hexagon Functions – or, what is my new paper about?

I’ve got a new paper up on arXiv this week.

(For those of you unfamiliar with it, arXiv.org is a website where physicists, mathematicians, and researchers in related fields post their papers before submitting them to journals. It’s a cultural quirk of physics that probably requires a post in its own right at some point. Anyway…)

What’s it about? Well, the paper is titled Hexagon functions and the three-loop remainder function. Let’s go through that and figure out what it means.

When the paper refers to hexagon functions, it’s referring to functions used to describe situations with six particles involved. An important point to clarify here is that when counting the number of “particles involved”, we add together both the particles that go in and the particles that go out. So if three particles arrive somewhere, interact with each other in some complicated way, and then those three particles leave, that’s a six-particle process. Similarly, if two particles collide and four particles emerge, that’s also a six-particle process. (If you find the idea of more particles coming out than went in confusing, read this post.) Hexagon functions, then, can describe either of those processes.

What, specifically, are these functions being used for? Well, they’re being used to find the three-loop remainder function of N=4 super Yang-Mills.

N=4 super Yang-Mills is my favorite theory. If you haven’t read my posts on the subject, I encourage you to do so.

N=4 super Yang-Mills is so nice because it is so symmetric, and because it takes part in so many dualities. These two traits ended up being enough for Zvi Bern, Lance Dixon, and Vladimir Smirnov to propose an ansatz for all amplitudes in N=4 super Yang-Mills, called the BDS ansatz. (Amplitudes are how we calculate the probability of events occurring: for example, the probability of that “two particles going to four particles” situation I talked about earlier.)

Unfortunately, their formula was incomplete. While it was possible to prove that the formula was true for four-particle and five-particle processes, for six or more particles the formula failed. As it turned out though, it failed in a predictable way. All that was needed to fix it was to add something called the remainder function, the remaining part of the formula beyond the BDS ansatz.

The task, then, was to compute this remainder function.

I’ve talked before about how in quantum field theory, we calculate probabilities through increasingly complicated diagrams, keeping track of the complexity by counting the number of loops. The remainder function had already been computed up to two loops by working out these diagrams, but three looked to be considerably more difficult.

Luckily, we (myself, Lance Dixon, James Drummond, and Jeffrey Pennington) had a trick up our sleeves.

Formulas in N=4 super Yang-Mills have a property called maximal transcendentality. I’ve talked about transcendentality before: essentially, it’s a way of counting how many powers of pi and logarithms are in your equations. Maximal transcendentality means that every part of the formula has a fixed, maximum number for its degree of transcendentality. In the case of the remainder function, this is two times the number of loops. Thus, the two-loop remainder function has degree of transcendentality four, so it can have pi to the fourth power in it, while the three-loop remainder function (the one that we calculated) has degree of transcendentality six, so it can have pi to the sixth power.
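The counting rules here are simple enough to write down as code. A tiny sketch (my own bookkeeping, just restating the rules above):

```python
def remainder_weight(loops):
    # Maximal transcendental weight of the L-loop remainder function: 2L
    return 2 * loops

def product_weight(weights):
    # Weight is additive under multiplication:
    # pi^2 (weight 2) times pi^4 (weight 4) gives pi^6 (weight 6)
    return sum(weights)
```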

Of course, it can have lots of other expressions as well, which brings us back to the hexagon functions. By classifying the sort of functions that can appear in these formulas at each level of transcendentality, we find the basic building blocks that can show up in the remainder function. All we have to do then is ask what combinations of building blocks are allowed: which ones make good physics sense, for example, or which ones allow our formula to agree with the predictions of other researchers.

As it turns out, once you apply all the restrictions there is only one possible way to put the building blocks together that gives you a functioning formula. By process of elimination, this formula must be the correct three-loop six-point remainder function. Every extra constraint then serves as a check that nothing went wrong and that the formula is sound. Without calculating a single Feynman diagram, we’ve gotten our result!

Just to give you an idea of how complicated this result is, in order to write the formula out fully would take 800 pages. We’ve got shorter ways to summarize it, but perhaps it would be better to give a picture. The formula depends on three variables, called u, v, and w. To show how the formula behaves when all three variables change, here’s a plot of the formula in the variables u and v, for a series of different values of w.

[Figure: wstacksheaves — plots of the remainder function in u and v for a series of values of w]

Without our various shortcuts to generate this formula, it would have taken an extraordinarily long time. Luckily, N=4 super Yang-Mills’s nice properties save the day, and allow us to achieve what I hope you won’t mind me calling a truly impressive result.

Achieving Transcendence: The Physicist Way

I wanted to shed some light on something I’ve been working on recently, but I realized that a little background was needed to explain some of the ideas. As such, this post is going to be a bit more math-y than usual, but I hope it’s educational!

Pi is special. Familiar to all through the area of a circle \pi r^2, pi is particularly interesting in that you cannot write an algebra equation made up of whole numbers whose solution is pi. While you can easily get fractions (3x=4 gives x=\frac{4}{3}) and even many irrational numbers (x^2=2 gives x=\sqrt{2}), pi is one of a set of numbers that it is impossible to get. These special numbers transcend other numbers, in that you cannot use more everyday numbers to get to them, and as such mathematicians call them transcendental numbers.

In addition to transcendental numbers, you can have transcendental functions. Transcendental functions are functions that can take in a normal number and produce a transcendental number. For example, you may be aware of the delightful equation below:

e^{i \pi}=-1

We can manipulate both sides of this equation by taking the natural logarithm, \ln, to find

i\pi=\ln(-1)

This tells us that the natural logarithm function can take a (negative) whole number (-1) and give us a transcendental number (pi). This means that the natural logarithm is a transcendental function.

There are many other transcendental functions. In addition to logarithms, there are a whole host of related functions called the polylogarithms, and even more generally the harmonic polylogarithms. All of these functions can take in whole numbers like -1 or 1 and give transcendental numbers.

Here physicists introduce a concept called degree of transcendentality, or transcendental weight, which we use to measure how transcendental a number or a function is. Pi (and functions that can give pi, like the natural logarithm) have transcendental weight one. Pi squared has transcendental weight two. Pi cubed (and another number called \zeta(3)) have transcendental weight three. And so on.
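You can check these weights numerically. Here’s a sketch using only Python’s standard library (the truncated series for the dilogarithm is my own quick-and-dirty approximation, not how these functions are evaluated in practice):

```python
import cmath
import math

# ln(-1) = i*pi: the logarithm turns -1 into a weight-one transcendental number
log_minus_one = cmath.log(-1)

def dilog(x, terms=200_000):
    # Polylogarithm Li_2(x) = sum over k of x^k / k^2, truncated series
    return sum(x ** k / k ** 2 for k in range(1, terms + 1))

# Li_2(1) = pi^2 / 6: a weight-two function at argument 1 gives a weight-two number
li2_at_one = dilog(1.0)
```

The truncated series only converges slowly at x = 1, so the agreement with pi squared over six is approximate, but the weight-counting pattern is clear.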

Note here that, according to mathematicians, there is no rigorous way that a number can be “more transcendental” than another number. In the case of some of these numbers, like \zeta(5), it hasn’t even been proven that the number is actually transcendental at all! However, physicists still use the concept of transcendental weight because it allows us to classify and manipulate a common and useful set of functions. This is an example of the differences in methods and standards between physicists and mathematicians, even when they are working on similar things.

In what way are these functions common and useful? Well it turns out that in N=4 super Yang-Mills many calculated results are not only made up of these polylogarithms, they have a particular (fixed) transcendental weight. In situations when we expect this to be true, we can use our knowledge to guess most, or even all, of the result without doing direct calculations. That’s immensely useful, and it’s a big part of what I’ve been doing recently.

Ansatz: Progress by Guesswork

I’ve talked before about how hard traditional Quantum Field Theory is. Building things up step by step is slow and inefficient. And as with any slow and inefficient process, there is a quicker way. An easier way. A…riskier way.

You guess.

Guess is such an ugly word, though…so let’s call it an ansatz.

Ansatz is a word of German origin. In German, it is part of various idiomatic expressions, where it can refer to an approach, an attempt, or a starting point. When physicists and mathematicians use the term ansatz, they mean a combination of all of these.

An ansatz is an approach in that it is a way of finding a solution to a problem without using more general, inefficient methods. Rather than approaching problems starting from the question, an ansatz approaches problems by starting with an answer, or rather, an attempt at an answer.

An ansatz is an attempt in that it serves as a researcher’s best first guess at what the answer is, based on what they know about it. This knowledge can come from several sources. Sometimes, the question constrains the answer, ruling out some possibilities or restricting the output to a particular form. Usually, though, the attempt of an ansatz goes beyond this, incorporating the scientist’s experience as to what sorts of answers similar questions have had in the past, even if it isn’t understood yet why those sorts of answers are common. With information from both of these sources, a scientist comes up with a preliminary guess, or ansatz, as to the answer to the problem at hand.

What if the answer is wrong, though? The key here is that an ansatz is only a starting point. Rather than being a full answer with all the details filled in, an ansatz generally leaves some parameters free. These free parameters represent unknowns, and it is up to further tests to fix their values and complete the answer. These tests can be experimental, but they can also be mathematical: often there are restrictions on possible answers that are difficult to apply when creating a first guess, but easier to apply when one has only a few parameters to fix. In order to avoid the risk of finding an ansatz that only works by coincidence, many more tests are done than there are parameters. That way, if the guess behind the ansatz is wrong, then some of the tests will give contradictory rules for the values of the parameters, and you’ll know that it’s time to go back and find a better guess.
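In miniature, the procedure looks something like this (a deliberately trivial Python example of my own, with a straight-line ansatz standing in for the real thing): the first few constraints fix the free parameters, and every extra constraint acts as a consistency check.

```python
def fit_linear_ansatz(points):
    # Ansatz: f(x) = a + b*x, with two free parameters a and b.
    # The first two constraints (known values of f) fix the parameters...
    (x0, y0), (x1, y1) = points[0], points[1]
    b = (y1 - y0) / (x1 - x0)
    a = y0 - b * x0
    # ...and every remaining constraint is a test of the guess itself.
    for x, y in points[2:]:
        if abs(a + b * x - y) > 1e-9:
            raise ValueError("ansatz inconsistent: time for a better guess")
    return a, b
```

If the data secretly came from a parabola, the extra checks fail, and you know it’s time to go back and find a better guess.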

In the end, this approach, using your first attempt as a starting point, should end up with only a few parameters free, ideally none at all. One way or another, you have figured out a lot about your question just by guessing the answer!

The use of ansatzes is quite common in theoretical physics. Some of the most interesting problems either can’t be solved or are tedious to solve through traditional means. The only way to make progress, to go beyond what everyone else can already do, is to notice a pattern, make a guess, and hope you get lucky. Well, not just a guess: an ansatz.

N=4: Maximal Particles for Maximal Fun

Part Four of a Series on N=4 Super Yang-Mills Theory

This is the fourth in a series of articles that will explain N=4 super Yang-Mills theory. In this series I take that phrase apart bit by bit, explaining as I go. Because I’m perverse and out to confuse you, I started with the last bit here, and now I’ve reached the final part.

N=4 Super Yang-Mills Theory

Last time I explained supersymmetry as a relationship between two particles, one with spin X and the other with spin X-½. It’s actually a leeetle bit more complicated than that.

When a shape is symmetric, you can turn it around and it will look the same. When a theory is supersymmetric, you can “turn” it, moving from particles with spin X to particles of spin X-½, and the theory will look the same.

With a 2D shape, that’s the whole story. But if you have a symmetric 3D shape, you can turn it in two different directions, moving to different positions, and the shape will look the same either way. In supersymmetry, the number of different ways you can “turn” the theory and still have it look the same is called N.

N=1 symmetric shape

N=2 symmetric shape

Consider the example of super Yang-Mills. If we start out with a particle of spin 1 (a Yang-Mills field), N=1 supersymmetry says that there will also be a particle of spin ½, similar to the particles of everyday matter. But suppose that instead we had N=2 supersymmetry. You can move from the spin 1 particle to spin ½ in one direction, or in the other one, and just like regular symmetry moving in two different directions will get you to two different positions. That means you need two different spin ½ particles! Furthermore, you can also move in one direction, then in the other one: you go from spin 1 to spin ½, then down from spin ½ to spin 0. So our theory can’t just have spin 1 and spin ½, it has to have spin 0 particles as well!

You can keep increasing N, as long as you keep increasing the number and types of particles. Finally, at N=4, you’ve got the maximal set: one Yang-Mills field with spin 1, four different spin ½ particles, and six different spin 0 scalars. The diagram below shows how the particles are related: you start in the center with a Yang-Mills field, and then travel in one of four directions to the spin ½ particles. Picking two of those directions, you travel further, to a scalar in between two spin ½ particles. Applying more supersymmetry just takes you back down: first to spin ½, then all the way back to spin 1.
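The counting in this picture is pure combinatorics: each particle corresponds to a choice of which of the N “directions” you’ve turned in. A quick check in Python (my own bookkeeping, assuming nothing beyond the counting described above):

```python
from math import comb

def multiplet_counts(n_susy):
    # States reached after using k of the n_susy supersymmetry "turns":
    # choosing which k directions to use gives C(n_susy, k) distinct states.
    return [comb(n_susy, k) for k in range(n_susy + 1)]

# N=2: one spin-1 field, two spin-1/2 particles, one scalar
n2 = multiplet_counts(2)
# N=4: one spin-1 field, four spin-1/2 particles, six scalars, then back down
n4 = multiplet_counts(4)
```

multiplet_counts(2) gives [1, 2, 1], matching the N=2 story above, and multiplet_counts(4) gives [1, 4, 6, 4, 1]: one Yang-Mills field, four spin ½ particles, six scalars, and the trip back down.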

N=4 super Yang-Mills is where the magic happens. Its high degree of symmetry gives it conformal invariance and dual conformal invariance; it has been observed to have maximal transcendentality, and it may even be integrable. Any one of those statements could easily take a full blog post to explain. For now, trust me when I tell you that while N=4 super Yang-Mills may seem complicated, its symmetry means that deep down it is one of the easiest theories to work with, and in fact it might be the simplest non-gravity quantum field theory possible. That makes it an immensely important stepping stone, the first link to take us to a full understanding of particle physics.

One final note: you’re probably wondering why we stopped at N=4. At N=4 we have enough symmetry to go out from spin 1 to spin 0, and then back in to spin 1 again. Any more symmetry, and we need more space, which in this case means higher spin, which means we need to start talking about gravity. Supergravity takes us all the way up to N=8, and has its own delightful properties…but that’s a topic for another day.