
Map Your Dead Ends

I’m at Brown this week, where I’ve been chatting with Mark Spradlin and Anastasia Volovich, two of the founding figures of my particular branch of amplitudeology. Back in 2010 they figured out how to turn this seventeen-page two-loop amplitude:

Why yes, this is one equation that covers seventeen pages. You're lucky I didn't post the eight-hundred page one.

into a formula that takes up just two lines. This got everyone very excited; it inspired some of my collaborators to do work that would eventually give rise to the Hexagon Functions, my main research project for the past few years.

Unfortunately, when we tried to push this to higher loops, we didn’t get the sort of nice, clean-looking formulas that the Brown team did. Each “loop” is an additional layer of complexity, one more step in a series of approximations that get closer and closer to the exact result. And so far, our answers look more like that first image than the second: hundreds of pages with no clear simplifications in sight.

At the time, people wondered whether some similarly simple formula might work at higher loops. As it turns out, you can write down a formula similar to the one found by Spradlin and Volovich, generalized to a higher number of loops. It’s clean, it’s symmetric, it makes sense…and it’s not the right answer.

That happens in science a lot more often than science fans might expect. When you hear about this sort of thing in the news, it always works: someone suggests a nice, simple answer, and it turns out to be correct, and everyone goes home happy. But for every nice simple guess that works, there are dozens that don’t: promising ideas that just lead to dead ends.

One of the postdocs here at Brown worked on this “wrong” formula, and while we were chatting he asked a very interesting question: why is it wrong? Sure, we know that it’s wrong, we can check that it’s wrong…but what, specifically, is missing? Is it “part” of the right answer in some sense, with some predictable corrections?

As it turns out, this is a very interesting question! We’ve been looking into it, and the “wrong” answer has some interesting relationships with some of our Hexagon Functions. It may have been a “dead end”, but it still could turn out to be a useful one.

A good physics advisor will tell their students to document their work. This doesn’t just mean taking notes: most theoretical physicists will maintain files, in standard journal article format, with partial results. One reason to do this is that, if things work out, you’ll have some of your paper already written. But if something doesn’t work out, you’ll end up with a pdf on your hard drive carefully explaining an idea that didn’t quite work. Physicists often end up with dozens of these files squirreled away on their computers. Put together, they’re a map: a map of dead ends.

There’s a handy thing about having a map: it lets you retrace your steps. Any one of these paths may lead nowhere, but each one will contain some substantive work. And years later, often enough, you end up needing some of it: some piece of the calculation, some old idea. You follow the map, dig it up…and build it into something new.

Using Effective Language

Physicists like to use silly names for things, but sometimes it’s best to just use an everyday word. It can trigger useful intuitions, and it makes remembering concepts easier. What gets confusing, though, is when the everyday word takes on a technical meaning that’s not quite the same as its colloquial one.

“Realism” is a pretty classic example, where Bell’s elegant use of the term in quantum mechanics doesn’t quite match its common usage, leading to inevitable confusion whenever it’s brought up. “Theory” is such a useful word that multiple branches of science use it…with different meanings! In both cases, the naive meaning of the word is the basis of how it gets used scientifically…just not the full story.

There are two things to be wary of here. First, those of us who communicate science must be sure to point out when a word we use doesn’t match its everyday meaning, to guide readers’ intuitions away from first impressions toward how the term is used in our field. Second, as a reader, you need to be on the lookout for hidden technical terms, especially when you’re reading technical work.

I remember making a particularly silly mistake along these lines. It was early on in grad school, back when I knew almost nothing about quantum field theory. One of our classes was a seminar, structured so that each student would give a talk on some topic that could be understood by the whole group. Unfortunately, some grad students with deeper backgrounds in theoretical physics hadn’t quite gotten the memo.

It was a particular phrase that set me off: “This theory isn’t an effective theory”.

My immediate response was to raise my hand. “What’s wrong with it? What about this theory makes it ineffective?”

The presenter boggled for a moment before responding. “Well, it’s complete up to high energies…it has no ultraviolet divergences…”

“Then shouldn’t that make it even more effective?”

After a bit more of this back-and-forth, we finally cleared things up. As it turns out, “effective field theory” is a technical term! An “effective field theory” is only “effectively” true, describing physics at low energies but not at high energies. As you can see, the word “effective” here is definitely pulling its weight, helping to make the concept understandable…but if you don’t recognize it as a technical term and interpret it literally, you’re going to leave everyone confused!

Over time, I’ve gotten better at identifying when something is a technical term. It really is a skill you can learn: there are different tones people use when speaking, different cadences when writing, a sense of uneasiness that can clue you in to a word being used in something other than its literal sense. Without that skill, you end up worried about mathematicians’ motives for their evil schemes. With it, you’re one step closer to what may be the most important skill in science: the ability to recognize something you don’t know yet.

What’s so Spooky about Action at a Distance?

With Halloween coming up, it’s time once again to talk about the spooky side of physics. And what could be spookier than action at a distance?


Ok, maybe not an obvious contender for spookiest concept of the year. But physicists have struggled with action at a distance for centuries, and there are deep reasons why.

It all dates back to Newton. In Newton’s time, all of nature was expected to be mechanical. One object pushes another, which pushes another in turn, eventually explaining everything that ever happens. And while people knew by that point that the planets were not circling around on literal crystal spheres, it was still hoped that their motion could be explained mechanically. The favored explanations of the time were vortices, whirlpools of celestial fluid that drove the planets around the Sun.

Newton changed all that. Not only did he set down a law of gravitation that didn’t use a fluid, he showed that no fluid could possibly replicate the planets’ motions. And while he remained agnostic about gravity’s cause, plenty of his contemporaries accused him of advocating “action at a distance”. People like Leibniz thought that a gravitational force without a mechanical cause would be superstitious nonsense, a betrayal of science’s understanding of the world in terms of matter.

For a while, Newton’s ideas won out. More and more, physicists became comfortable with explanations involving a force stretching out across empty space, using them for electricity and magnetism as these became more thoroughly understood.

Eventually, though, the tide began to shift back. Electricity and magnetism were explained, not in terms of action at a distance, but in terms of a field that filled the intervening space. In time, gravity was too.

The difference may sound purely semantic, but it means more than you might think. These fields were restricted in an important way: when the field changed, it changed at one point, and the changes spread at a speed limited by the speed of light. A theory composed of such fields has a property called locality, the property that all interactions are fundamentally local, that is, they happen at one specific place and time.

Nowadays, we think of locality as one of the most fundamental principles in physics, on par with symmetry in space and time. And the reason why is that true action at a distance is quite a spooky concept.

Much of horror boils down to fear of the unknown. From what might lurk in the dark to the depths of the ocean, we fear that which we cannot know. And true action at a distance would mean that our knowledge might forever be incomplete. As long as everything is mediated by some field that changes at the speed of light, we can limit our search for causes. We can know that any change must be caused by something only a limited distance away, something we can potentially observe and understand. By contrast, true action at a distance would mean that forces from potentially anywhere in the universe could alter events here on Earth. We might never know the ultimate causes of what we observe; they might be stuck forever out of reach.

Some of you might be wondering: what about quantum mechanics? The phrase “spooky action at a distance” became famous because Einstein used it as an accusation against quantum entanglement, after all.

The key thing about quantum mechanics is that, as J. S. Bell showed, you can’t have locality…unless you throw out another property, called realism. Realism is the idea that quantum states have definite values for measurements before those measurements are taken. And while that sounds important, most people find getting rid of it much less scary than getting rid of locality. In a non-realistic world, at least we can still predict probabilities, even if we can’t observe certainties. In a non-local world, there might be aspects of physics that we just can’t learn. And that’s spooky.
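For the curious, here’s what Bell’s trade-off looks like in numbers. This is a standard textbook check (my own sketch, not anything from the post): for a pair of entangled particles, quantum mechanics predicts measurement correlations that a particular combination of settings, the CHSH combination, pushes past the bound of 2 that any local, realist theory must obey.

```python
# CHSH check: quantum correlations for a spin singlet violate the
# local-realist bound |S| <= 2, reaching 2*sqrt(2).
import math

def E(a, b):
    return -math.cos(a - b)  # singlet-state correlation for angles a, b

a1, a2 = 0.0, math.pi / 2              # one experimenter's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # the other's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S), "vs local-realist bound of 2")  # ~2.828 > 2
```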

When to Look under the Bed

Last week, Sabine Hossenfelder blogged about a rather interesting experiment, designed to test the quantum properties of gravity. Normally, quantum gravity is essentially unobservable: quantum effects are typically only relevant for very small systems, where gravity is extremely weak. However, there has been a lot of progress in putting larger and larger systems into interesting quantum states, and a team of experimentalists has recently proposed a setup that could take advantage of this. The experiment wouldn’t have enough detail to, for example, distinguish between rival models of quantum gravity, but it would provide evidence as to whether or not gravity is quantum at all.

Lubos Motl, meanwhile, argues that such an experiment is utterly pointless, because there is no possible way that gravity could not be quantum. I won’t blame you if you don’t read his argument since it’s written in his trademark…aggressive…style, but the gist is that it’s really hard to make sense of the idea that there are non-quantum things in an otherwise quantum world. It causes all sorts of issues with pretty much every interpretation of quantum mechanics, and throws the differences between those interpretations into particularly harsh and obvious light. From this perspective, checking to see if gravity might not actually be quantum (an idea called semi-classical gravity) is a bit like checking for a monster under the bed.

You might find semi-classical gravity!

In general, I share Motl’s reservations about semi-classical gravity. As I mentioned back when journalists were touting the BICEP2 results as evidence of quantum gravity, the idea that gravity could not be quantum doesn’t really make much sense. (Incidentally, Hossenfelder makes a similar point in her post.)

All that said, sometimes in science it’s absolutely worth looking under the bed.

Take another unlikely possibility, that of cell phone radiation causing cancer. Radiation that causes cancer does so by messing with the molecular bonds in DNA. In order to mess with molecular bonds, you need high-frequency light. That’s how UV light from the sun can cause skin cancer. Cell phones emit microwaves, which are very low-frequency light. It’s what allows them to be useful inside buildings, where normal light wouldn’t reach. It also means it’s impossible for them to cause cancer.
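To put numbers on that reasoning, here’s a quick back-of-the-envelope check (my own sketch, with textbook constants and ballpark frequencies): a photon’s energy is Planck’s constant times its frequency, and breaking a molecular bond takes a few electronvolts.

```python
# Rough sketch: compare photon energies E = h*f to the few-eV scale of
# molecular bonds. The frequencies below are typical ballpark values.
h = 6.626e-34   # Planck's constant, in joule-seconds
eV = 1.602e-19  # one electronvolt, in joules

uv_frequency = 1.0e15        # ultraviolet light, ~10^15 Hz
microwave_frequency = 2.0e9  # cell-phone microwaves, ~2 GHz

print(f"UV photon:        {h * uv_frequency / eV:.2f} eV")         # ~4 eV: enough to break bonds
print(f"Microwave photon: {h * microwave_frequency / eV:.1e} eV")  # ~1e-5 eV: nowhere close
```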

Nevertheless, if nobody had ever studied whether cell phones cause cancer, it would probably be worth at least one study. If that study came back positive, it would say something interesting, either about the study’s design or about other possible causes of cancer. If negative, the topic could be put to bed more convincingly. As it happens, those studies have been done, and overall confirm the expectations we have from basic science.

Another important point here is that experimentalists and theorists have different priorities, due to their different specializations. Theorists are interested in confirmation for particular theories: they want not just an unknown particle, but a gluino, and not just a gluino, but the gluino predicted by their particular model of supersymmetry. By contrast, experimentalists typically aren’t very interested in proving or disproving one theory or another. Rather, they look for general signals that indicate broad classes of new physics. For example, experimentalists might use the LHC to look for leptoquarks, particles that allow quarks and leptons to interact, without caring what theory might produce them. Experimentalists are also very interested in improving their techniques. Much like theorists, a lot of interesting work in the field involves pushing the current state-of-the-art as far as it will go.

So, when should we look under the bed?

Well, if nobody has ever looked under this particular bed before, and if seeing something strange under this bed would at least be informative, and if looking under the bed serves as a proving ground for the latest in bed-spelunking technology, then yes, we should absolutely look under this bed.

Just don’t expect to see any monsters.

Is Everything Really Astonishingly Simple?

Neil Turok gave a talk last week, entitled The Astonishing Simplicity of Everything. In it, he argued that our current understanding of physics is really quite astonishingly simple, and that recent discoveries seem to be confirming this simplicity.

For the right sort of person, this can be a very uplifting message. The audience was spellbound. But a few of my friends were pretty thoroughly annoyed, so I thought I’d dedicate a post to explaining why.

Neil’s talk built up to showing this graphic, one of the masterpieces of Perimeter’s publications department:

Looked at in this way, the laws of physics look astonishingly simple. One equation, a few terms, each handily labeled with a famous name of some (occasionally a little hazy) relevance to the symbol in question.

In a sense, the world really is that simple. There are only a few kinds of laws that govern the universe, and the concepts behind them are really, deep down, very simple concepts. Neil adroitly explained some of the concepts behind quantum mechanics in his talk (here represented by the Schrodinger, Feynman, and Planck parts of the equation), and I have a certain fondness for the Maxwell-Yang-Mills part. The other parts represent different kinds of particles, and different ways they can interact.

While there are only a few different kinds of laws, though, that doesn’t mean the existing laws are simple. That nice, elegant equation hides 25 arbitrary parameters, tucked into the Maxwell-Yang-Mills, Dirac, Kobayashi-Maskawa, and Higgs parts. It also omits the cosmological constant, which fuels the expansion of the universe. And there are problems if you try to claim that the gravity part, for example, is complete.

When Neil mentions recent discoveries, he’s referring to the LHC not seeing new supersymmetric particles, and to telescopes not seeing any unusual features in the cosmic microwave background. The theories that were being tested, supersymmetry and inflation, are in many ways more complicated than the Standard Model, adding new parameters without getting rid of old ones. But I think it’s a mistake to say that if these theories are ruled out, the world is astonishingly simple. These theories are attempts to explain unlikely features of the old parameters, or unlikely features of the universe we observe. Without them, we’ve still got those unlikely, awkward, complicated bits.

Of course, Neil doesn’t think the Standard Model is all there is either, and while he’s not a fan of inflation, he does have proposals he’s worked on that explain the same observations, proposals that are also beyond the current picture. More broadly, he’s not suggesting here that the universe is just what we’ve figured out so far and no more. Rather, he’s suggesting that new proposals ought to build on the astonishing simplicity of the universe, instead of adding complexity, that we need to go back to the conceptual drawing board rather than correcting the universe with more gears and wheels.

On the one hand, that’s Perimeter’s mission statement in a nutshell. Perimeter’s independent nature means that folks here can focus on deeper conceptual modifications to the laws of physics, rather than playing with the sorts of gears and wheels that people already know how to work with.

On the other hand, a lack of new evidence doesn’t do anyone any favors. It doesn’t show the way for supersymmetry, but it doesn’t point to any of the “deep conceptual” approaches either. And so for some people, Neil’s glee at the lack of new evidence feels less like admiration for the simplicity of the cosmos and more like that one guy in a group project who sits back chuckling while everyone else fails. You can perhaps understand why some people felt resentful.

Hexagon Functions III: Now with More Symmetry

I’ve got a new paper up this week.

It’s a continuation of my previous work, understanding collisions involving six particles in my favorite theory, N=4 super Yang-Mills.

This time, we’re pushing up the complexity, going from three “loops” to four. In the past, I could have impressed you with the number of pages the formulas I’m calculating take up (eight hundred pages for the three-loop formula from that first Hexagon Functions paper). Now, though, I don’t have that number: putting my four-loop formula into a pdf-making program just crashes the program. Instead, I’ll have to impress you with file sizes: 2.6 MB for the three-loop formula, 96 MB for the four-loop one.

Calculating such a formula sounds like a pretty big task, and it was, the first time. But things got a lot simpler after a chat I had at Amplitudes.

We calculate these things using an ansatz, a guess for what the final answer should look like. The more vague our guess, the more parameters we need to fix, and the more work we have in general. If we can guess more precisely, we can start with fewer parameters and things are a lot easier.

Often, more precise guesses come from understanding the symmetries of the problem. If we can know that the final answer must be the same after making some change, we can rule out a lot of possibilities.
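As a cartoon of how this works (a toy sketch of my own, far smaller than the real hexagon-function setup): write the ansatz as a linear combination of basis functions with unknown coefficients, turn the symmetry into a linear constraint on those coefficients, and count what survives.

```python
# Toy ansatz-fitting sketch: impose a symmetry as a linear constraint
# on the coefficients of a basis expansion, and count the survivors.
import numpy as np

n_basis = 6  # toy basis; the real calculations involve thousands of terms

# Suppose the symmetry swaps basis functions in pairs: (0,1), (2,3), (4,5).
# Invariance of the ansatz means P @ c = c, i.e. (P - I) @ c = 0.
P = np.eye(n_basis)[[1, 0, 3, 2, 5, 4]]

# The surviving coefficients span the null space of (P - I).
_, singular_values, Vt = np.linalg.svd(P - np.eye(n_basis))
free_directions = Vt[singular_values < 1e-10]
print(len(free_directions), "free parameters instead of", n_basis)  # 3 instead of 6
```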

Sometimes, these symmetries are known features of the answer, things that someone proved had to be correct. Other times, though, they’re just observations, things that have been true in the past and might be true again.

We started out using an observation from three loops. That got us pretty far, but we still had a lot of work to do: 808 parameters, to be fixed by other means. Fixing them took months of work, and throughout we hoped that there was some deeper reason behind the symmetries we observed.

Finally, at Amplitudes, I ran into fellow amplitudeologist Simon Caron-Huot and asked him if he knew the source of our observed symmetry. In just a few days he was able to link it to supersymmetry, giving us justification for our jury-rigged trick. Better yet, we figured out that his explanation went further than any of us expected. In the end, rather than 808 parameters we only really needed to consider 34.

Thirty-four options to consider. Thirty-four possible contributions to a ~100 MB file. That might not sound like a big deal, but compared to eight hundred and eight it’s a huge deal. More symmetry means easier calculations, meaning we can go further. At this point going to the next step in complexity, to five loops rather than four, might be well within reach.

The Theorist Exclusion Principle

There are a lot of people who think theoretical physics has gone off-track, though very few of them agree on exactly how. Some think that string theory as a whole is a waste of time, others that the field just needs to pay more attention to their preferred idea. Some think we aren’t paying enough attention to the big questions, or that we’re too focused on “safe” ideas like supersymmetry, even when they aren’t working out. Some think the field needs less focus on mathematics, while others think it needs even more.

Usually, people act on these opinions by writing strongly worded articles and blog posts. Sometimes, they have more power, and act with money, creating grants and prizes that only go to their preferred areas of research.

Let’s put the question of whether the field actually needs to change aside for the moment. Even if it does, I’m skeptical that this sort of thing will have any real effect. While grants and blogs may be very good at swaying experimentalists, theorists are likely to be harder to shift, due to what I’m going to call the Theorist Exclusion Principle.

The Pauli Exclusion Principle is a rule from quantum mechanics that states that two fermions (particles with half-integer spin) can’t occupy the same state. Fermions include electrons, quarks, protons…essentially, all the particles that make up matter. Many people learn about the Pauli Exclusion Principle first in a chemistry class, where it explains why electrons fall into different energy levels in atoms: once one energy level “fills up”, no more electrons can occupy the same state, and any additional electrons are “excluded” and must occupy a different energy level.

Those 1s electrons are such a clique!

In contrast, bosons (like photons, or the Higgs) can all occupy the same state. It’s what allows for things like lasers, and it’s why all the matter we’re used to is made out of fermions: because fermions can’t occupy the same state as each other, as you add more fermions the structures they form have to become more and more complicated.
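Here’s the analogy in miniature (a toy of my own, with made-up energy levels that hold two particles each):

```python
# Toy illustration of the exclusion principle: fermions fill levels two
# at a time (spin up/down), while bosons all pile into the ground state.
def fill_fermions(n_particles, capacity=2):
    levels = []
    while n_particles > 0:
        levels.append(min(n_particles, capacity))  # the level fills up...
        n_particles -= levels[-1]                  # ...extras are "excluded" upward
    return levels

def fill_bosons(n_particles):
    return [n_particles]  # no exclusion: everyone in the same state

print(fill_fermions(10))  # [2, 2, 2, 2, 2]: structure piles upward
print(fill_bosons(10))    # [10]: one big happy pile
```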

Experimentalists are a little like bosons. While you can’t stuff two experimentalists into the same quantum state, you can get them working on very similar projects. They can form large collaborations, with each additional researcher making the experiment that much easier. They can replicate each other’s work, making sure it was accurate. They can take some physical phenomenon and subject it to a battery of tests, so that someone is bound to learn something.

Theorists, on the other hand, are much more like fermions. In theory, there’s very little reason to work on something that someone else is already doing. Replication doesn’t mean very much: the purest theory involves mathematical proofs, where replication is essentially pointless. Theorists do form collaborations, but they don’t have the same need for armies of technicians and grad students that experimentalists do. With no physical objects to work on, there’s a limit to how much can be done pursuing one particular problem, and if there really are a lot of options they can be pursued by one person with a cluster.

Like fermions, then, theorists expand to fill the projects available. If an idea is viable, someone will probably work on it, and once they do, there isn’t much reason for someone else to do the same thing.

This makes theory a lot harder to influence than experiment. You can write the most beautiful thinkpiece possible to persuade theorists to study the deep questions of the universe, but if there aren’t any real calculations available nothing will change. Contrary to public perception, theoretical physicists aren’t paid to just sit around thinking all day: we calculate, compute, and publish, and if a topic doesn’t lend itself to that then we won’t get much mileage out of it. And no matter what you try to preferentially fund with grants, mostly you’ll just get people re-branding what they’re already doing, shifting a few superficial details to qualify.

Theorists won’t occupy the same states, so if you want to influence theorists you need to make sure there are open states where you’re trying to get them to go. Historically, theorists have shifted when new states have opened up: new data from experiment that needed a novel explanation, new mathematical concepts that opened up new types of calculations. You want there to be fewer string theorists, or more focus on the deep questions? Give us something concrete to do, and I guarantee you’ll get theorists flooding in.

Amplitudes Megapost

If you met me on a plane and asked me what I do, I’d probably lead with something like this:

“I come up with mathematical tricks to make particle physics calculations easier.”

People like me, who research these tricks, are sometimes known as amplitudeologists. We study scattering amplitudes, mathematical formulas used to calculate the probabilities of different things happening when sub-atomic particles collide.

Why do we want to make calculations easier? Because particle physics is hard!

More specifically, calculations in particle physics can be hard for three broad reasons: lots of loops, lots of legs, or more complicated theories.

Loops measure precision. They’re called loops because more complicated Feynman diagrams contain “loops” of particles, while the simplest, with no loops at all, are called “trees”. The more loops you include, the more precise your calculation becomes, but it also becomes more complicated.
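To see the flavor of this (a toy series of my own invention, not a real amplitude), imagine each loop order contributing the next power of a small coupling:

```python
# Toy "loop expansion": partial sums of a series in a coupling g, where
# the zero-loop term plays the role of the "tree" result and each loop
# order adds the next power of g.
g = 0.1
exact = 1.0 / (1.0 - g)  # pretend this is the exact, all-loop answer

partial = 0.0
for loops in range(5):
    partial += g**loops
    print(f"{loops} loops: {partial:.6f}  (exact: {exact:.6f})")
```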

Legs are the number of particles involved. If two particles collide and bounce off each other, then there are a total of four legs: two from the incoming particles, two from the outgoing ones. Calculations with more legs are almost always more complicated than calculations with fewer.

Most of the time, our end-goal is to calculate things that are relevant to the real world. Usually, this means QCD, or Quantum Chromodynamics, the theory of quarks and gluons. QCD is very complicated, though. Often, we work to hone our techniques on simpler theories first. N=4 super Yang-Mills has been called the simplest quantum field theory, particularly the further simplified, planar version. If you want a basic overview of it, check out the Handy Handbooks tab at the top of my blog. Often, progress in amplitudeology involves adapting tricks from planar N=4 super Yang-Mills to more complicated, and more realistic, theories.

I should point out that our goal in amplitudeology isn’t always to do more complicated calculations. Sometimes, it’s about doing a calculation we already know how to do, but in a way that’s more insightful. This lets us learn more about the theories we’re studying, as well as gain insight into larger problems like the nature of space and time.

So what sorts of tricks do we use to do all this? Well, there are a few broad categories…

Generalized Unitarity

The prizewinning idea that started it all, generalized unitarity came out of the collaboration of Zvi Bern, Lance Dixon, and David Kosower, starting in the 90’s. The core of the idea is difficult to describe in a quick sentence, but it essentially boils down to noticing that, rather than thinking about every single multi-loop Feynman diagram independently, you can think of loop diagrams as what you get when you sew trees together.

This is a very powerful idea. These days, pretty much everyone who studies amplitudeology learns it, and it’s proven pivotal for a wide array of applications.

In planar N=4 super Yang-Mills it’s one of the techniques that can go to exceptionally high loop order, to six or seven loops. If you drop the “planar” condition, it’s still quite powerful. If you do things right, as Zvi Bern, John Joseph Carrasco, and Henrik Johansson found, you can get results in N=8 supergravity “for free”. This raises what has ended up being one of the big questions of our sub-field: does N=8 supergravity behave like most attempts at theories of quantum gravity, with pesky infinite results that we don’t know how to deal with, or does it behave like N=4 super Yang-Mills, which has no pesky infinities at all? Answering this question requires a dizzying seven-loop calculation, the mystique of which got me in to the field in the first place. Unfortunately, despite diligent efforts from Bern and collaborators, they’ve been stuck at four loops for quite some time. In the meantime they’ve been extending things in all the other amplitudes-directions: more legs, more complicated theories (in this case, supergravity with less supersymmetry), and more insight. Recently, it looks like they may have found a way around this hurdle, so the mystery at seven loops may not be so far away after all.

Generalized unitarity is also one of the most powerful amplitudes tricks for real-world theories, in particular QCD. In this case, its main virtue is in legs, not loops, going up to seven particles at one loop for practical, LHC-relevant calculations. There’s also a major effort to push this to two loops, with some success.

BCFW Recursion

If generalized unitarity was the trick that got experimentalists to sit up and take notice, BCFW is the one that got the attention of the pure theorists. In the mid-2000s Ruth Britto, Freddy Cachazo, and Bo Feng (later joined by theoretical physics superstar Ed Witten) figured out a way to build up tree amplitudes to any number of legs recursively for any theory, starting with three particles and working their way up. Their method was both fairly efficient and extremely insightful, and it’s another trick that’s made its way into every amplitudeologist’s arsenal. Further developments led to a recursive procedure that could work up to any number of loops in planar N=4 super Yang-Mills, which while not especially efficient did lead to…

The Positive Grassmannian, and the Amplituhedron

The work of Nima Arkani-Hamed, Jacob Bourjaily, Freddy Cachazo, Alexander Goncharov, Alexander Postnikov, and Jaroslav Trnka on the Positive Grassmannian (and more recently the Amplituhedron) has pushed the “more insight” direction impressively far. The Amplituhedron in particular captured the public’s imagination, as well as that of mathematicians, by packaging the all-loop amplitude into a particularly clean, mathematically meaningful form. Now they’re working on pushing this deep understanding to non-planar N=4 super Yang-Mills.

Integration Tricks

Generalized unitarity and the Amplituhedron have one thing in common: neither gives the full result. Calculating scattering amplitudes traditionally is a two-step process: first, add up all possible Feynman diagrams, then add up (integrate) all possible momenta. Generalized unitarity and the Amplituhedron let you skip the diagrams, but in both cases you still need to integrate. There’s a whole lore of integration techniques, from breaking things up into a basis of known “master” integrals (an example paper on this theme here), to attacking the integrations numerically via a process known as sector decomposition (one of the better programs that does this here). Higher-loop integrations are typically quite tough, even with these techniques.

Polylogarithms

These integrals will usually result in a type of mathematical function known as polylogarithms, a class of transcendental functions. Understanding these functions has led to an enormous amount of progress (and I’m not just saying that because it’s what I work on 😉 ).

It all started when Alexander Goncharov, Mark Spradlin, Cristian Vergu, and Anastasia Volovich figured out how to write a laboriously calculated seventeen-page two-loop six-particle amplitude in just two lines. To do this, they used mathematical properties of polylogarithms that were previously largely unknown to physicists. Their success inspired Lance Dixon, James Drummond, and Johannes Henn to use these methods to guess the correct answer at three loops, work that was completed with my involvement.
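To give a taste of the polylogarithm properties involved (my own example, far simpler than anything in the actual papers): the dilogarithm is defined by an integral, and obeys identities that aren’t at all obvious from that definition, like Euler’s reflection formula. Both are easy to check numerically:

```python
# Numerical check with mpmath: the dilogarithm Li_2 as an integral, and
# Euler's reflection identity Li_2(x) + Li_2(1-x) = pi^2/6 - ln(x)ln(1-x).
from mpmath import mp, mpf, polylog, quad, log, pi

mp.dps = 20
x = mpf("0.3")

# Li_2(x) = -integral of ln(1-t)/t from 0 to x
li2_from_integral = -quad(lambda t: log(1 - t) / t, [0, x])
print(li2_from_integral, polylog(2, x))  # the two agree

lhs = polylog(2, x) + polylog(2, 1 - x)
rhs = pi**2 / 6 - log(x) * log(1 - x)
print(lhs - rhs)  # ~0: the identity holds
```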

Since then, both groups have made a lot of progress. In general, Spradlin, Volovich, and collaborators have been pushing things farther in terms of legs and insight, while Dixon and collaborators have made progress at higher loops. So far we’ve gotten to four loops (here, plus unpublished work), while the others have proposals for any number of particles at two loops and substantial progress for seven particles at three loops.

All of this is still for planar N=4 super Yang-Mills. Using these tricks for more complicated theories is trickier. However, while you usually can’t just guess the answer like you can for N=4, a good understanding of the properties of polylogarithms can still take you quite far.

Integrability

Why did the polylogarithm folks start with six particles? Wouldn’t four or five have been easier?

As it turns out, four and five particle amplitudes are indeed easier, so much so that for planar N=4 super Yang-Mills they’re known up to any loop order. And while a number of elements went in to that result, one that really filled in the details was integrability.

Integrability is tough to describe in a short sentence, but essentially it involves describing highly symmetric systems all in one go, without having to use the step-by-step approximations of perturbation theory. For our purposes, this means bypassing the loop-by-loop perspective altogether.

Integrability is a substantial field in its own right, probably bigger than amplitudeology. There’s a lot going on, and only some of it touches on amplitudes-related topics. When it does, though, it’s quite impressive, with the flagship example being the work of Benjamin Basso, Amit Sever, and Pedro Vieira. They are able to compute amplitudes in planar N=4 super Yang-Mills at any and all loops, making an approximation based on the particle momenta instead. These days, they’re working on making their method more complete and robust, while building up understanding of other structures that might eventually allow them to say something about the non-planar case.

CHY and the Ambitwistor String

Ed Witten’s involvement in BCFW didn’t come completely out of left field. He had shown interest in N=4 super Yang-Mills earlier, with the invention of the twistor string. The twistor string calculates tree amplitudes in N=4 super Yang-Mills as the result of a string-theory-like framework. The advantage to such a framework is that, while normal quantum field theory involves large numbers of different diagrams, string theory only has one diagram “shape” for each loop.

This advantage has been thrust back into the spotlight recently via the work of Freddy Cachazo, Song He, and Ellis Ye Yuan. Their CHY formula works not just for N=4 super Yang-Mills, but for a wide (and growing) variety of other theories, allowing them to examine those theories’ properties in a particularly powerful way. Meanwhile, Lionel Mason and David Skinner have given the CHY formula a more solid theoretical grounding in the form of their ambitwistor string, which they have recently been able to generalize to a loop-level proposal.

Amplitudeology is a large and growing field, and there are definitely important people I haven’t mentioned. Some, like Henriette Elvang and Yu-tin Huang, have been involved with many different things over the years, so there wasn’t a clear place to put them. Others are part of the European community, where there’s a lot of work on string theory amplitudes and on pushing the boundaries of polylogarithms. Still others were left out simply because I ran out of room. I’ve only covered a small part of the field here, but I hope that small part gives you an idea of the richness of the whole.

Pentaquarks!

Earlier this week, the LHCb experiment at the Large Hadron Collider announced that, after painstakingly analyzing the data from earlier runs, they have decisive evidence of a previously unobserved particle: the pentaquark.

What’s a pentaquark? In simple terms, it’s five quarks stuck together. Stick two up quarks and a down quark together, and you get a proton. Stick a quark and an antiquark together, and you get a meson of some sort. Five, and you get a pentaquark.

(In this case, if you’re curious: two up quarks, one down quark, one charm quark and one anti-charm quark.)
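As a quick sanity check (my own arithmetic, using the standard quark charges), the electric charges of that combination add up to the same charge as a proton:

```python
# Electric charge of the uud-c-cbar pentaquark, in units of the proton
# charge: up-type quarks carry +2/3, down-type -1/3, antiquarks flip sign.
from fractions import Fraction

charge = {"u": Fraction(2, 3), "d": Fraction(-1, 3),
          "c": Fraction(2, 3), "c-bar": Fraction(-2, 3)}

pentaquark = ["u", "u", "d", "c", "c-bar"]
print(sum(charge[q] for q in pentaquark))  # 1: the same charge as a proton
```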


Crucially, this means pentaquarks are not fundamental particles. Fundamental particles aren’t like species, but composite particles like pentaquarks are: they’re examples of a dizzying variety of combinations built from an already-known set of basic building blocks.

So why is this discovery exciting? If we already knew that quarks existed, and we already knew the forces between them, shouldn’t we already know all about pentaquarks?

Well, not really. People definitely expected pentaquarks to exist: they were predicted fifty years ago. But their exact properties, or how likely they were to show up? Largely unknown.

Quantum field theory is hard, and this is especially true of QCD, the theory of quarks and gluons. We know the basic rules, but calculating their large-scale consequences, which composite particles we’re going to detect and which we won’t, is still largely out of our reach. We have to supplement first-principles calculations with experimental data, taking bits and pieces and approximations until we get something reasonably sensible.

This is an important point in general, not just for pentaquarks. Often, people get very excited about the idea of a “theory of everything”. At best, such a theory would tell us the fundamental rules that govern the universe. The thing is, we already know many of these rules, even if we don’t yet know all of them. What we can’t do, in general, is predict their full consequences. Most of physics, most of science in general, is about investigating these consequences, coming up with models for things we can’t dream of calculating from first principles, and it really does start as early as “what composite particles can you make out of quarks?”

Pentaquarks have been a long time coming, long enough that occasionally someone would propose a model explaining why they didn’t exist. There are still other exotic states of quarks and gluons out there, like glueballs, that have been predicted but not yet observed. It’s going to take time, effort, and data before we fully understand composite particles, even though we know the rules of QCD.

Got Branes on the Brain?

You’ve probably heard it said that string theory contains two types of strings: open, and closed. Closed strings are closed loops, like rubber bands. They give rise to gravity, and in superstring theories to supergravity. Open strings have loose ends, like a rubber band cut in half. They give us Yang-Mills forces, and super Yang-Mills for superstrings.

String theory has more than just strings, though. It also has branes.

Branes, short for membranes, are objects like strings but in other dimensions. The simplest to imagine is a two-dimensional membrane, like a sheet of paper. A three-dimensional membrane would fill all of 3D space, like an infinite cube of jello. Higher dimensional membranes also exist, up to string theory’s limit of nine spatial dimensions.

But you can keep imagining them as sheets of paper if you’d like.

So where did these branes come from? Why doesn’t string theory just have strings?

You might think we’re just trying to be as general as possible, including every possible dimension of object. Strangely enough, this isn’t actually what’s going on! As it turns out, branes can be in lower dimensions too: there are zero-dimensional branes that behave like particles, and one-dimensional branes that are similar to, but crucially not the same thing as, the strings we started out with! If we were just trying to get an object for every dimension we wouldn’t need one-dimensional branes, we’d already have strings!

(By the way, there are also “-1” dimensional branes, but that’s a somewhat more advanced topic.)

Instead, branes come from some strange properties of open strings.

I told you that the ends of open strings are “loose”, but that’s just loose language on my part. Mathematically, there are two options: the ends can be free to wander, or they can be fixed in place. If they’re free, they can move wherever they like with no resistance. If they’re fixed, any attempt to move them will just set them vibrating.

The thing is, you choose between these two options not just once, but once per dimension. You could have the end of the string free to move in two dimensions, but fixed in another, like a magnet was sticking it to some sort of 2D surface…like a brane.

Brane-worlds are dangerous places to live.

In mathematics, the fixed directions at the end of the string are said to obey Dirichlet boundary conditions, which is why these branes are called Dirichlet branes, or D-branes. In general, D-branes are things strings can end on. That’s why you can have D1-branes, which despite their string-like shape are different from actual strings: rather, they’re things strings can end on.
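In standard worldsheet notation (my gloss; the post keeps things word-based), the per-direction choice looks like this:

```latex
\partial_\sigma X^\mu \big|_{\text{end}} = 0 \quad \text{(Neumann: the end moves freely)}
\qquad \text{or} \qquad
X^\mu \big|_{\text{end}} = x^\mu_0 \quad \text{(Dirichlet: the end is pinned)}
```

Pick Neumann conditions along p spatial directions (plus time) and Dirichlet along the rest, and the string’s endpoint is pinned to a p-dimensional surface: a Dp-brane.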

You might wonder whether we really need these things. Sure, they’re allowed mathematically, but is that really a good enough reason?

As it turns out, D-branes are not merely allowed in string theory, they are required, due to something called T-duality. I’ve talked about dualities before: they’re relationships between different theories that secretly compute the same thing. T-duality was one of the first-discovered dualities in string theory, and it involves relationships between strings wrapped around circular dimensions.

If a dimension is circular, then closed strings can either move around the circle, or wrap around it instead. As it turns out, a string moving around a small circle has the same energy as a string wrapped around a big circle, where here “small” and “big” are comparisons to the length of the string. It’s not just the energy, though: for every physical quantity, the two descriptions (small circle with strings traveling around it, big circle with strings wrapped around it) give the same answer: the two theories are dual.
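In equations (the standard textbook formula for a closed bosonic string on a circle of radius R, which the post doesn’t spell out), the spectrum makes this symmetry manifest:

```latex
M^2 = \left(\frac{n}{R}\right)^2 + \left(\frac{wR}{\alpha'}\right)^2
      + \frac{2}{\alpha'}\left(N + \tilde{N} - 2\right)
```

Here n counts units of momentum around the circle, w counts how many times the string winds around it, and α′ sets the string length. Sending R → α′/R while swapping n ↔ w leaves M² unchanged: that is T-duality.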

If it works with closed strings, what about open strings?

Here something weird happens: if you perform the T-duality operation (switch between the small circle and the big one), then the ends of open strings switch from being free to being fixed! This means that even if we start out with no D-branes at all, our theory was equivalent to one with D-branes all along! No matter what we do, we can’t write down a theory that doesn’t have D-branes!

As it turns out, we could have seen this coming even without string theory, just by looking at (super)gravity.

Long before people saw astrophysical evidence for black holes, before they even figured out that stars could collapse, they worked out the black hole solution in general relativity. Without knowing anything about the sort of matter that could form a black hole, they could nevertheless calculate what space-time would look like around one.

In ten dimensional supergravity, you can do these same sorts of calculations. Instead of getting black holes, though, you get black branes. Rather than showing what space-time looks like around a high-mass point, they showed what it would look like around a higher dimensional, membrane-shaped object. And miraculously, they corresponded exactly to the D-branes that are supposed to be part of string theory!

So if we want string theory, or even supergravity, we’re stuck with D-branes. It’s a good thing we are, too, because D-branes are very useful. In the past, I’ve talked about how most of the fundamental forces of nature have multiple types of charge. One way for string theory to reproduce these multiple types of charge is with D-branes. If each open string is connected to two D-branes, it can behave like a gluon, carrying a pair of charges. Since each end of the string is stuck to its respective brane, the charge corresponding to each brane must be conserved, just like charges in the real world.
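The counting behind that last point is easy to see (a toy enumeration of my own): with N branes, a string can start on any brane and end on any brane, which matches the number of gauge bosons of a U(N) force.

```python
# Count open-string states stretched between N D-branes: one state for
# each (start brane, end brane) pair of charges, N**2 in total, which is
# the number of gauge bosons of a U(N) force.
N = 3
states = [(start, end) for start in range(N) for end in range(N)]
print(len(states))  # 9 = N**2
```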

D-branes aren’t one of the original assumptions of string theory, but they’re a large part of what makes string theory tick. M theory, string theory’s big brother, doesn’t have strings at all: just two- and five-dimensional branes. So be grateful for branes: they make the world a much more interesting place.