Tag Archives: amplitudes

Amplitudes 2016

I’m at Amplitudes this week, in Stockholm.


The land of twilight at 11pm

Last year, I wrote a post giving a tour of the field. If I had to write it again this year, most of the categories would be the same, but the achievements listed would have advanced: more loops and legs, more complicated theories, and more insight.

The ambitwistor string now goes to two loops, while my collaborators and I have pushed the polylogarithm program to five loops (dedicated post on that soon!). A decent number of techniques can now be applied to QCD, including a differential equation-based method that was used to find a four-loop, three-particle amplitude. Others tied together different approaches, found novel structures in string theory, or linked amplitudes techniques to physics from other disciplines. The talks have been going up on YouTube pretty quickly, due to diligent work by Nordita’s tech guy, so if you’re at all interested, check them out!

What Does It Mean to Know the Answer?

My sub-field isn’t big on philosophical debates. We don’t tend to get hung up on how to measure an infinite universe, or argue about how to interpret quantum mechanics. Instead, we develop new calculation techniques, which tends to nicely sidestep all of that.

If there’s anything we do get philosophical about, though, any question with a little bit of ambiguity, it’s this: What counts as an analytic result?

“Analytic” here is in contrast to “numerical”. If all we need is a number and we don’t care if it’s slightly off, we can use numerical methods. We have a computer use some estimation trick, repeating steps over and over again until we have approximately the right answer.

“Analytic”, then, refers to everything else. When you want an analytic result, you want something exact. Most of the time, you don’t just want a single number: you want a function, one that can give you numbers for whichever situation you’re interested in.

It might sound like there’s no ambiguity there. If it’s a function, with sines and cosines and the like, then it’s clearly analytic. If you can only get numbers out through some approximation, it’s numerical. But as the following example shows, things can get a bit more complicated.

Suppose you’re trying to calculate something, and you find the answer is some messy integral. Still, you’ve simplified the integral enough that you can do numerical integration and get some approximate numbers out. What’s more, you can express the integral as an infinite series, so that any finite number of terms will get close to the correct result. Maybe you even know a few special cases, situations where you plug specific numbers in and you do get an exact answer.

It might sound like you only know the answer numerically. As it turns out, though, this is roughly how your computer handles sines and cosines.

When your computer tries to calculate a sine or a cosine, it doesn’t have access to the exact solution all of the time. It does have some special cases, but the rest of the time it’s using an infinite series, or some other numerical trick. Type a random sine into your calculator and the result will be just as approximate as if you had done a numerical integration.
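
To make the contrast concrete, here’s a minimal sketch of that idea: approximating sine by truncating its infinite series. (This is my own illustration; production math libraries actually use argument reduction and carefully tuned polynomials rather than a raw Taylor series.)

```python
import math

def sine_series(x, terms=10):
    """Approximate sin(x) by summing the first `terms` terms of its
    Taylor series: sin(x) = x - x^3/3! + x^5/5! - ..."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    return total

# With enough terms, the truncated series agrees with the library sine
# to within floating-point precision:
print(abs(sine_series(1.0) - math.sin(1.0)) < 1e-12)
```

The point is that, from the computer’s perspective, this “exact” function is handled the same way as any numerical approximation: keep adding terms until the answer is close enough.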

So what’s the real difference?

Rather than how we get numbers out, think about what else we know. We know how to take derivatives of sines, and how to integrate them. We know how to take limits, and series expansions. And we know their relations to other functions, including how to express them in terms of other things.

If you can do that with your integral, then you’ve probably got an analytic result. If you can’t, then you don’t.

What if you have only some of the requirements, but not the others? What if you can take derivatives, but don’t know all of the identities between your functions? What if you can do series expansions, but only in some limits? What if you can do all the above, but can’t get numbers out without a supercomputer?

That’s where the ambiguity sets in.

In the end, whether or not we have the full analytic answer is a matter of degree. The closer we can get to functions that mathematicians have studied and understood, the better grasp we have of our answer and the more “analytic” it is. In practice, we end up with a very pragmatic approach to knowledge: whether we know the answer depends entirely on what we can do with it.

Those Wacky 60’s Physicists

The 60’s were a weird time in academia. Psychologists were busy experimenting with LSD, seeing if they could convince people to electrocute each other, and otherwise doing the sorts of shenanigans that ended up saddling them with Institutional Review Boards so that nowadays they can’t even hand out surveys without a ten page form attesting that it won’t have adverse effects on pregnant women.

We don’t have IRBs in theoretical physics. We didn’t get quite as wacky as the psychologists did. But the 60’s were still a time of utopian dreams and experimentation, even in physics. We may not have done unethical experiments on people…but we did have the Analytic S-Matrix Program.

The Analytic S-Matrix Program was an attempt to rebuild quantum field theory from the ground up. The “S” in S-Matrix stands for “scattering”: the S-Matrix is an enormous matrix that tells you, for each set of incoming particles, the probability that they scatter into some new set of outgoing particles. Normally, this gets calculated piece by piece with what are called Feynman diagrams. The goal of the Analytic S-Matrix Program was a loftier one: to derive the S-Matrix from first principles, without building it out of quantum field theory pieces. Without Feynman diagrams’ reliance on space and time, people like Geoffrey Chew, Stanley Mandelstam, Tullio Regge, and Lev Landau hoped to reach a deeper understanding of fundamental physics.

If this sounds familiar, it should. Amplitudeologists like me view the physicists of the Analytic S-Matrix Program as our spiritual ancestors. Like us, they tried to skip the mess of Feynman diagrams, looking for mathematical tricks and unexpected symmetries to show them the way forward.

Unfortunately, they didn’t have the tools we do now. They didn’t understand the mathematical functions they needed, nor did they have novel ways of writing down their results like the amplituhedron. Instead, they had to work with what they knew, which in practice usually meant going back to Feynman diagrams.

Paradoxically then, much of the lasting impact of the Analytic S-Matrix Program has been on how we understand the results of Feynman diagram calculations. Just as psychologists learn about the Milgram experiment in school, we learn about Mandelstam variables and Regge trajectories. Recently, we’ve been digging up old concepts from those days and finding new applications, like the recent work on Landau singularities, or some as-yet unpublished work I’ve been doing.

Of course, this post wouldn’t be complete without mentioning the Analytic S-Matrix Program’s most illustrious child, String Theory. Some of the mathematics cooked up by the physicists of the 60’s, while dead ends for the problems they were trying to solve, ended up revealing a whole new world of potential.

The physicists of the 60’s were overly optimistic. Nevertheless, their work opened up questions that are still worth asking today. Much as psychologists can’t ignore what they got up to in the 60’s, it’s important for physicists to be aware of our history. You never know what you might dig up.


And as LeVar Burton would say, you don’t have to take my word for it.

Four Gravitons and Some Wildly Irresponsible Amplitudes Predictions

My post on the “physics of decimals” a couple of weeks back caught physics blogger Luboš Motl’s attention, with predictable results. Mostly, this led to a rather unproductive debate about semantics, but he did bring up one thing that I think deserves some further clarification.

In my post, I asked you to imagine asking a genie for the full consequences of quantum field theory. Short of genie-based magic, is this the sort of thing I think it’s at all possible to know?


A Candle of Invocation? Sure, why not.

In a word, no.

The world is messy, not the sort of thing that tends to be described by neat exact solutions. That’s why we use approximations, and it’s why physicists can’t just step in and solve biology or psychology by deriving everything from first principles.

That said, the nice thing about approximations is that there’s often room for improvement. Sometimes this is quantitative, literally pushing to the next order of decimals, while sometimes it’s qualitative, viewing problems from a new perspective and attacking them from a new approach.

I’d like to give you some idea of the sorts of improvements I think are possible. I’ll focus on scattering amplitudes, since they’re my field. In order to be precise, I’ll be using technical terms here without much explanation; if you’re curious about something specific go ahead and ask in the comments. Finally, there are no implied time-scales here: I’ll be rating things based on whether I think they’re likely to eventually be understood, not on how long it will take us to get there.

Let’s begin with the most likely category:

Probably going to happen:

Mathematicians characterize the set of n-point cluster polylogarithms whose collinear limits are well-defined (n-1)-point cluster polylogarithms.

The seven-loop N=8 supergravity integrand is found, and the coefficient of its potential divergence is evaluated.

The dual Amplituhedron is found.

A general procedure is described for re-summing the L-loop coefficient of the Pentagon OPE for any L into a polylogarithmic form, at least at six points.

We figure out what the heck is up with the MHV-NMHV relation we found here.

Likely to happen, but there may be unforeseen complications:

N=8 supergravity is found to be finite at seven loops.

A symbol bootstrap becomes workable for QCD amplitudes at two or three loops, perhaps involving Landau singularities.

Something like a symbol bootstrap becomes workable for elliptic integrals, though it may only pass a “physicist” level of rigor.

Analogues to all of the work up to the actual Amplituhedron itself are performed for non-planar N=4 super Yang-Mills.

Quite possible, but I’m likely overoptimistic:

The space of n-point cluster polylogarithms whose collinear limits are well-defined (n-1)-point cluster polylogarithms that also obey the first entry condition and some number of final entry conditions turns out to be well-constrained enough that some all-loop all-point statements can be made, at least for MHV.

The enhanced cancellations observed in supergravity theories are understood, and used to provide a strong argument that N=8 supergravity is perturbatively finite.

All-multiplicity analytic QCD results at two loops, for at least the simpler helicity configurations.

The volume of the dual Amplituhedron is characterized by mathematicians and the connection to cluster polylogarithms is fully explored.

A non-planar Amplituhedron is found.

Less likely, but if all of the above happens I would not be all that surprised:

A way is found to double-copy the non-planar Amplituhedron to get an N=8 supergravity Amplituhedron.

The enhanced cancellations in N=8 supergravity turn out to be something “deep”: perhaps they are derivable from string theory, or provide a novel constraint on quantum gravity theories.

Various all-loop statements about the polylogarithms present in N=4 are used to make more restricted all-loop statements about QCD.

The Pentagon OPE is re-summed for finite coupling, if not into known functions then into a form that admits good numerics and various analytic manipulations. Alternatively, the sorts of functions that the Pentagon OPE can sum to are characterized and a bootstrap procedure becomes viable for them.

Irresponsible speculations, suited to public talks or grant applications:

The N=8 Amplituhedron leads to some sort of reformulation of space-time in a way that solves various quantum gravity paradoxes.

The sorts of mathematical objects found in the finite-coupling resummation of the Pentagon OPE lead to a revival of the original analytic S-matrix program, now with an actual chance to succeed.

Extremely unlikely:

Analytic all-loop QCD results.

Magical genie land:

Analytic finite coupling QCD results.

It Was Thirty Years Ago Yesterday, Parke and Taylor Taught the Band to Play…

Just a short post this week. I’m at MHV@30, a conference at Fermilab in honor of Parke and Taylor’s landmark paper from March 17, 1986. I don’t have time to write up an explanation of their work’s importance, but luckily I already have.

It’s my first time visiting Fermilab. They took us on a tour of their neutrino detectors 100m underground. Since we theorists don’t visit experiments very often, it was an unusual experience.


In case you wanted to know what a neutrino beam looks like, look at the target.

The fun thing about these kinds of national labs is the sheer variety of research, from the most abstract theory to the most grounded experiments, that spring from the same core goals. Physics almost always involves a diversity of viewpoints and interests, and that’s nowhere more obvious than here.

Symbology 101

I work with functions called polylogarithms. There’s a whole field of techniques out there for manipulating these functions, and for better or worse people often refer to them as symbology.

My plan for this post is to give a general feel for how symbology works: what we know how to do, and why. It’s going to be a lot more technical than my usual posts, so the lay reader may want to skip this one. At the same time, I’m not planning to go through anything rigorously. If you want that sort of thing there are plenty of good papers on the subject; here’s one of mine that covers the basics. Rather, I’m going to draw what I hope is an illuminating sketch of what it is we do.

Still here? Let’s start with an easy question.

What’s a log?

balch_park_hollow_log

Ok, besides one of these.

For our purposes, a log is what happens when you integrate dx/x.

\log x=\int \frac{dx}{x}

Schematically, a polylog is then what happens when you iterate these integrations:

G=\int \frac{dx_1}{x_1} \int \frac{dx_2}{x_2}\ldots

The simplest thing you can get from this is of course just a product of logs. The next most simple thing is one of the classical polylogarithms. But in general, this is a much wider class of functions, known as multiple, or Goncharov, polylogarithms.
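
To make this concrete, here’s a numerical check (my own illustration, with made-up function names) that the simplest classical polylogarithm beyond the log, the dilogarithm Li_2(x), comes out the same whether you use its iterated-integral definition or its series, Li_2(x) = sum of x^n/n^2:

```python
import math

def li2_integral(x, steps=100000):
    """Dilogarithm from its integral definition,
    Li_2(x) = -int_0^x log(1-t)/t dt,
    estimated with the midpoint rule (which avoids the t=0 endpoint,
    where the integrand tends smoothly to 1)."""
    h = x / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += -math.log(1 - t) / t * h
    return total

def li2_series(x, terms=200):
    """Dilogarithm from its series: Li_2(x) = sum_{n>=1} x^n / n^2."""
    return sum(x ** n / n ** 2 for n in range(1, terms + 1))

# The two representations agree numerically:
print(abs(li2_integral(0.5) - li2_series(0.5)) < 1e-6)
```

This is exactly the situation described a few posts up: an “analytic” function that, when you actually want numbers, gets evaluated by a series or a numerical integration.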

The number of integrations is the transcendental weight. Naively, you’d expect an L-loop Feynman integral in four dimensions to give you something with transcendental weight 4L. In practice, that’s not the case: some of the momentum integrations end up just giving delta functions, so in the end an L-loop amplitude has transcendental weight 2L.

In most theories, you get a mix of functions: some with weight 2L, some with weight 2L-1, etc., all the way down to rational functions. N=4 super Yang-Mills is special: there, everything is at the maximum transcendental weight. In either case, though, being able to manipulate transcendental functions is very useful, and the symbol is one of the simplest ways to do so.

The core idea of the symbol is pretty easy to state, though it takes a bit more technology to state it rigorously. Essentially, we take our schematic polylog from above, and just list the logs:

\mathcal{S}(G)=\ldots\otimes x_2\otimes x_1

(Here I have reversed the order, to agree with standard conventions.)

What does that do? Well, it reminds us that these aren’t just some weird functions we don’t understand: they’re collections of logs, and we can treat them like collections of logs.

In particular, we can do this with logs,

\log (x y)=\log x+\log y

so we can do it with symbols as well:

x_1\otimes x y\otimes x_3=x_1\otimes x \otimes x_3+x_1\otimes y\otimes x_3

Similarly, we can always get rid of unwelcome exponents, like so:

\log (x^n)=n\log x

x_1\otimes x^n\otimes x_3=n( x_1\otimes x \otimes x_3)

This means that, in general, we can always factorize any polynomial or rational function that appears in a symbol. As such, we often express symbols in terms of some fixed symbol alphabet, a basis of rational functions that can be multiplied to get any symbol entry in the function we’re working with. In general, it’s a lot easier to calculate amplitudes when we know the symbol alphabet beforehand. For six-particle amplitudes in N=4 super Yang-Mills, the symbol alphabet contains just nine “letters”, which makes it particularly easy to work with.
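
As a toy illustration of just these two rules (my own sketch, with hypothetical letter names; real symbol code works with full rational functions of the kinematic variables), here’s how a symbol term whose entries are products of powers of alphabet letters expands into terms with single-letter entries:

```python
from fractions import Fraction
from itertools import product

def expand_symbol(entries):
    """Expand one symbol term using log(xy) = log x + log y and
    log(x^n) = n log x.  Each entry is a dict {letter: exponent};
    the result maps tuples of single letters to rational coefficients."""
    expanded = {}
    # Each entry is a sum over its letters (weighted by exponents),
    # and the tensor product distributes over those sums.
    choices = [list(entry.items()) for entry in entries]
    for combo in product(*choices):
        letters = tuple(letter for letter, _ in combo)
        coeff = Fraction(1)
        for _, exponent in combo:
            coeff *= exponent
        expanded[letters] = expanded.get(letters, Fraction(0)) + coeff
    return expanded

# Example: x1 (x) x*y^2 (x) x3 = (x1 (x) x (x) x3) + 2 (x1 (x) y (x) x3)
result = expand_symbol([{'x1': 1}, {'x': 1, 'y': 2}, {'x3': 1}])
print(result[('x1', 'x', 'x3')], result[('x1', 'y', 'x3')])
```

Once every entry is reduced to a single letter of a fixed alphabet, comparing or simplifying two symbols becomes straightforward bookkeeping, which is much of why the method is so useful.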

That’s arguably the core of symbol methods. It’s how Spradlin and Volovich managed to get a seventeen-page expression down to two lines. Express a symbol in the right alphabet, and it tends to look a lot simpler. And once you know the right alphabet, it’s pretty straightforward to build an ansatz with it and constrain it until you get a candidate function for whatever you’re interested in.

There’s more technical detail I could give here: how to tell whether a symbol actually corresponds to a function, how to take limits and do series expansions and take derivatives and discontinuities…but I’m not sure whether anyone reading this would be interested.

As-is, I’ll just mention that the symbol is only part of the story. In particular, it’s a special case of something called a coproduct, which breaks up polylogarithms into various chunks. Break them down fully until each chunk is just an individual log, and you get the symbol. Break them into larger chunks, and you get other components of the coproduct, consisting of tensor products of polylogarithms with lower transcendental weight. These larger chunks mean we can capture as much of a function’s behavior as we like, while still taking advantage of these sorts of tricks. While in older papers you might have seen mention of “beyond-the-symbol” terms that the symbol couldn’t capture, this doesn’t tend to be a problem these days.

PSI Winter School

I’m at the Perimeter Scholars International Winter School this week. Perimeter Scholars International is Perimeter’s one-of-a-kind master’s program in theoretical physics, which jams the basics of theoretical physics into a one-year curriculum. We’ve got students from all over the world, including plenty of places that don’t get any snow at all. As such, it was decided that the students need to spend a week somewhere with even more snow than Waterloo: Muskoka, Ontario.


A place that occasionally manages to be this photogenic

This isn’t really a break for them, though, which is where I come in. The students have been organized into groups, and each group is working on a project. My group’s project is related to the work of integrability master Pedro Vieira. He and his collaborators came up with a way to calculate scattering amplitudes in N=4 super Yang-Mills without the usual process of loop-by-loop approximations. However, this method comes at a price: a new approximation, this time to low energy. This approximation is step-by-step, like loops, but in a different direction. It’s called the Pentagon Operator Product Expansion, or POPE for short.


Approach the POPE, and receive a blessing

What we’re trying to do is go back and add up all of the step-by-step terms in the approximation, to see if we can match to the old expansion in loops. One of Pedro’s students recently managed to do this for the first approximation (“tree” diagrams), and the group here at the Winter School is trying to use her (still unpublished) work as a jumping-off point to get to the first loop. Time will tell whether we’ll succeed…but we’re making progress, and the students are learning a lot.

Amplitudes for the New Year

Ah, the new year, time of new year’s resolutions. While some people resolve to go to the gym or take up online dating, physicists resolve to finally get that paper out.

At least, that’s the impression I get, given the number of papers posted to arXiv in the last month. Since a lot of them were amplitudes-related, I figured I’d go over some highlights.

Every once in a while people ask me for the latest news on the amplituhedron. While I don’t know what Nima is working on right now, I can point to what others have been doing. Zvi Bern, Jaroslav Trnka, and collaborators have continued to make progress towards generalizing the amplituhedron to non-planar amplitudes. Meanwhile, a group in Europe has been working on solving an issue I’ve glossed over to some extent. While the amplituhedron is often described as calculating an amplitude as the volume of a geometrical object, in fact there is a somewhat more indirect procedure involved in going from the geometrical object to the amplitude. It would be much simpler if the amplitude were actually the volume of some (different) geometrical object, and that’s what these folks are working towards. Finally, Daniele Galloni has made progress on solving a technical issue: the amplituhedron gives a mathematical recipe for the amplitude, but it doesn’t tell you how to carry out that recipe, and Galloni provides an algorithm for part of this process.

With this new algorithm, is the amplituhedron finally as efficient as older methods? Typically, the way to show that is to do a calculation with the amplituhedron that wasn’t possible before. It doesn’t look like that’s happening soon though, as Jake Bourjaily and collaborators compute an eight-loop integrand using one of the more successful of the older methods. Their paper provides a good answer to the perennial question, “why more loops?” What they find is that some of the assumptions that people made at lower loops fail to hold at this high loop order, and it becomes increasingly important to keep track of exactly how far your symmetries can take you.

Back when I visited Brown, I talked to folks there about some ongoing work. Now that they’ve published, I can talk about it. A while back, Juan Maldacena resurrected an old technique of Landau’s to solve a problem in AdS/CFT. In that paper, he suggested that Landau’s trick might help prove some of the impressive simplifications in N=4 super Yang-Mills that underlie my work and the work of those at Brown. In their new paper, the Brown group finds that, while useful, Landau’s trick doesn’t seem to fully explain the simplicity they’ve discovered. To get a little partisan, I have to say that this was largely the result I expected, and that it felt a bit condescending for Maldacena to assume that an old trick like that from the Feynman diagram era could really be enough to explain one of the big discoveries of amplitudeology.

There was also a paper by Freddy Cachazo and collaborators on an interesting trick to extend their CHY string to one loop, and one by Bo Feng and collaborators on an intriguing new method called Q-cuts that I will probably say more about in future, but I’ll sign off for now. I’ve got my own new year’s physics resolutions, and I ought to get back to work!

The “Lies to Children” Model of Science Communication, and The “Amplitudes Are Weird” Model of Amplitudes

Let me tell you a secret.

Scattering amplitudes in N=4 super Yang-Mills don’t actually make sense.

Scattering amplitudes calculate the probability that particles “scatter”: coming in from far away, interacting in some fashion, and producing new particles that travel far away in turn. N=4 super Yang-Mills is my favorite theory to work with: a highly symmetric version of the theory that describes the strong nuclear force. In particular, N=4 super Yang-Mills has conformal symmetry: if you re-scale everything larger or smaller, you should end up with the same predictions.

You might already see the contradiction here: scattering amplitudes talk about particles coming in from very far away…but due to conformal symmetry, “far away” doesn’t mean anything, since we can always re-scale it until it’s not far away anymore!

So when I say that I study scattering amplitudes in N=4 super Yang-Mills, am I lying?

Well…yes. But it’s a useful type of lie.

There’s a concept in science writing called “lies to children”, first popularized in a fantasy novel.


This one.

When you explain science to the public, it’s almost always impossible to explain everything accurately. So much background is needed to really understand most of modern science that conveying even a fraction of it would bore the average audience to tears. Instead, you need to simplify, to skip steps, and even (to be honest) to lie.

The important thing to realize here is that “lies to children” aren’t meant to mislead. Rather, they’re chosen in such a way that they give roughly the right impression, even as they leave important details out. When they told you in school that energy is always conserved, that was a lie: energy conservation is a consequence of a symmetry in time, and when that symmetry is broken energy doesn’t have to be conserved. But “energy is conserved” is a useful enough rule to let you understand most of everyday life.

In this case, the “lie” that we’re calculating scattering amplitudes is fairly close to the truth. We’re using the same methods that people use to calculate scattering amplitudes in theories where they do make sense, like QCD. For a while, people thought these scattering amplitudes would have to be zero, since anything else “wouldn’t make sense”…but in practice, we found they were remarkably similar to scattering amplitudes in other theories. Now, we have more rigorous definitions for what we’re calculating that avoid this problem, involving objects called polygonal Wilson loops.

This illustrates another principle, one that hasn’t (yet) been popularized by a fantasy novel. I’d like to call it the “amplitudes are weird” principle. Time and again we amplitudes-folks will do a calculation that doesn’t really make sense, find unexpected structure, and go back to figure out what that structure actually means. It’s been one of the defining traits of the field, and we’ve got a pretty good track record with it.

A couple of weeks back, Lance Dixon gave an interview for the SLAC website, talking about his work on quantum gravity. This was immediately jumped on by Peter Woit and Lubos Motl as ammo for the long-simmering string wars. To one extent or another, both tried to read scientific arguments into the piece. This is in general a mistake: it is in the nature of a popularization piece to contain some volume of lies-to-children, and reading a piece aimed at a lower audience can be just as confusing as reading one aimed at a higher audience.

In the remainder of this post, I’ll try to explain what Lance was talking about in a slightly higher-level way. There will still be lies-to-children involved, this is a popularization blog after all. But I should be able to clear up a few misunderstandings. Lubos probably still won’t agree with the resulting argument, but it isn’t the self-evidently wrong one he seems to think it is.

Lance Dixon has done a lot of work on quantum gravity. Those of you who’ve read my old posts might remember that quantum gravity is not so difficult in principle: general relativity naturally leads you to particles called gravitons, which can be treated just like other particles. The catch is that the theory that you get by doing this fails to be predictive: one reason why is that you get an infinite number of erroneous infinite results, which have to be papered over with an infinite number of arbitrary constants.

Working with these non-predictive theories, however, can still yield interesting results. In the article, Lance mentions the work of Bern, Carrasco, and Johansson. BCJ (as they are abbreviated) have found that calculating a gravity amplitude often just amounts to calculating a (much easier to find) Yang-Mills amplitude, and then squaring the right parts. This was originally found in the context of string theory by another three-letter group, Kawai, Lewellen, and Tye (or KLT). In string theory, it’s particularly easy to see how this works, as it’s a basic feature of how string theory represents gravity. However, the string theory relations don’t tell the whole story: in particular, they only show that this squaring procedure makes sense on a classical level. Once quantum corrections come in, there’s no known reason why this squaring trick should continue to work in non-string theories, and yet so far it has. It would be great if we had a good argument why this trick should continue to work, a proof based on string theory or otherwise: for one, it would allow us to be much more confident that our hard work trying to apply this trick will pay off! But at the moment, this falls solidly under the “amplitudes are weird” principle.

Using this trick, BCJ and collaborators (frequently including Lance Dixon) have been calculating amplitudes in N=8 supergravity, a highly symmetric version of those naive, non-predictive gravity theories. For this particular, theory, the theory you “square” for the above trick is N=4 super Yang-Mills. N=4 super Yang-Mills is special for a number of reasons, but one is that the sorts of infinite results that lose you predictive power in most other quantum field theories never come up. Remarkably, the same appears to be true of N=8 supergravity. We’re still not sure, the relevant calculation is still a bit beyond what we’re capable of. But in example after example, N=8 supergravity seems to be behaving similarly to N=4 super Yang-Mills, and not like people would have predicted from its gravitational nature. Once again, amplitudes are weird, in a way that string theory helped us discover but by no means conclusively predicted.

If N=8 supergravity doesn’t lose predictive power in this way, does that mean it could describe our world?

In a word, no. I’m not claiming that, and Lance isn’t claiming that. N=8 supergravity simply doesn’t have the right sorts of freedom to give you something like the real world, no matter how you twist it. You need a broader toolset (string theory generally) to get something realistic. The reason why we’re interested in N=8 supergravity is not because it’s a candidate for a real-world theory of quantum gravity. Rather, it’s because it tells us something about where the sorts of dangerous infinities that appear in quantum gravity theories really come from.

That’s what’s going on in the more recent paper that Lance mentioned. There, they’re not working with a supersymmetric theory, but with the naive theory you’d get from just trying to do quantum gravity based on Einstein’s equations. What they found was that the infinity you get is in a certain sense arbitrary. You can’t get rid of it, but you can shift it around (infinity times some adjustable constant 😉 ) by changing the theory in ways that aren’t physically meaningful. What this suggests is that, in a sense that hadn’t been previously appreciated, the infinite results naive gravity theories give you are arbitrary.

The inevitable question, though, is why would anyone muck around with this sort of thing when they could just use string theory? String theory never has any of these extra infinities, that’s one of its most important selling points. If we already have a perfectly good theory of quantum gravity, why mess with wrong ones?

Here, Lance’s answer dips into lies-to-children territory. In particular, Lance brings up the landscape problem: the fact that there are 10^500 configurations of string theory that might loosely resemble our world, and no clear way to sift through them to make predictions about the one we actually live in.

This is a real problem, but I wouldn’t think of it as the primary motivation here. Rather, it gets at a story people have heard before while giving the feeling of a broader issue: that string theory feels excessive.

[Image: Princess Diana’s wedding dress]

Why does this have a Wikipedia article?

Think of string theory like an enormous piece of fabric, and quantum gravity like a dress. You can definitely wrap that fabric around, pin it in the right places, and get a dress. You can in fact get any number of dresses, elaborate trains and frilly togas and all sorts of things. You have to do something with the extra material, though, find some tricky but not impossible stitching that keeps it out of the way, and you have a fair number of choices of how to do this.

From this perspective, naive quantum gravity theories are things that don’t qualify as dresses at all, scarves and socks and so forth. You can try stretching them, but it’s going to be pretty obvious you’re not really wearing a dress.

What we amplitudes-folks are looking for is more like a pencil skirt. We’re trying to figure out the minimal theory that covers the divergences, the minimal dress that preserves modesty. It would be a dress that fits the form underneath it, so we need to understand that form: the infinities that quantum gravity “wants” to give rise to, and what it takes to cancel them out. A pencil skirt is still inconvenient (it’s hard to sit down in one, for example), something you can solve by adding extra material that lets it bend more. Similarly, fixing these infinities is unlikely to be the full story: there are things called non-perturbative effects that probably won’t be cured. But finding the minimal pencil skirt will still tell us something that just pinning a vast stretch of fabric wouldn’t.

This is where “amplitudes are weird” comes in in full force. We’ve observed, repeatedly, that amplitudes in gravity theories have unexpected properties, traits that still aren’t straightforwardly explicable from the perspective of string theory. In our line of work, that’s usually a sign that we’re on the right track. If you’re a fan of the amplituhedron, the project here is along very similar lines: both are taking the results of plodding, not especially deep loop-by-loop calculations, observing novel simplifications, and asking the inevitable question: what does this mean?

That far-term perspective, looking off into the distance at possible insights about space and time, isn’t my style. (It isn’t usually Lance’s either.) But for the times that you want to tell that kind of story…well, this isn’t that outlandish of a story to tell. And unless your primary concern is whether a piece gives succor to the Woits of the world, it shouldn’t be an objectionable one.

Map Your Dead Ends

I’m at Brown this week, where I’ve been chatting with Mark Spradlin and Anastasia Volovich, two of the founding figures of my particular branch of amplitudeology. Back in 2010 they figured out how to turn this seventeen-page two-loop amplitude:

Why yes, this is one equation that covers seventeen pages. You're lucky I didn't post the eight-hundred page one.

into a formula that takes up just two lines. This got everyone very excited, and it inspired some of my collaborators to do work that would eventually give rise to the Hexagon Functions, my main research project for the past few years.

Unfortunately, when we tried to push this to higher loops, we didn’t get the sort of nice, clean-looking formulas that the Brown team did. Each “loop” is an additional layer of complexity, a series of approximations that get closer to the exact result. And so far, our answers look more like that first image than the second: hundreds of pages with no clear simplifications in sight.

At the time, people wondered whether some simple formula might be enough. As it turns out, you can write down a formula similar to the one found by Spradlin and Volovich, generalized to a higher number of loops. It’s clean, it’s symmetric, it makes sense…and it’s not the right answer.

That happens in science a lot more often than science fans might expect. When you hear about this sort of thing in the news, the guess always works: someone suggests a nice, simple answer, it turns out to be correct, and everyone goes home happy. But for every nice simple guess that works, there are dozens that don’t: promising ideas that just lead to dead ends.

One of the postdocs here at Brown worked on this “wrong” formula, and while we were chatting he asked a very interesting question: why is it wrong? Sure, we know that it’s wrong, and we can check that it’s wrong…but what, specifically, is missing? Is it “part” of the right answer in some sense, with some predictable corrections?

As it turns out, this question is well worth asking! We’ve been looking into it, and the “wrong” answer has some intriguing relationships with some of our Hexagon Functions. It may have been a “dead end”, but it could still turn out to be a useful one.

A good physics advisor will tell their students to document their work. This doesn’t just mean taking notes: most theoretical physicists maintain files, in standard journal-article format, with partial results. One reason to do this is that, if things work out, you’ll have some of your paper already written. But if something doesn’t work out, you’ll end up with a PDF on your hard drive carefully explaining an idea that didn’t quite work. Physicists often end up with dozens of these files squirreled away on their computers. Put together, they’re a map: a map of dead ends.

There’s a handy thing about having a map: it lets you retrace your steps. Any one of these paths may lead nowhere, but each one will contain some substantive work. And years later, often enough, you end up needing some of it: some piece of the calculation, some old idea. You follow the map, dig it up…and build it into something new.