Tag Archives: theoretical physics

Those Wacky 60’s Physicists

The 60’s were a weird time in academia. Psychologists were busy experimenting with LSD, seeing if they could convince people to electrocute each other, and otherwise doing the sorts of shenanigans that ended up saddling them with Institutional Review Boards, so that nowadays they can’t even hand out surveys without a ten-page form attesting that it won’t have adverse effects on pregnant women.

We don’t have IRBs in theoretical physics. We didn’t get quite as wacky as the psychologists did. But the 60’s were still a time of utopian dreams and experimentation, even in physics. We may not have done unethical experiments on people…but we did have the Analytic S-Matrix Program.

The Analytic S-Matrix Program was an attempt to rebuild quantum field theory from the ground up. The “S” in S-Matrix stands for “scattering”: the S-Matrix is an enormous matrix that tells you, for each set of incoming particles, the probability that they scatter into some new set of outgoing particles. Normally, this gets calculated piece by piece with what are called Feynman diagrams. The goal of the Analytic S-Matrix Program was a loftier one: to derive the S-Matrix from first principles, without building it out of quantum field theory pieces. Without Feynman diagrams’ reliance on space and time, people like Geoffrey Chew, Stanley Mandelstam, Tullio Regge, and Lev Landau hoped to reach a deeper understanding of fundamental physics.
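
In equations, and glossing over a lot of subtleties, each entry of the S-Matrix connects a given “in” state to a given “out” state, and its squared magnitude gives the probability:

P(\mathrm{in}\rightarrow\mathrm{out})=\left|\langle \mathrm{out}|S|\mathrm{in}\rangle\right|^2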

If this sounds familiar, it should. Amplitudeologists like me view the physicists of the Analytic S-Matrix Program as our spiritual ancestors. Like us, they tried to skip the mess of Feynman diagrams, looking for mathematical tricks and unexpected symmetries to show them the way forward.

Unfortunately, they didn’t have the tools we do now. They didn’t understand the mathematical functions they needed, nor did they have novel ways of writing down their results like the amplituhedron. Instead, they had to work with what they knew, which in practice usually meant going back to Feynman diagrams.

Paradoxically then, much of the lasting impact of the Analytic S-Matrix Program has been on how we understand the results of Feynman diagram calculations. Just as psychologists learn about the Milgram experiment in school, we learn about Mandelstam variables and Regge trajectories. Recently, we’ve been digging up old concepts from those days and finding new applications, like the recent work on Landau singularities, or some as-yet unpublished work I’ve been doing.

Of course, this post wouldn’t be complete without mentioning the Analytic S-Matrix Program’s most illustrious child, String Theory. Some of the mathematics cooked up by the physicists of the 60’s, while dead ends for the problems they were trying to solve, ended up revealing a whole new world of potential.

The physicists of the 60’s were overly optimistic. Nevertheless, their work opened up questions that are still worth asking today. Much as psychologists can’t ignore what they got up to in the 60’s, it’s important for physicists to be aware of our history. You never know what you might dig up.

[Book cover: The Analytic S-Matrix]

And as LeVar Burton would say, you don’t have to take my word for it.

A Collider’s Eye View

When it detected the Higgs, what did the LHC see, exactly?


What do you see with your detector-eyes, CMS?

The first problem is that the Higgs, like most particles produced in particle colliders, is unstable. In a very short amount of time the Higgs transforms into two or more lighter particles: a pair of photons, for example, or a pair of Z bosons. Often, these particles will decay in turn, possibly many more times. So when the LHC sees a Higgs boson, it doesn’t really “see the Higgs”.

The second problem is that you can’t “see” the lighter particles either. They’re much too small for that. Instead, the LHC has to measure their properties.

Does the particle have a charge? Then its path will curve in a magnetic field, and it will send electrical signals in silicon. So the LHC can “see” charge.
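
(To get an idea of how much that curve tells you: for a particle of charge q and momentum p moving across a magnetic field B, the radius of the curved track is

r=\frac{p}{qB}

so the tightness of the curve measures the particle’s momentum, and the direction of the curve gives the sign of its charge.)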

Can the particle be stopped, absorbed by some material? Getting absorbed releases energy, lighting up a detector. So the LHC can “see” energy, and what it takes for a particle to be absorbed.


Diagram of a collider’s “eye”

And that’s…pretty much it. When the LHC “sees” the Higgs, what it sees is a set of tracks in a magnetic field, indicating charge, and energy in its detectors, caused by absorption at different points. Everything else has to be inferred: what exactly the particles were, where they decayed, and from what. Some of it can be figured out in real-time, some is only understood later once we can add up everything and do statistics.

On the face of it, this sounds about as impossible as astrophysics. Like astrophysics, it works in part because what the colliders see is not the whole story. The strong force has to be consistent both with our observations of hadrons and with nuclear physics. Neutrinos aren’t just mysterious missing energy that we can’t track, they’re an important part of cosmology. And so on.

So in the sense of that massive, interconnected web of ideas, the LHC sees the Higgs. It sees patterns of charges and energies, binned into histograms and analyzed with statistics and cross-checked, implicitly or explicitly, against all of the rest of physics at every scale we know. All of that, together, is the collider’s eye view of the universe.

Source Your Common Sense

When I wrote that post on crackpots, one of my inspirations was a particularly annoying Twitter conversation. The guy I was talking to had convinced himself that general relativity was a mistake. He was especially pissed off by the fact that, in GR, energy is not always conserved. Screw Einstein, energy conservation is just common sense! Right?

Think a little bit about why you believe in energy conservation. Is it because you run into a lot of energy in your day-to-day life, and it’s always been conserved? Did you grow up around something that was obviously energy? Or maybe someone had to explain it to you?


Maybe you learned about it…from a physics teacher?

A lot of the time, things that seem obvious only got that way because you were taught them. “Energy” isn’t an intuitive concept, however much it’s misused that way. It’s something defined by physicists because it fills a particular role, a consequence of symmetries in nature. When you learn about energy conservation in school, that’s because it’s one of the simpler ways to explain a much bigger concept, so you shouldn’t be surprised if there are some inaccuracies. If you know where your “common sense” is coming from, you can anticipate when and how it might go awry.
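
(For the curious, here’s what “a consequence of symmetries” means, in miniature. In classical mechanics, if the Lagrangian L(q,\dot{q}) has no explicit dependence on time, the equations of motion imply

\frac{d}{dt}\left(\dot{q}\frac{\partial L}{\partial \dot{q}}-L\right)=0

and the conserved quantity in parentheses is the energy. That’s a baby version of Noether’s theorem: energy conservation comes from time-translation symmetry, and in general relativity that symmetry isn’t guaranteed, which is why energy isn’t always conserved there.)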

Similarly, if, like one of the commenters on my crackpot post, you’re uncomfortable with countable and uncountable infinities, remember that infinity isn’t “common sense” either. It’s something you learned about in a math class, from a math teacher. And just like energy conservation, it’s a simplification of a more precise concept, with epsilons and deltas and all that jazz.

It’s not possible to teach all the nuances of every topic, so naturally most people will hear a partial story. What’s important is to recognize that you heard a partial story, and not enshrine it as “common sense” when the real story comes knocking.

Don’t physicists use common sense, though? What about “physical intuition”?

Physical intuition has a lot of mystique behind it, and is often described as what separates us from the mathematicians. As such, different people mean different things by it…but under no circumstances should it be confused with pure “common sense”. Physical intuition uses analogy and experience. It involves seeing a system and anticipating the sorts of things you can do with it, like playing a game and assuming there’ll be a save button. For these sorts of analogies to work, they generally can’t be built around everyday objects or experiences. Instead, they use physical systems that are “similar” to the one under scrutiny in important ways, while being better understood in others. Crucially, physical intuition involves working in context. It’s not just uncritical acceptance of what one would naively expect.

So when your common sense is tingling, see if you can provide a source. Is that source relevant, like experience with a similar situation? Or is it in fact a half-remembered class from high school?

Starshot: The Right Kind of Longshot

On Tuesday, Yuri Milner and Stephen Hawking announced Starshot, a $100 million research initiative. The goal is to lay the groundwork for a very ambitious, but surprisingly plausible project: sending probes to the nearest star system, Alpha Centauri. Their idea is to have hundreds of ultra-light probes, each with a reflective sail a few meters in diameter. By aiming an extremely powerful laser at these sails, it should be possible to accelerate the probes up to around a fifth of the speed of light, enough to make the trip in twenty years. Here’s the most complete article I’ve found on the topic.
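
The basic arithmetic checks out: Alpha Centauri is about 4.4 light-years away, so at a fifth of the speed of light

t\approx\frac{4.4\ \text{light-years}}{0.2\,c}\approx 22\ \text{years}

ignoring the comparatively brief acceleration phase.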

I can’t comment on the engineering side of the project. The impression I get is that nothing they’re proposing is known to be impossible, but there are a lot of “ifs” along the way that might scupper things. What I can comment on is the story.

Milner and Hawking have both put quite a bit of effort recently into what essentially amounts to telling stories. Milner’s Breakthrough Prizes involve giving awards of $3 million to prominent theoretical physicists (and, more recently, mathematicians). Quite a few of my fellow theorists have criticized these prizes, arguing that the money would be better spent in a grant program like that of the Simons Foundation. While that would likely be better for science, the Breakthrough Prize isn’t really about that. Instead, it’s about telling a story: a story in which progress in theoretical physics is exalted in a public, Nobel-sized way.

Similarly, Hawking’s occasional pronouncements about aliens or AI aren’t science per se, and the media has a tendency to talk about his contributions to ongoing scientific debates out of proportion to their importance. Both of these things, though, contribute to the story of Hawking: a mascot for physics, someone to carry Einstein’s role of the most recognizable genius in the world. Hawking Inc. is about a role as much as it is about a man.

In calling Hawking and Milner’s activity “stories”, I’m not dismissing them. Stories can be important. And the story told by Starshot is a particularly important one.

Cosmology isn’t just a scientific subject, it contributes to how people see themselves. Here I don’t just mean cosmology the field, but cosmology in the broader sense of our understanding of the universe and our place in it.

A while back, I read a book called The View from the Center of the Universe. The book starts by describing the worldviews of the ancients, cosmologies in which they really did think of themselves as the center of the universe. It then suggests that this played an important role: that this kind of view of the world, in which humans have a place in the cosmos, is important to how we view ourselves. The rest of the book then attempts to construct this sort of mythological understanding out of the modern cosmological picture, with some success.

One thing the book doesn’t discuss very much, though, is the future. We care about our place in the universe not just because we want to know where we came from, but because we want to have some idea of where we’re going. We want to contribute to a greater goal, to see ourselves making progress towards something important and vast and different. That’s why so many religions have not just cosmologies, but eschatologies, why people envision armageddons and raptures.

Starshot places the future in our sight in a way that few other things do. Humanity’s spread among the stars seems like something so far distant that nothing we do now could matter to it. What Starshot does is give us something concrete, a conceptual stepping-stone that can link people into the broader narrative. Right now, people can work on advanced laser technology and optics, work on making smaller chips and lighter materials, work that would be useful and worth funding regardless of whether it was going to lead to Alpha Centauri. But because of Starshot, we can view that work as the near-term embodiment of humanity’s interstellar destiny.

That combination, bridging the gap between the distant future and our concrete present, is the kind of story people need right now. And so for once, I think Milner’s storytelling is doing exactly what it should.

GUTs vs ToEs: What Are We Unifying Here?

“Grand Unified Theory” and “Theory of Everything” may sound like meaningless grandiose titles, but they mean very different things.

In particular, Grand Unified Theory, or GUT, is a technical term, referring to a specific way to unify three of the fundamental interactions: electromagnetism, the weak force, and the strong force.


In contrast, guts unify the two fundamental intestines.

Those three forces are called Yang-Mills forces, and they can all be described in the same basic way. In particular, each has a strength (its coupling constant) and a mathematical structure, called a group, that determines how it interacts with itself.

The core idea of a GUT, then, is pretty simple: to unite the three Yang-Mills forces, they need to have the same strength (the same coupling constant) and be part of the same group.
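
For the mathematically inclined, the textbook example is the original Georgi-Glashow GUT, which fits the Standard Model’s three groups inside the smallest simple group that can hold them:

SU(3)\times SU(2)\times U(1)\subset SU(5)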

But wait! (You say, still annoyed at the pun in the above caption.) These forces don’t have the same strength at all! One of them’s strong, one of them’s weak, and one of them is electromagnetic!

As it turns out, this isn’t as much of a problem as it seems. While the three Yang-Mills forces seem to have very different strengths on an everyday scale, that’s not true at very high energies. Let’s steal a plot from Sweden’s Royal Institute of Technology:

[Plot: the inverse strengths of the three forces running with energy, with and without supersymmetry]

Why Sweden? Why not!

What’s going on in this plot?

Here, each \alpha represents the strength of a fundamental force. As the force gets stronger, \alpha gets bigger (and so \alpha^{-1} gets smaller). The variable on the x-axis is the energy scale. The grey lines represent a world without supersymmetry, while the black lines show the world in a supersymmetric model.

So based on this plot, it looks like the strengths of the fundamental forces change based on the energy scale. That’s true, but if you find that confusing there’s another, mathematically equivalent way to think about it.

You can think about each force as having some sort of ultimate strength, the strength it would have if the world weren’t quantum. Without quantum mechanics, each force would interact with particles in only the simplest of ways, corresponding to the simplest diagram here.

However, our world is quantum mechanical. Because of that, when we try to measure the strength of a force, we’re not really measuring its “ultimate strength”. Rather, we’re measuring it alongside a whole mess of other interactions, corresponding to the other diagrams in that post. These extra contributions mean that what looks like the strength of the force gets stronger or weaker depending on the energy of the particles involved.

(I’m sweeping several things under the rug here, including a few infinities and electroweak unification. But if you just want a general understanding of what’s going on, this should be a good starting point.)

If you look at the plot, you’ll see the forces meet up somewhere around 10^{16} GeV. They miss each other for the faint, non-supersymmetric lines, but they meet fairly cleanly for the supersymmetric ones.
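
If you’re curious where a plot like this comes from, here’s a minimal sketch in Python. It uses the standard one-loop running formulas, but be warned of the simplifications: the input values at the Z mass are rough, illustrative numbers, and I’m pretending the superpartners are already active at the Z mass, which a real analysis wouldn’t do.

import numpy as np

# One-loop running of the inverse coupling strengths:
#   alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - b_i/(2 pi) * log(mu / M_Z)
M_Z = 91.2  # Z boson mass, in GeV
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])  # rough inputs for U(1), SU(2), SU(3)

b_SM = np.array([41 / 10, -19 / 6, -7])  # Standard Model coefficients
b_MSSM = np.array([33 / 5, 1, -3])       # supersymmetric (MSSM) coefficients

def alpha_inv(mu, b):
    """Inverse couplings at the energy scale mu (in GeV), at one loop."""
    return alpha_inv_MZ - b / (2 * np.pi) * np.log(mu / M_Z)

for mu in (1e4, 1e10, 1e16):
    print(f"mu = {mu:.0e} GeV  SM: {alpha_inv(mu, b_SM).round(1)}  MSSM: {alpha_inv(mu, b_MSSM).round(1)}")

Run it and you’ll see the three supersymmetric values crowd together near 24 around 10^{16} GeV, while the Standard Model values never quite meet, just like the grey and black lines in the plot.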

So (at least if supersymmetry is true), making the Yang-Mills forces have the same strength is not so hard. Putting them in the same mathematical group is where things get trickier. This is because any group that contains the groups of the fundamental forces will be “bigger” than just the sum of those forces: it will contain “extra forces” that we haven’t observed yet, and these forces can do unexpected things.

In particular, the “extra forces” predicted by GUTs usually make protons unstable. As far as we can tell, protons are very long-lasting: if protons decayed too fast, we wouldn’t have stars. So if protons decay, they must do it only very rarely, detectable only with very precise experiments. These experiments are powerful enough to rule out most of the simplest GUTs. The more complicated GUTs still haven’t been ruled out, but the situation has been enough to dampen interest in GUTs as a research topic.

What about Theories of Everything, or ToEs?

While GUT is a technical term, ToE is very much not. Instead, it’s a phrase that journalists have latched onto because it sounds cool. As such, it doesn’t really have a clear definition. Usually it means uniting gravity with the other fundamental forces, but occasionally people use it to refer to a theory that also unifies the various Standard Model particles into some sort of “final theory”.

Gravity is very different from the other fundamental forces, different enough that it’s kind of silly to group them as “fundamental forces” in the first place. Thus, while GUT models are the kind of thing one can cook up and tinker with, any ToE has to be based on some novel insight, one that lets you express gravity and Yang-Mills forces as part of the same structure.

So far, string theory is the only such insight we have access to. This isn’t just me being arrogant: while there are other attempts at theories of quantum gravity, aside from some rather dubious claims, none of them even try to unify gravity with the other forces.

This doesn’t mean that string theory is necessarily right. But it does mean that if you want a different “theory of everything”, telling physicists to go out and find a new one isn’t going to be very productive. “Find a theory of everything” is a hope, not a research program, especially if you want people to throw out the one structure we have that even looks like it can do the job.

Four Gravitons and Some Wildly Irresponsible Amplitudes Predictions

My post on the “physics of decimals” a couple of weeks back caught physics blogger Luboš Motl’s attention, with predictable results. Mostly, this led to a rather unproductive debate about semantics, but he did bring up one thing that I think deserves some further clarification.

In my post, I asked you to imagine asking a genie for the full consequences of quantum field theory. Short of genie-based magic, is this the sort of thing I think it’s at all possible to know?

[Image: the genie from Aladdin]

A Candle of Invocation? Sure, why not.

In a word, no.

The world is messy, not the sort of thing that tends to be described by neat exact solutions. That’s why we use approximations, and it’s why physicists can’t just step in and solve biology or psychology by deriving everything from first principles.

That said, the nice thing about approximations is that there’s often room for improvement. Sometimes this is quantitative, literally pushing to the next order of decimals, while sometimes it’s qualitative, viewing problems from a new perspective and attacking them from a new angle.

I’d like to give you some idea of the sorts of improvements I think are possible. I’ll focus on scattering amplitudes, since they’re my field. In order to be precise, I’ll be using technical terms here without much explanation; if you’re curious about something specific go ahead and ask in the comments. Finally, there are no implied time-scales here: I’ll be rating things based on whether I think they’re likely to eventually be understood, not on how long it will take us to get there.

Let’s begin with the most likely category:

Probably going to happen:

Mathematicians characterize the set of n-point cluster polylogarithms whose collinear limits are well-defined (n-1)-point cluster polylogarithms.

The seven-loop N=8 supergravity integrand is found, and the coefficient of its potential divergence is evaluated.

The dual Amplituhedron is found.

A general procedure is described for re-summing the L-loop coefficient of the Pentagon OPE for any L into a polylogarithmic form, at least at six points.

We figure out what the heck is up with the MHV-NMHV relation we found here.

Likely to happen, but there may be unforeseen complications:

N=8 supergravity is found to be finite at seven loops.

A symbol bootstrap becomes workable for QCD amplitudes at two or three loops, perhaps involving Landau singularities.

Something like a symbol bootstrap becomes workable for elliptic integrals, though it may only pass a “physicist” level of rigor.

Analogues to all of the work up to the actual Amplituhedron itself are performed for non-planar N=4 super Yang-Mills.

Quite possible, but I’m likely overoptimistic:

The space of n-point cluster polylogarithms whose collinear limits are well-defined (n-1)-point cluster polylogarithms that also obey the first entry condition and some number of final entry conditions turns out to be well-constrained enough that some all-loop all-point statements can be made, at least for MHV.

The enhanced cancellations observed in supergravity theories are understood, and used to provide a strong argument that N=8 supergravity is perturbatively finite.

All-multiplicity analytic QCD results at two loops, for at least the simpler helicity configurations.

The volume of the dual Amplituhedron is characterized by mathematicians and the connection to cluster polylogarithms is fully explored.

A non-planar Amplituhedron is found.

Less likely, but if all of the above happens I would not be all that surprised:

A way is found to double-copy the non-planar Amplituhedron to get an N=8 supergravity Amplituhedron.

The enhanced cancellations in N=8 supergravity turn out to be something “deep”: perhaps they are derivable from string theory, or provide a novel constraint on quantum gravity theories.

Various all-loop statements about the polylogarithms present in N=4 are used to make more restricted all-loop statements about QCD.

The Pentagon OPE is re-summed for finite coupling, if not into known functions then into a form that admits good numerics and various analytic manipulations. Alternatively, the sorts of functions that the Pentagon OPE can sum to are characterized and a bootstrap procedure becomes viable for them.

Irresponsible speculations, suited to public talks or grant applications:

The N=8 Amplituhedron leads to some sort of reformulation of space-time in a way that solves various quantum gravity paradoxes.

The sorts of mathematical objects found in the finite-coupling resummation of the Pentagon OPE lead to a revival of the original analytic S-matrix program, now with an actual chance to succeed.

Extremely unlikely:

Analytic all-loop QCD results.

Magical genie land:

Analytic finite coupling QCD results.

Symbology 101

I work with functions called polylogarithms. There’s a whole field of techniques out there for manipulating these functions, and for better or worse people often refer to them as symbology.

My plan for this post is to give a general feel for how symbology works: what we know how to do, and why. It’s going to be a lot more technical than my usual posts, so the lay reader may want to skip this one. At the same time, I’m not planning to go through anything rigorously. If you want that sort of thing, there are plenty of good papers on the subject; here’s one of mine that covers the basics. Rather, I’m going to draw what I hope is an illuminating sketch of what it is we do.

Still here? Let’s start with an easy question.

What’s a log?

[Photo: a hollow log]

Ok, besides one of these.

For our purposes, a log is what happens when you integrate dx/x.

\log x=\int \frac{dx}{x}

Schematically, a polylog is then what happens when you iterate these integrations:

G=\int \frac{dx_1}{x_1} \int \frac{dx_2}{x_2}\ldots

The simplest thing you can get from this is of course just a product of logs. The next most simple thing is one of the classical polylogarithms. But in general, this is a much wider class of functions, known as multiple, or Goncharov, polylogarithms.
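
The simplest example beyond an ordinary log is the dilogarithm, which iterates exactly two integrations:

\mathrm{Li}_2(x)=\int_0^x \frac{dt}{t}\int_0^t \frac{ds}{1-s}=-\int_0^x \log(1-t)\frac{dt}{t}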

The number of integrations is the transcendental weight. Naively, you’d expect an L-loop Feynman integral in four dimensions to give you something with transcendental weight 4L. In practice, that’s not the case: some of the momentum integrations end up just giving delta functions, so in the end an L-loop amplitude has transcendental weight 2L.

In most theories, you get a mix of functions: some with weight 2L, some with weight 2L-1, etc., all the way down to rational functions. N=4 super Yang-Mills is special: there, everything is at the maximum transcendental weight. In either case, though, being able to manipulate transcendental functions is very useful, and the symbol is one of the simplest ways to do so.

The core idea of the symbol is pretty easy to state, though it takes a bit more technology to state it rigorously. Essentially, we take our schematic polylog from above, and just list the logs:

\mathcal{S}(G)=\ldots\otimes x_2\otimes x_1

(Here I’ve reversed the order to agree with standard conventions.)
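
To make this concrete: with these conventions, the dilogarithm’s symbol has two entries,

\mathcal{S}\left(\mathrm{Li}_2(x)\right)=-\left(1-x\right)\otimes x

while a simple product of logs just lists both orderings:

\mathcal{S}\left(\log x \log y\right)=x\otimes y+y\otimes x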

What does that do? Well, it reminds us that these aren’t just some weird functions we don’t understand: they’re collections of logs, and we can treat them like collections of logs.

In particular, we can do this with logs,

\log (x y)=\log x+\log y

so we can do it with symbols as well:

x_1\otimes x y\otimes x_3=x_1\otimes x \otimes x_3+x_1\otimes y\otimes x_3

Similarly, we can always get rid of unwelcome exponents, like so:

\log (x^n)=n\log x

x_1\otimes x^n\otimes x_3=n( x_1\otimes x \otimes x_3)

This means that, in general, we can always factorize any polynomial or rational function that appears in a symbol. As such, we often express symbols in terms of some fixed symbol alphabet, a basis of rational functions that can be multiplied to get any symbol entry in the function we’re working with. In general, it’s a lot easier to calculate amplitudes when we know the symbol alphabet beforehand. For six-particle amplitudes in N=4 super Yang-Mills, the symbol alphabet contains just nine “letters”, which makes it particularly easy to work with.
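
These rules are mechanical enough to code up. Here’s a toy sketch in Python using sympy, with conventions I’m making up for illustration: a symbol term is a coefficient together with a tuple of entries, and constant prefactors inside entries are simply dropped.

import sympy as sp

def expand_term(coeff, entries):
    """Expand one symbol term over irreducible factors.

    Implements log(x*y) = log(x) + log(y) and log(x^n) = n*log(x):
    each entry is factored, and the tensor product distributes over
    the resulting sums. Returns a dict {entries_tuple: coefficient}.
    """
    terms = {(): sp.Integer(coeff)}
    for entry in entries:
        _const, factors = sp.factor_list(entry)
        # Constant prefactors (the 2 in 2*x*y, say) are dropped here;
        # a careful implementation would track them as extra letters.
        new_terms = {}
        for key, c in terms.items():
            for base, power in factors:
                new_key = key + (base,)
                new_terms[new_key] = new_terms.get(new_key, 0) + c * power
        terms = new_terms
    return terms

x, y = sp.symbols('x y')
print(expand_term(1, (x, x * y, x**2)))
# prints {(x, x, x): 2, (x, y, x): 2}

Real implementations are fancier, but the core loop, factor each entry and distribute, is the same.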

This sort of rewriting is arguably the core of symbol methods. It’s how Spradlin and Volovich managed to get a seventeen-page expression down to two lines. Express a symbol in the right alphabet, and it tends to look a lot simpler. And once you know the right alphabet, it’s pretty straightforward to build an ansatz with it and constrain it until you get a candidate function for whatever you’re interested in.

There’s more technical detail I could give here: how to tell whether a symbol actually corresponds to a function, how to take limits and do series expansions and take derivatives and discontinuities…but I’m not sure whether anyone reading this would be interested.

For now, I’ll just mention that the symbol is only part of the story. In particular, it’s a special case of something called a coproduct, which breaks up polylogarithms into various chunks. Break them down fully until each chunk is just an individual log, and you get the symbol. Break them into larger chunks, and you get other components of the coproduct, consisting of tensor products of polylogarithms with lower transcendental weight. These larger chunks mean we can capture as much of a function’s behavior as we like, while still taking advantage of these sorts of tricks. While in older papers you might have seen mention of “beyond-the-symbol” terms that the symbol couldn’t capture, this doesn’t tend to be a problem these days.
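
As a taste of what that looks like: the largest-chunk piece of the dilogarithm’s coproduct pairs one full log with another,

\Delta_{1,1}\,\mathrm{Li}_2(x)=-\log(1-x)\otimes\log x

matching the symbol from before, except that now each slot holds an honest function rather than just a letter.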

PSI Winter School

I’m at the Perimeter Scholars International Winter School this week. Perimeter Scholars International is Perimeter’s one-of-a-kind master’s program, which jams the basics of theoretical physics into a one-year curriculum. We’ve got students from all over the world, including plenty of places that don’t get any snow at all. As such, it was decided that the students need to spend a week somewhere with even more snow than Waterloo: Muskoka, Ontario.


A place that occasionally manages to be this photogenic

This isn’t really a break for them, though, which is where I come in. The students have been organized into groups, and each group is working on a project. My group’s project is related to the work of integrability master Pedro Vieira. He and his collaborators came up with a way to calculate scattering amplitudes in N=4 super Yang-Mills without the usual process of loop-by-loop approximations. However, this method comes at a price: a new approximation, this time to low energy. This approximation is step-by-step, like loops, but in a different direction. It’s called the Pentagon Operator Product Expansion, or POPE for short.


Approach the POPE, and receive a blessing

What we’re trying to do is go back and add up all of the step-by-step terms in the approximation, to see if we can match to the old expansion in loops. One of Pedro’s students recently managed to do this for the first approximation (“tree” diagrams), and the group here at the Winter School is trying to use her (still unpublished) work as a jumping-off point to get to the first loop. Time will tell whether we’ll succeed…but we’re making progress, and the students are learning a lot.

Trust Your Notation as Far as You Can Prove It

Calculus contains one of the most famous examples of physicists doing something silly that irritates mathematicians. See, there are two different ways to write down a derivative, both dating back to the invention of calculus: Newton’s notation, and Leibniz’s notation.

Newton cared a lot about rigor (enough that he actually published his major physics results without calculus because he didn’t think calculus was rigorous enough, despite inventing it himself). His notation is direct and to the point: to take the derivative of a function f, you just put a dot over it,

\dot{f}

Leibniz cared a lot less about rigor, and a lot more about the scientific community. He wanted his notation to be useful and intuitive, to be the sort of thing that people would pick up and run with. To write a derivative in Leibniz notation, you write,

\frac{df}{dx}

This looks like a fraction. It’s really, really tempting to treat it like a fraction. And that’s the point: the notation is telling you that treating it like a fraction is often the right thing to do. In particular, you can do something like this,

y=\frac{df}{dx}

y dx=df

\int y dx=\int df

and what you did actually makes a certain amount of sense.
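
For example, this sort of manipulation is the standard way to solve a simple differential equation by “separating variables”:

\frac{dy}{dx}=y

\frac{dy}{y}=dx

\int \frac{dy}{y}=\int dx

\log y=x+C

y=Ae^{x}

Every step treats the derivative as a fraction, and the final answer is correct, even though each step is really shorthand for something more careful.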

The tricky thing here is that it doesn’t always make sense. You can do these sorts of tricks up to a point, but you need to be aware that they really are just tricks. Take the notation too seriously, and you end up doing things you aren’t really allowed to do. It’s always important to stay aware of what you’re really doing.

There are a lot of examples of this kind of thing in physics. In quantum field theory, we use path integrals. These aren’t really integrals…but a lot of the time, we can treat them as such. Operators in quantum mechanics can be treated like numbers and multiplied…up to a point. A friend of mine was recently getting confused by operator product expansions, where similar issues crop up.
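
(The classic example of “up to a point”: position and momentum operators don’t commute,

\hat{x}\hat{p}-\hat{p}\hat{x}=i\hbar

so the order of “multiplication” matters in a way it never does for ordinary numbers.)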

I’ve found two ways to clear up this kind of confusion. One is to unpack your notation: go back to the definitions, and make sure that what you’re doing really makes sense. This can be tedious, but you can be confident that you’re getting the right answer.

The other option is to stop treating your notation like the familiar thing it resembles, and start treating it like uncharted territory. You’re using this sort of notation to remind you of certain operations you can do, certain rules you need to follow. If you take those rules as basic, you can think about what you’re doing in terms of axioms rather than in terms of the suggestions made by your notation. Follow the right axioms, and you’ll stay within the bounds of what you’re actually allowed to do.

Either way, familiar-looking notation can help your intuition, making calculations more fluid. Just don’t trust it farther than you can prove it.

Amplitudes for the New Year

Ah, the new year, time of new year’s resolutions. While some people resolve to go to the gym or take up online dating, physicists resolve to finally get that paper out.

At least, that’s the impression I get, given the number of papers posted to arXiv in the last month. Since a lot of them were amplitudes-related, I figured I’d go over some highlights.

Every once in a while people ask me for the latest news on the amplituhedron. While I don’t know what Nima is working on right now, I can point to what others have been doing. Zvi Bern, Jaroslav Trnka, and collaborators have continued to make progress towards generalizing the amplituhedron to non-planar amplitudes. Meanwhile, a group in Europe has been working on solving an issue I’ve glossed over to some extent. While the amplituhedron is often described as calculating an amplitude as the volume of a geometrical object, in fact there is a somewhat more indirect procedure involved in going from the geometrical object to the amplitude. It would be much simpler if the amplitude were actually the volume of some (different) geometrical object, and that’s what these folks are working towards. Finally, Daniele Galloni has made progress on solving a technical issue: the amplituhedron gives a mathematical recipe for the amplitude, but it doesn’t tell you how to carry out that recipe, and Galloni provides an algorithm for part of this process.

With this new algorithm, is the amplituhedron finally as efficient as older methods? Typically, the way to show that is to do a calculation with the amplituhedron that wasn’t possible before. It doesn’t look like that’s happening soon though, as Jake Bourjaily and collaborators compute an eight-loop integrand using one of the more successful of the older methods. Their paper provides a good answer to the perennial question, “why more loops?” What they find is that some of the assumptions that people made at lower loops fail to hold at this high loop order, and it becomes increasingly important to keep track of exactly how far your symmetries can take you.

Back when I visited Brown, I talked to folks there about some ongoing work. Now that they’ve published, I can talk about it. A while back, Juan Maldacena resurrected an old technique of Landau’s to solve a problem in AdS/CFT. In that paper, he suggested that Landau’s trick might help prove some of the impressive simplifications in N=4 super Yang-Mills that underlie my work and the work of those at Brown. In their new paper, the Brown group finds that, while useful, Landau’s trick doesn’t seem to fully explain the simplicity they’ve discovered. To get a little partisan, I have to say that this was largely the result I expected, and that it felt a bit condescending for Maldacena to assume that an old trick like that from the Feynman diagram era could really be enough to explain one of the big discoveries of amplitudeology.

There was also a paper by Freddy Cachazo and collaborators on an interesting trick to extend their CHY string to one loop, and one by Bo Feng and collaborators on an intriguing new method called Q-cuts that I will probably say more about in the future, but I’ll sign off for now. I’ve got my own new year’s physics resolutions, and I ought to get back to work!