# Symbology 101

I work with functions called polylogarithms. There’s a whole field of techniques out there for manipulating these functions, and for better or worse people often refer to those techniques as symbology.

My plan for this post is to give a general feel for how symbology works: what we know how to do, and why. It’s going to be a lot more technical than my usual posts, so the lay reader may want to skip this one. At the same time, I’m not planning to go through anything rigorously. If you want that sort of thing there are plenty of good papers on the subject; here’s one of mine that covers the basics. Rather, I’m going to draw what I hope is an illuminating sketch of what it is we do.

## What’s a log?

Ok, besides one of these.

For our purposes, a log is what happens when you integrate dx/x.

$\log x=\int \frac{dx}{x}$

Schematically, a polylog is then what happens when you iterate these integrations:

$G=\int \frac{dx_1}{x_1} \int \frac{dx_2}{x_2}\ldots$

The simplest thing you can get from this is of course just a product of logs. The next most simple thing is one of the classical polylogarithms. But in general, this is a much wider class of functions, known as multiple, or Goncharov, polylogarithms.
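To make the simplest cases concrete: the classical polylogarithms are what you get by iterating one integration at a time, starting from a single log,

$\mathrm{Li}_1(x)=-\log(1-x)=\int_0^x\frac{dt}{1-t}$

$\mathrm{Li}_n(x)=\int_0^x \mathrm{Li}_{n-1}(t)\,\frac{dt}{t}$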

The number of integrations is the transcendental weight. Naively, you’d expect an L-loop Feynman integral in four dimensions to give you something with transcendental weight 4L. In practice, that’s not the case: some of the momentum integrations end up just giving delta functions, so in the end an L-loop amplitude has transcendental weight at most 2L.

In most theories, you get a mix of functions: some with weight 2L, some with weight 2L-1, etc., all the way down to rational functions. N=4 super Yang-Mills is special: there, everything is at the maximum transcendental weight. In either case, though, being able to manipulate transcendental functions is very useful, and the symbol is one of the simplest ways to do so.

The core idea of the symbol is pretty easy to state, though it takes a bit more technology to state it rigorously. Essentially, we take our schematic polylog from above, and just list the logs:

$\mathcal{S}(G)=\ldots\otimes x_2\otimes x_1$

(Here I’ve reversed the order to agree with standard conventions.)

What does that do? Well, it reminds us that these aren’t just some weird functions we don’t understand: they’re collections of logs, and we can treat them like collections of logs.
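To see this in the two simplest weight-two cases: a product of logs gives a sum over orderings, while the dilogarithm’s symbol records the two logs in its defining integral,

$\mathcal{S}(\log x\,\log y)=x\otimes y+y\otimes x$

$\mathcal{S}(\mathrm{Li}_2(x))=-(1-x)\otimes x$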

In particular, we can do this with logs,

$\log (x y)=\log x+\log y$

so we can do it with symbols as well:

$x_1\otimes x y\otimes x_3=x_1\otimes x \otimes x_3+x_1\otimes y\otimes x_3$

Similarly, we can always get rid of unwelcome exponents, like so:

$\log (x^n)=n\log x$

$x_1\otimes x^n\otimes x_3=n( x_1\otimes x \otimes x_3)$
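These two rules are easy to mechanize. Here’s a minimal sketch (not any standard package — the representation and names are my own) that stores a symbol term as a tuple of entries and uses sympy to factor each entry, applying both rules at once:

```python
# Sketch of the two symbol rules: log(a*b) = log a + log b
# and log(a**n) = n*log a, applied entry by entry.
from itertools import product
from sympy import symbols, factor_list

x, y = symbols('x y')

def expand_term(entries, coeff=1):
    """Expand each entry of a symbol term into irreducible factors.

    Returns a dict {tuple_of_entries: coefficient}. Constant prefactors
    from factoring (including signs) are dropped, as is conventional
    at symbol level.
    """
    factored = []
    for e in entries:
        const, factors = factor_list(e)  # e = const * prod(base**exp)
        factored.append([(base, exp) for base, exp in factors])
    result = {}
    # Distribute: each entry's sum of factors multiplies out into
    # one term per choice of factor, weighted by the exponents.
    for combo in product(*factored):
        key = tuple(base for base, _ in combo)
        c = coeff
        for _, exp in combo:
            c *= exp
        result[key] = result.get(key, 0) + c
    return result

# x1 ⊗ x*y ⊗ x3  ->  x1 ⊗ x ⊗ x3 + x1 ⊗ y ⊗ x3  (with x1 = x, x3 = y here)
print(expand_term((x, x*y, y)))
# x1 ⊗ x**2 ⊗ x3  ->  2 (x1 ⊗ x ⊗ x3)
print(expand_term((x, x**2, y)))
```

Running `expand_term((x, x*y, y))` reproduces the two terms in the equation above, and `expand_term((x, x**2, y))` picks up the overall factor of n = 2.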

This means that, in general, we can always factorize any polynomial or rational function that appears in a symbol. As such, we often express symbols in terms of some fixed symbol alphabet, a basis of rational functions that can be multiplied to get any symbol entry in the function we’re working with. In general, it’s a lot easier to calculate amplitudes when we know the symbol alphabet beforehand. For six-particle amplitudes in N=4 super Yang-Mills, the symbol alphabet contains just nine “letters”, which makes it particularly easy to work with.

That’s arguably the core of symbol methods. It’s how Spradlin and Volovich managed to get a seventeen-page expression down to two lines. Express a symbol in the right alphabet, and it tends to look a lot simpler. And once you know the right alphabet, it’s pretty straightforward to build an ansatz with it and constrain it until you get a candidate function for whatever you’re interested in.
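To give a sense of scale for that ansatz-building step, here’s a toy sketch (placeholder letter names, no physics constraints): over an alphabet of n letters, a weight-w symbol ansatz starts as a linear combination of all n^w tensor words, and constraints then cut the coefficients down.

```python
# Toy count of a symbol-level ansatz before imposing constraints.
from itertools import product

# A stand-in nine-letter alphabet (the letters here are just strings).
alphabet = [f'a{i}' for i in range(1, 10)]

def ansatz_words(alphabet, weight):
    """All tensor words of a given weight over the alphabet."""
    return list(product(alphabet, repeat=weight))

# A weight-4 (two-loop) ansatz starts with 9**4 = 6561 free coefficients.
print(len(ansatz_words(alphabet, 4)))
```

In practice conditions like integrability, physical limits, and symmetries reduce this count dramatically, which is what makes the approach workable.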

There’s more technical detail I could give here: how to tell whether a symbol actually corresponds to a function, how to take limits and do series expansions and take derivatives and discontinuities…but I’m not sure whether anyone reading this would be interested.

As it stands, I’ll just mention that the symbol is only part of the story. In particular, it’s a special case of something called a coproduct, which breaks up polylogarithms into various chunks. Break them down fully until each chunk is just an individual log, and you get the symbol. Break them into larger chunks, and you get other components of the coproduct, consisting of tensor products of polylogarithms with lower transcendental weight. These larger chunks mean we can capture as much of a function’s behavior as we like, while still taking advantage of these sorts of tricks. While in older papers you might have seen mention of “beyond-the-symbol” terms that the symbol couldn’t capture, this doesn’t tend to be a problem these days.
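For instance, at weight two the dilogarithm breaks into a piece where each chunk is a single log (which reproduces its symbol) plus trivial pieces:

$\Delta\,\mathrm{Li}_2(x)=1\otimes\mathrm{Li}_2(x)+\mathrm{Li}_2(x)\otimes 1-\log(1-x)\otimes\log x$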

## 3 thoughts on “Symbology 101”

1. Jan Reimers

Thanks for a great intro … I tried to search “symbology” to start learning about this and got everything but what I was interested in!!
I recall reading somewhere that the sunrise diagram Feynman integral cannot be expressed in terms of polylogs. What is going on there? I suppose this diagram is not allowed in N=4 SYM since it has four-point vertices.

Thanks
Jan


Yeah, you’re less likely to find a good intro under “symbology”…while it gets referred to that way sometimes in talks, papers usually just refer to “the symbol” or more specifically to polylogarithms and the like.

Yeah, the sunrise diagram gives a different type of function, sort of a generalization of polylogarithms called elliptic polylogarithms, that we have a lot less control over. They actually do look like they show up in N=4 as well (so I lied a little in my post), but only for ten-particle amplitudes and higher.
