Tag Archives: mathematics

Hexagon Functions VI: The Power Cosmic

I have a new paper out this week. It’s the long-awaited companion to a paper I blogged about a few months back, itself the latest step in a program that has made up a major chunk of my research.

The title is a bit of a mouthful, but I’ll walk you through it:

The Cosmic Galois Group and Extended Steinmann Relations for Planar N = 4 SYM Amplitudes

I calculate scattering amplitudes (roughly, probabilities that elementary particles bounce off each other) in a (not realistic, and not meant to be) theory called planar N=4 super-Yang-Mills (SYM for short). I can’t summarize everything we’ve been doing here, but if you read the blog posts I linked above and some of the Handy Handbooks linked at the top of the page you’ll hopefully get a clearer picture.

We started using the Steinmann relations a few years ago. Discovered in the 1960s, the Steinmann relations restrict the kind of equations we can use to describe particle physics. Essentially, they mean that particles can't travel two ways at once. In this paper, we extend the Steinmann relations beyond Steinmann's original idea. We don't yet know if we can prove this extension works, but it seems to be true for the amplitudes we're calculating. While we've presented this in talks before, this is the first time we've published it, and it's one of the big results of this paper.

The other, more exotic-sounding result has to do with something called the Cosmic Galois Group.

Évariste Galois, the famously duel-prone mathematician, figured out relations between algebraic numbers (that is, numbers you can get out of algebraic equations) in terms of a mathematical structure called a group. Today, mathematicians are interested not just in algebraic numbers, but in relations between transcendental numbers as well, specifically a kind of transcendental number called a period. These numbers show up a lot in physics, so mathematicians have been thinking about a Galois group for transcendental numbers that show up in physics, a so-called Cosmic Galois Group.
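The flavor of Galois's idea can be seen in a toy example (my own illustration, not something from the paper): numbers of the form a + b√2 have a symmetry that swaps √2 with −√2, and because that swap respects addition and multiplication, it forms a (two-element) Galois group relating the roots of x² − 2.

```python
# A minimal sketch (my own illustration): represent a + b*sqrt(2) as a
# pair of rationals (a, b). The "conjugation" map sqrt(2) -> -sqrt(2)
# respects arithmetic, so it is a Galois symmetry of these numbers.
from fractions import Fraction

def mul(x, y):
    # (a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

def conj(x):
    # the nontrivial Galois symmetry: swap √2 for -√2
    a, b = x
    return (a, -b)

x = (Fraction(1), Fraction(3))   # 1 + 3√2
y = (Fraction(2), Fraction(-1))  # 2 - √2

# Conjugation commutes with multiplication, so it preserves all the
# algebraic relations among these numbers:
print(conj(mul(x, y)) == mul(conj(x), conj(y)))  # True
```

The Cosmic Galois Group is meant to play an analogous role for the transcendental periods of physics, relating numbers instead of swapping square roots.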

(Cosmic here doesn’t mean it has to do with cosmology. As far as I can tell, mathematicians just thought it sounded cool and physics-y. They also started out with rather ambitious ideas about it; if you want a laugh, check out the last few paragraphs of this talk by Cartier.)

For us, Cosmic Galois Theory lets us study the unusual numbers that show up in our calculations. Doing this, we’ve noticed that certain numbers simply don’t show up. For example, the Riemann zeta function shows up often in our results, evaluated at many different numbers…but never evaluated at the number three. Nor does any number related to that one through the Cosmic Galois Group show up. It’s as if the theory only likes some numbers, and not others.

This weird behavior has been observed before. Mathematicians can prove it happens for some simple theories, but it even applies to the theories that describe the real world, for example to calculations of the way an electron’s path is bent by a magnetic field. Each theory seems to have its own preferred family of numbers.

For us, this has been enormously useful. We calculate our amplitudes by guesswork, starting with the right “alphabet” and then filling in different combinations, as if we’re trying all possible answers to a word jumble. Cosmic Galois Theory and Extended Steinmann have enabled us to narrow down our guess dramatically, making it much easier and faster to get to the right answer.

More generally though, we hope to contribute to mathematicians’ investigations of Cosmic Galois Theory. Our examples are more complicated than the simple theories where they currently prove things, and contain more data than the more limited results from electrons. Hopefully together we can figure out why certain numbers show up and others don’t, and find interesting mathematical principles behind the theories that govern fundamental physics.

For now, I’ll leave you with a preview of a talk I’m giving in a couple weeks’ time:

The font, of course, is Cosmic Sans

Experimental Theoretical Physics

I was talking with some other physicists about my “Black Box Theory” thought experiment, where theorists have to compete with an impenetrable block of computer code. Even if the theorists come up with a “better” theory, that theory won’t predict anything that the code couldn’t already. If “predicting something new” is an essential part of science, then the theorists can no longer do science at all.

One of my colleagues made an interesting point: in the thought experiment, the theorists can’t predict new behaviors of reality. But they can predict new behaviors of the code.

Even when we have the right theory to describe the world, we can’t always calculate its consequences. Often we’re stuck in the same position as the theorists in the thought experiment, trying to understand the output of a theory that might as well be a black box. Increasingly, we are employing a kind of “experimental theoretical physics”. We try to predict the result of new calculations, just as experimentalists try to predict the result of new experiments.

This experimental approach seems to be a genuine cultural difference between physics and mathematics. There is such a thing as experimental mathematics, to be clear. And while mathematicians prefer proof, they’re not averse to working from a good conjecture. But when mathematicians calculate and conjecture, they still try to set a firm foundation. They’re precise about what they mean, and careful about what they imply.

“Experimental theoretical physics”, on the other hand, is much more like experimental physics itself. Physicists look for plausible patterns in the “data”, seeing if they make sense in some “physical” way. The conjectures aren’t always sharply posed, and the leaps of reasoning are often more reckless than the leaps of experimental mathematicians. We try to use intuition gleaned from a history of experiments on, and calculations about, the physical world.

There’s a real danger here, because mathematical formulas don’t behave like nature does. When we look at nature, we expect it to behave statistically. If we look at a large number of examples, we get more and more confident that they represent the behavior of the whole. This is sometimes dangerous in nature, but it’s even more dangerous in mathematics, because it’s often not clear what a good “sample” even is. Proving something is true “most of the time” is vastly different from proving it is true all of the time, especially when you’re looking at an infinity of possible examples. We can’t meet our favorite “five sigma” level of statistical confidence, or even know if we’re close.
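For a sense of scale (a side note of my own, not from the post), "five sigma" corresponds to roughly a one-in-3.5-million chance that a normal fluctuation would look like a signal, computable from the Gaussian tail:

```python
# A small aside (my own, not from the post): the one-sided tail
# probability of a normal distribution beyond n sigma is
# p = erfc(n / sqrt(2)) / 2.
import math

def sigma_to_p(n_sigma):
    """One-sided Gaussian tail probability beyond n_sigma."""
    return math.erfc(n_sigma / math.sqrt(2)) / 2

# "Five sigma" is roughly a 1-in-3.5-million chance fluctuation.
print(f"{sigma_to_p(5):.2e}")  # 2.87e-07
```

In mathematics there's no analogous number to compute: without a well-defined sample, there's no tail probability to quote.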

At the same time, experimental theoretical physics has real power. Experience may be a bad guide to mathematics, but it’s a better guide to the mathematics that specifically shows up in physics. And in practice, our recklessness can accomplish great things, uncovering behaviors mathematicians would never have found by themselves.

The key is to always keep in mind that the two fields are different. “Experimental theoretical physics” isn’t mathematics, and it isn’t pretending to be, any more than experimental physics is pretending to be theoretical physics. We’re gathering data and advancing tentative explanations, but we’re fully aware that they may not hold up when examined with full rigor. We want to inspire, to raise questions and get people to think about the principles that govern the messy physical theories we use to describe our world. Experimental physics, theoretical physics, and mathematics are all part of a shared ecosystem, and each has its role to play.

Two Loops, Five Particles

There’s a very long-term view of the amplitudes field that gets a lot of press. We’re supposed to be eliminating space and time, or rebuilding quantum field theory from scratch. We build castles in the clouds, seven-loop calculations and all-loop geometrical quantum jewels.

There’s a shorter-term problem, though, that gets much less press, despite arguably being a bigger part of the field right now. In amplitudes, we take theories and turn them into predictions, order by order and loop by loop. And when we want to compare those predictions to the real world, in most cases the best we can do is two loops and five particles.

Five particles here counts the particles coming in and going out: if two gluons collide and become three gluons, we count that as five particles, two in plus three out. Loops, meanwhile, measure the complexity of the calculation, the number of closed paths you can draw in a Feynman diagram. If you use more loops, you expect more precision: you’re approximating nature step by step.
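The loop count above has a simple graph-theoretic formula (a standard fact, sketched here in my own words): for a connected Feynman diagram, the number of independent closed paths is L = I − V + 1, where I counts internal lines and V counts vertices.

```python
# A minimal sketch (my own illustration): count the loops of a
# connected Feynman diagram from its internal lines and vertices,
# L = I - V + 1 (the number of independent closed paths).

def loop_count(internal_lines, vertices):
    """Loops of a connected Feynman diagram: L = I - V + 1."""
    return internal_lines - vertices + 1

# Tree-level: two vertices joined by one internal line, no closed paths.
print(loop_count(1, 2))  # 0

# The one-loop "bubble": two vertices joined by two internal lines.
print(loop_count(2, 2))  # 1

# The two-loop "sunrise": two vertices joined by three internal lines.
print(loop_count(3, 2))  # 2
```

Each extra loop means one more unconstrained momentum to integrate over, which is why the difficulty climbs so quickly.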

As a field we’re pretty good at one-loop calculations, enough to do them for pretty much any number of particles. As we try for more loops though, things rapidly get harder. Already for two loops, in many cases, we start struggling. We can do better if we dial down the number of particles: there are three-particle and two-particle calculations that get up to three, four, or even five loops. For more particles though, we can’t do as much. Thus the current state of the art, the field’s short term goal: two loops, five particles.

When you hear people like me talk about crazier calculations, we’ve usually got a trick up our sleeve. Often we’re looking at a much simpler theory, one that doesn’t describe the real world. For example, I like working with a planar theory, with lots of supersymmetry. Remove even one of those simplifications, and suddenly our life becomes a lot harder. Instead of seven loops and six particles, we get genuinely excited about, well, two loops, five particles.

Luckily, two loops and five particles is also about as good as the experiments can measure. As the Large Hadron Collider gathers more data, it measures physics to higher and higher precision. Currently for five-particle processes, its precision is just starting to be comparable with two-loop calculations. The result has been a flurry of activity, applying everything from powerful numerical techniques to algebraic geometry to the problem, getting results that genuinely apply to the real world.

“Two loops, five particles” isn’t as cool of a slogan as “space-time is doomed”. It doesn’t get much, or any, media attention. But, steadily and quietly, it’s become one of the hottest topics in the amplitudes field.

Research Rooms, Collaboration Spaces

Math and physics are different fields with different cultures. Some of those differences are obvious, others more subtle.

I recently remembered a subtle difference I noticed at the University of Waterloo. The math building there has “research rooms”, rooms intended for groups of mathematicians to collaborate. The idea is that you invite visitors to the department, reserve the room, and spend all day with them trying to iron out a proof or the like.

Theoretical physicists collaborate like this sometimes too, but in my experience physics institutes don’t typically have this kind of “research room”. Instead, they have “collaboration spaces”. Unlike a “research room”, you don’t reserve a “collaboration space”. Typically, they aren’t even rooms: they’re a set of blackboards in the coffee room, or a cluster of chairs in the corner between two hallways. They’re open spaces, designed so that passers-by can overhear the conversation and (potentially) join in.

That’s not to say physicists never shut themselves in a room for a day (or night) to work. But when they do, it’s not usually in a dedicated space. Instead, it’s in an office, or a commandeered conference room.

Waterloo’s “research rooms” and physics institutes’ “collaboration spaces” can be used for similar purposes. The difference is in what they encourage.

The point of a “collaboration space” is to start new collaborations. These spaces are open in order to take advantage of serendipity: if you’re getting coffee or walking down the hall, you might hear something interesting and spark something new, with people you hadn’t planned to collaborate with before. Institutes with “collaboration spaces” are trying to make new connections between researchers, to be the starting point for new ideas.

The point of a “research room” is to finish a collaboration. They’re for researchers who are already collaborating, who know they’re going to need a room and can reserve it in advance. They’re enclosed in order to shut out distractions, to make sure the collaborators can sit down and focus and get something done. Institutes with “research rooms” want to give their researchers space to complete projects when they might otherwise be too occupied with other things.

I’m curious if this difference is more widespread. Do math departments generally tend to have “research rooms” or “collaboration spaces”? Are there physics departments with “research rooms”? I suspect there is a real cultural difference here, in what each field thinks it needs to encourage.

Amplitudes in String and Field Theory at NBI

There’s a conference at the Niels Bohr Institute this week, on Amplitudes in String and Field Theory. Like the conference a few weeks back, this one was funded by the Simons Foundation, as part of Michael Green’s visit here.

The first day featured a two-part talk by Michael Green and Congkao Wen. They are looking at the corrections that string theory adds on top of theories of supergravity. These corrections are difficult to calculate directly from string theory, but one can figure out a lot about them from the kinds of symmetry and duality properties they need to have, using the mathematics of modular forms. While Michael’s talk introduced the topic with a discussion of older work, Congkao talked about their recent progress looking at this from an amplitudes perspective.

Francesca Ferrari’s talk on Tuesday also related to modular forms, while Oliver Schlotterer and Pierre Vanhove talked about a different corner of mathematics, single-valued polylogarithms. These single-valued polylogarithms are of interest to string theorists because they seem to connect two parts of string theory: the open strings that describe Yang-Mills forces and the closed strings that describe gravity. In particular, it looks like you can take a calculation in open string theory and just replace numbers and polylogarithms with their “single-valued counterparts” to get the same calculation in closed string theory. Interestingly, there is more than one way that mathematicians can define “single-valued counterparts”, but only one such definition, the one due to Francis Brown, seems to make this trick work. When I asked Pierre about this he quipped it was because “Francis Brown has good taste…either that, or String Theory has good taste.”

Wednesday saw several talks exploring interesting features of string theory. Nathan Berkovits discussed his new paper, which makes a certain context of AdS/CFT (a duality between string theory in certain curved spaces and field theory on the boundary of those spaces) manifest particularly nicely. By writing string theory in five-dimensional AdS space in the right way, he can show that if the AdS space is small it will generate the same Feynman diagrams that one would use to do calculations in N=4 super Yang-Mills. In the afternoon, Sameer Murthy showed how localization techniques can be used in gravity theories, including to calculate the entropy of black holes in string theory, while Yvonne Geyer talked about how to combine the string theory-like CHY method for calculating amplitudes with supersymmetry, especially in higher dimensions where the relevant mathematics gets tricky.

Thursday ended up focused on field theory. Carlos Mafra was originally going to speak but he wasn’t feeling well, so instead I gave a talk about the “tardigrade” integrals I’ve been looking at. Zvi Bern talked about his work applying amplitudes techniques to make predictions for LIGO. This subject has advanced a lot in the last few years, and now Zvi and collaborators have finally done a calculation beyond what others had been able to do with older methods. They still have a way to go before they beat the traditional methods overall, but they’re off to a great start. Lance Dixon talked about two-loop five-particle non-planar amplitudes in N=4 super Yang-Mills and N=8 supergravity. These are quite a bit trickier than the planar amplitudes I’ve worked on with him in the past; in particular, it’s not yet possible to do this just by guessing the answer without considering Feynman diagrams.

Today was the last day of the conference, and the emphasis was on number theory. David Broadhurst described some interesting contributions from physics to mathematics, in particular emphasizing information that the Weierstrass formulation of elliptic curves omits. Eric D’Hoker discussed how the concept of transcendentality, previously used in field theory, could be applied to string theory. A few of his speculations seemed a bit far-fetched (in particular, his setup needs to treat certain rational numbers as if they were transcendental), but after his talk I’m a bit more optimistic that there could be something useful there.

Pi Day Alternatives

On Pi Day, fans of the number pi gather to recite its digits and eat pies. It is the most famous of numerical holidays, but not the only one. Have you heard of the holidays for other famous numbers?

Tau Day: Celebrated on June 28. Observed by sitting around gloating about how much more rational one is than everyone else, then getting treated with high-energy tau leptons for terminal pedantry.

Canadian Modular Pi Day: Celebrated on February 3. Observed by confusing your American friends.

e Day: Celebrated on February 7. Observed in middle school classrooms, explaining the wonders of exponential functions and eating foods like eggs and eclairs. Once the students leave, drop tabs of ecstasy instead.

Golden Ratio Day: Celebrated on January 6. Rub crystals on pyramids and write vaguely threatening handwritten letters to every physicist you’ve heard of.

Euler Gamma Day: Celebrated on May 7 by dropping on the floor and twitching.

Riemann Zeta Daze: The first year, forget about it. The second, celebrate on January 6. The next year, January 2. After that, celebrate on New Year’s Day earlier and earlier in the morning each year until you can’t tell the difference any more.

This Week, at Scientific American

I’ve written an article for Scientific American! It went up online this week, the print versions go out on the 25th. The online version is titled “Loopy Particle Math”, the print one is “The Particle Code”, but they’re the same article.

For those who don’t subscribe to Scientific American, sorry about the paywall!

“The Particle Code” covers what will be familiar material to regulars on this blog. I introduce Feynman diagrams, and talk about the “amplitudeologists” who try to find ways around them. I focus on my corner of the amplitudes field, how the work of Goncharov, Spradlin, Vergu, and Volovich introduced us to “symbology”, a set of tricks for taking apart more complicated integrals (or “periods”) into simple logarithmic building blocks. I talk about how my collaborators and I use symbology, using these building blocks to compute amplitudes that would have been impossible with other techniques. Finally, I talk about the frontier of the field, the still-mysterious “elliptic polylogarithms” that are becoming increasingly well-understood.

(I don’t talk about the even more mysterious “Calabi-Yau polylogarithms”…another time for those!)

Working with Scientific American was a fun experience. I got to see how the professionals do things. They got me to clarify and explain, pointing out terms I needed to define and places I should pause to summarize. They took my rough gel-pen drawings and turned them into polished graphics. While I’m still a little miffed about them removing all the contractions, overall I learned a lot, and I think they did a great job of bringing the article to the printed page.