# Tag Archives: mathematics

I have a new paper out today, with Jacob Bourjaily, Andrew McLeod, Matthias Wilhelm, Cristian Vergu and Matthias Volk.

There’s a story I’ve told before on this blog, about a kind of “alphabet” for particle physics predictions. When we try to make a prediction in particle physics, we need to do complicated integrals. Sometimes, these integrals simplify dramatically, in unexpected ways. It turns out we can understand these simplifications by writing the integrals in a sort of “alphabet”, breaking complicated mathematical “periods” into familiar logarithms. If we want to simplify an integral, we can use relations between logarithms like these:

$\log(a b)=\log(a)+\log(b),\quad \log(a^n)=n\log(a)$

to factor our “alphabet” into pieces as simple as possible.
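These identities are easy to play with in a computer algebra system. As a toy illustration (my own, not from the paper), here is sympy splitting a product inside a logarithm into the simplest possible pieces, the way one factors an "alphabet":

```python
import sympy as sp

# Letters of the "alphabet" are just symbols here; positive=True lets
# sympy apply log(a*b) = log(a) + log(b) without branch-cut worries.
a, b = sp.symbols('a b', positive=True)

expr = sp.log(a**2 * b)
expanded = sp.expand_log(expr)
# expanded is now 2*log(a) + log(b): the product inside the log has
# been broken into a sum of logs of the individual "letters".
```

The real calculations involve far more elaborate functions than a single logarithm, but the same factor-and-expand step is at the heart of them.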

The simpler the alphabet, the more progress you can make. And in the nice toy model theory we’re working with, the alphabets so far have been simple in one key way: expressed in the right variables, they’re rational. In particular, they contain no square roots.

Would that keep going? Would we keep finding rational alphabets? Or might the alphabets, instead, have square roots?

After some searching, we found a clean test case. There was a calculation we could do with just two Feynman diagrams. All we had to do was subtract one from the other. If the difference still had square roots in its alphabet, we’d have proven that the nice, rational alphabets eventually had to stop.

So we calculated these diagrams, doing the complicated integrals. And we found they did indeed have square roots in their alphabet, in fact many more than expected. They even had square roots of square roots!

You’d think that would be the end of the story. But square roots are trickier than you’d expect.

Remember that to simplify these integrals, we break them up into an alphabet, and factor the alphabet. What happens when we try to do that with an alphabet that has square roots?

Suppose some letters in our alphabet involve $\sqrt{-5}$, and suppose another letter is the number 9. You might want to factor it like this:

$9=3\times 3$

Simple, right? But what if instead you did this:

$9=(2+ \sqrt{-5} )\times(2- \sqrt{-5} )$

Once you allow $\sqrt{-5}$ in the game, you can factor 9 in two different ways. The central assumption, that you can always just factor your alphabet, breaks down. In mathematical terms, you no longer have a unique factorization domain.
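You can check both factorizations with a few lines of computer algebra. This is a toy verification of my own, not anything from the paper:

```python
import sympy as sp

r = sp.sqrt(-5)  # sympy represents this as sqrt(5)*I

# Two different-looking factorizations of the same number 9:
f1 = sp.expand(3 * 3)
f2 = sp.expand((2 + r) * (2 - r))  # 4 - (-5) = 9

# The norm N(x + y*sqrt(-5)) = x**2 + 5*y**2 is what shows neither
# factorization can be refined further: N(3) = N(2 +/- sqrt(-5)) = 9,
# and no element of Z[sqrt(-5)] has norm 3, so all three factors
# are irreducible. Genuinely non-unique factorization.
```

Passing to prime ideals, as described below, restores a unique factorization, which is why that extra machinery was needed.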

Instead, we had to get a lot more mathematically sophisticated, factoring into something called prime ideals. We got that working and started crunching through the square roots in our alphabet. Things simplified beautifully: we started with a result that was ten million terms long, and reduced it to just five thousand. And at the end of the day, after subtracting one integral from the other…

We found no square roots!

After all of our simplifications, all the letters we found were rational. Our nice test case turned out much, much simpler than we expected.

It’s been a long road on this calculation, with a lot of false starts. We were kind of hoping to be the first to find square root letters in these alphabets; instead it looks like another group will beat us to the punch. But we developed a lot of interesting tricks along the way, and we thought it would be good to publish our “null result”. As always in our field, sometimes surprising simplifications are just around the corner.

# Calabi-Yaus in Feynman Diagrams: Harder and Easier Than Expected

I’ve got a new paper up, about the weird geometrical spaces we keep finding in Feynman diagrams.

With Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, and most recently Cristian Vergu and Matthias Volk, I’ve been digging up odd mathematics in particle physics calculations. In several calculations, we’ve found that we need a type of space called a Calabi-Yau manifold. These spaces are often studied by string theorists, who hope they represent how “extra” dimensions of space are curled up. String theorists have found an absurdly large number of Calabi-Yau manifolds, so many that some are trying to sift through them with machine learning. We wanted to know if our situation was quite that ridiculous: how many Calabi-Yaus do we really need?

So we started asking around, trying to figure out how to classify our catch of Calabi-Yaus. And mostly, we just got confused.

It turns out there are a lot of different tools out there for understanding Calabi-Yaus, and most of them aren’t all that useful for what we’re doing. We went in circles for a while trying to understand how to desingularize toric varieties, and other things that will sound like gibberish to most of you. In the end, though, we noticed one small thing that made our lives a whole lot simpler.

It turns out that all of the Calabi-Yaus we’ve found are, in some sense, the same. While the details of the physics vary, the overall “space” is the same in each case. It’s a space we kept finding in our “Calabi-Yau bestiary”, but it turns out one of the “traintrack” diagrams we found earlier can be written in the same way. We found another example too, a “wheel” that seems to be the same type of Calabi-Yau.

We also found many examples that we don’t understand. Add another rung to our “traintrack” and we suddenly can’t write it in the same space. (Personally, I’m quite confused about this one.) Add another spoke to our wheel and we confuse ourselves in a different way.

So while our calculation turned out simpler than expected, we don’t think this is the full story. Our Calabi-Yaus might live in “the same space”, but there are also physics-related differences between them, and these we still don’t understand.

At some point, our abstract included the phrase “this paper raises more questions than it answers”. It doesn’t say that now, but it’s still true. We wrote this paper because, after getting very confused, we ended up able to say a few new things that hadn’t been said before. But the questions we raise are if anything more important. We want to inspire new interest in this field, toss out new examples, and get people thinking harder about the geometry of Feynman integrals.

# Communicating the Continuum Hypothesis

I have a friend who is, shall we say, pessimistic about science communication. He thinks it’s too much risk for too little gain: too many misunderstandings, while the most important stuff is so abstract the public will never understand it anyway. When I asked him for an example, he started telling me about a professor who works on the continuum hypothesis.

The continuum hypothesis is about different types of infinity. You might have thought there was only one type of infinity, but in the nineteenth century the mathematician Georg Cantor showed there were more, the most familiar of which are countable and uncountable. If you have a countably infinite number of things, then you can “count” them, “one, two, three…”, assigning a number to each one (even if, since they’re still infinite, you never actually finish). To imagine something uncountably infinite, think of a continuum, like distance on a meter stick, where you can always look at smaller and smaller distances. Cantor proved, using various ingenious arguments, that these two types of infinity are different: the continuum is “bigger” than a mere countable infinity.

Cantor wondered if there could be something in between, a type of infinity bigger than countable and smaller than uncountable. His hypothesis (now called the continuum hypothesis) was that there wasn’t: he thought there was no type of infinity between countable and uncountable.

(If you think you have an easy counterexample, you’re wrong. In particular, fractions are countable.)
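In fact, you can make the counting of fractions completely explicit. Here is a small sketch of my own (not from the post): it walks Cantor’s diagonals and hands out one counting number after another, so every positive fraction gets a definite place in line.

```python
from fractions import Fraction
from math import gcd

def count_rationals():
    """Enumerate the positive rationals with no repeats, walking the
    diagonals p + q = 2, 3, 4, ... of Cantor's classic argument."""
    s = 2
    while True:
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:  # skip duplicates like 2/4 = 1/2
                yield Fraction(p, q)
        s += 1
```

The first few values it yields are 1, 1/2, 2, 1/3, 3, and so on: every positive fraction eventually shows up exactly once, which is exactly what “countable” means.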

Kurt Gödel didn’t prove the continuum hypothesis, but in 1940 he showed that at least it couldn’t be disproved, which you’d think would be good enough. In 1963, though, another mathematician named Paul Cohen showed that the continuum hypothesis also can’t be proved, at least with mathematicians’ usual axioms.

In science, if something can’t be proved or disproved, then we shrug our shoulders and say we don’t know. Math is different. In math, we choose the axioms. All we have to do is make sure they’re consistent.

What Cohen and Gödel really showed is that mathematics is consistent either way: whether the continuum hypothesis is true or false, the rest of mathematics still works just as well. You can add it as an extra axiom, an add-on that gives you different types of infinity but doesn’t change everyday arithmetic.

You might think that this, finally, would be the end of the story. Instead, it was the beginning of a lively debate that continues to this day. It’s a debate that touches on what mathematics is for, whether infinity is merely a concept or something out there in the world, whether some axioms are right or wrong and what happens when you change them. It involves attempts to codify intuition, arguments about which rules “make sense” that blur the boundary between philosophy and mathematics. It also involves the professor my friend mentioned, W. H. Woodin.

Now, can I explain Woodin’s research to you?

No. I don’t understand it myself, it’s far more abstract and weird than any mathematics I’ve ever touched.

Despite that, I can tell you something about it. I can tell you about the quest he’s on, its history and its relevance, what is and is not at stake. I can get you excited for the same reasons that I’m excited; I can show you it’s important for the same reasons I think it’s important. I can give you the “flavor” of the topic, and broaden your view of the world you live in, one containing a hundred-year conversation about the nature of infinity.

My friend is right that the public will never understand everything. I’ll never understand everything either. But what we can do, what I strive to do, is to appreciate this wide weird world in all its glory. That, more than anything, is why I communicate science.

# Amplitudes 2019 Retrospective

I’m back from Amplitudes 2019, and since I have more time I figured I’d write down a few more impressions.

Amplitudes runs all the way from practical LHC calculations to almost pure mathematics, and this conference had plenty of both as well as everything in between. On the more practical side, a standard “pipeline” has developed: get a large number of integrals from generalized unitarity, reduce them to a more manageable number with integration-by-parts, and then compute them with differential equations. Vladimir Smirnov and Johannes Henn presented the state of the art in this pipeline: challenging QCD calculations that required powerful methods. Others aimed to replace various parts of the pipeline. Integration-by-parts could be avoided in the numerical unitarity approach discussed by Ben Page, or alternatively with the intersection theory techniques showcased by Pierpaolo Mastrolia. More radical departures included Stefan Weinzierl’s refinement of loop-tree duality, and Jacob Bourjaily’s advocacy of prescriptive unitarity. Robert Schabinger even brought up direct integration, though I mostly viewed his talk as an independent confirmation of the usefulness of Erik Panzer’s thesis. It also showcased an interesting integral that had previously been represented by Lorenzo Tancredi and collaborators as elliptic, but turned out to be writable in terms of more familiar functions. It’s going to be interesting to see whether other such integrals arise, and whether they can be spotted in advance.

On the other end of the scale, Francis Brown was the only speaker deep enough in the culture of mathematics to insist on doing a blackboard talk. Since the conference hall didn’t actually have a blackboard, this was accomplished by projecting video of a piece of paper that he wrote on as the talk progressed. Despite the awkward setup, the talk was impressively clear, though there were enough questions that he ran out of time at the end and had to “cheat” by just projecting his notes instead. He presented a few theorems about the sort of integrals that show up in string theory. Federico Zerbini and Eduardo Casali’s talks covered similar topics, with the latter also involving intersection theory. Intersection theory also appeared in a poster from grad student Andrzej Pokraka, which overall is a pretty impressively broad showing for a part of mathematics that Sebastian Mizera first introduced to the amplitudes community less than two years ago.

Nima Arkani-Hamed’s talk on Wednesday fell somewhere in between. A series of airline mishaps brought him there only a few hours before his talk, and his own busy schedule sent him back to the airport right after the last question. The talk itself covered several topics, tied together a bit better than usual by a nice account in the beginning of what might motivate a “polytope picture” of quantum field theory. One particularly interesting aspect was a suggestion of a space, smaller than the amplituhedron, that might more accurately describe the “alphabet” that appears in N=4 super Yang-Mills amplitudes. If his proposal works, it may be that the infinite alphabet we were worried about for eight-particle amplitudes is actually finite. Ömer Gürdoğan’s talk mentioned this, and drew out some implications. Overall, I’m still unclear as to what this story says about whether the alphabet contains square roots, but that’s a topic for another day. My talk was right after Nima’s, and while he went over-time as always, I compensated by accidentally going under-time. Overall, I think folks had fun regardless.

# Hexagon Functions VI: The Power Cosmic

I have a new paper out this week. It’s the long-awaited companion to a paper I blogged about a few months back, itself the latest step in a program that has made up a major chunk of my research.

The title is a bit of a mouthful, but I’ll walk you through it:

## The Cosmic Galois Group and Extended Steinmann Relations for Planar N = 4 SYM Amplitudes

I calculate scattering amplitudes (roughly, probabilities that elementary particles bounce off each other) in a (not realistic, and not meant to be) theory called planar N=4 super-Yang-Mills (SYM for short). I can’t summarize everything we’ve been doing here, but if you read the blog posts I linked above and some of the Handy Handbooks linked at the top of the page you’ll hopefully get a clearer picture.

We started using the Steinmann Relations a few years ago. Discovered in the 1960s, the Steinmann relations restrict the kind of equations we can use to describe particle physics. Essentially, they mean that particles can’t travel two ways at once. In this paper, we extend the Steinmann relations beyond Steinmann’s original idea. We don’t yet know if we can prove this extension works, but it seems to be true for the amplitudes we’re calculating. While we’ve presented this in talks before, this is the first time we’ve published it, and it’s one of the big results of this paper.

The other, more exotic-sounding result, has to do with something called the Cosmic Galois Group.

Évariste Galois, the famously duel-prone mathematician, figured out relations between algebraic numbers (that is, numbers you can get out of algebraic equations) in terms of a mathematical structure called a group. Today, mathematicians are interested not just in algebraic numbers, but in relations between transcendental numbers as well, specifically a kind of transcendental number called a period. These numbers show up a lot in physics, so mathematicians have been thinking about a Galois group for transcendental numbers that show up in physics, a so-called Cosmic Galois Group.

(Cosmic here doesn’t mean it has to do with cosmology. As far as I can tell, mathematicians just thought it sounded cool and physics-y. They also started out with rather ambitious ideas about it, if you want a laugh check out the last few paragraphs of this talk by Cartier.)

For us, Cosmic Galois Theory lets us study the unusual numbers that show up in our calculations. Doing this, we’ve noticed that certain numbers simply don’t show up. For example, the Riemann zeta function shows up often in our results, evaluated at many different numbers…but never evaluated at the number three. Nor does any number related to that one through the Cosmic Galois Group show up. It’s as if the theory only likes some numbers, and not others.

This weird behavior has been observed before. Mathematicians can prove it happens for some simple theories, but it even applies to the theories that describe the real world, for example to calculations of the way an electron’s path is bent by a magnetic field. Each theory seems to have its own preferred family of numbers.

For us, this has been enormously useful. We calculate our amplitudes by guesswork, starting with the right “alphabet” and then filling in different combinations, as if we’re trying all possible answers to a word jumble. Cosmic Galois Theory and Extended Steinmann have enabled us to narrow down our guess dramatically, making it much easier and faster to get to the right answer.

More generally though, we hope to contribute to mathematicians’ investigations of Cosmic Galois Theory. Our examples are more complicated than the simple theories where they currently prove things, and contain more data than the more limited results from electrons. Hopefully together we can figure out why certain numbers show up and others don’t, and find interesting mathematical principles behind the theories that govern fundamental physics.

For now, I’ll leave you with a preview of a talk I’m giving in a couple weeks’ time:

# Experimental Theoretical Physics

I was talking with some other physicists about my “Black Box Theory” thought experiment, where theorists have to compete with an impenetrable block of computer code. Even if the theorists come up with a “better” theory, that theory won’t predict anything that the code couldn’t already. If “predicting something new” is an essential part of science, then the theorists can no longer do science at all.

One of my colleagues made an interesting point: in the thought experiment, the theorists can’t predict new behaviors of reality. But they can predict new behaviors of the code.

Even when we have the right theory to describe the world, we can’t always calculate its consequences. Often we’re stuck in the same position as the theorists in the thought experiment, trying to understand the output of a theory that might as well be a black box. Increasingly, we are employing a kind of “experimental theoretical physics”. We try to predict the result of new calculations, just as experimentalists try to predict the result of new experiments.

This experimental approach seems to be a genuine cultural difference between physics and mathematics. There is such a thing as experimental mathematics, to be clear. And while mathematicians prefer proof, they’re not averse to working from a good conjecture. But when mathematicians calculate and conjecture, they still try to set a firm foundation. They’re precise about what they mean, and careful about what they imply.

“Experimental theoretical physics”, on the other hand, is much more like experimental physics itself. Physicists look for plausible patterns in the “data”, seeing if they make sense in some “physical” way. The conjectures aren’t always sharply posed, and the leaps of reasoning are often more reckless than the leaps of experimental mathematicians. We try to use intuition gleaned from a history of experiments on, and calculations about, the physical world.

There’s a real danger here, because mathematical formulas don’t behave like nature does. When we look at nature, we expect it to behave statistically. If we look at a large number of examples, we get more and more confident that they represent the behavior of the whole. This is sometimes dangerous in nature, but it’s even more dangerous in mathematics, because it’s often not clear what a good “sample” even is. Proving something is true “most of the time” is vastly different from proving it is true all of the time, especially when you’re looking at an infinity of possible examples. We can’t meet our favorite “five sigma” level of statistical confidence, or even know if we’re close.
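A classic cautionary example (a standard one, not from this post) makes the danger concrete. Euler’s polynomial $n^2+n+41$ produces a prime for forty values of $n$ in a row, a sample any experimentalist would be proud of, and then the pattern breaks:

```python
from sympy import isprime

# Euler's polynomial n**2 + n + 41 gives a prime for n = 0, 1, ..., 39:
hits = [isprime(n**2 + n + 41) for n in range(40)]

# Forty successes in a row, and then it fails:
# 40**2 + 40 + 41 = 41**2 = 1681, which is composite.
fails_at_40 = not isprime(40**2 + 40 + 41)
```

No finite sample, however long, substitutes for a proof when there are infinitely many cases left to check.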

At the same time, experimental theoretical physics has real power. Experience may be a bad guide to mathematics, but it’s a better guide to the mathematics that specifically shows up in physics. And in practice, our recklessness can accomplish great things, uncovering behaviors mathematicians would never have found by themselves.

The key is to always keep in mind that the two fields are different. “Experimental theoretical physics” isn’t mathematics, and it isn’t pretending to be, any more than experimental physics is pretending to be theoretical physics. We’re gathering data and advancing tentative explanations, but we’re fully aware that they may not hold up when examined with full rigor. We want to inspire, to raise questions and get people to think about the principles that govern the messy physical theories we use to describe our world. Experimental physics, theoretical physics, and mathematics are all part of a shared ecosystem, and each has its role to play.

# Two Loops, Five Particles

There’s a very long-term view of the amplitudes field that gets a lot of press. We’re supposed to be eliminating space and time, or rebuilding quantum field theory from scratch. We build castles in the clouds, seven-loop calculations and all-loop geometrical quantum jewels.

There’s a shorter-term problem, though, that gets much less press, despite arguably being a bigger part of the field right now. In amplitudes, we take theories and turn them into predictions, order by order and loop by loop. And when we want to compare those predictions to the real world, in most cases the best we can do is two loops and five particles.

Five particles here counts the particles coming in and going out: if two gluons collide and become three gluons, we count that as five particles, two in plus three out. Loops, meanwhile, measure the complexity of the calculation, the number of closed paths you can draw in a Feynman diagram. If you use more loops, you expect more precision: you’re approximating nature step by step.
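The loop counting has a tidy formula: for a connected diagram, the number of independent closed paths is the number of internal lines minus the number of vertices, plus one (the first Betti number of the graph). A minimal sketch, with a helper of my own invention rather than anything from the post:

```python
def loop_count(edges):
    """Number of independent loops in a connected Feynman diagram,
    given its internal lines as (vertex, vertex) pairs:
    L = (number of edges) - (number of vertices) + 1."""
    vertices = {v for edge in edges for v in edge}
    return len(edges) - len(vertices) + 1

# A box diagram: four vertices joined in a cycle, one loop.
box = [(1, 2), (2, 3), (3, 4), (4, 1)]

# A two-loop "sunrise" diagram: two vertices joined by three lines.
sunrise = [(1, 2), (1, 2), (1, 2)]
```

Here `loop_count(box)` gives 1 and `loop_count(sunrise)` gives 2, matching the count you’d get by drawing the closed paths by hand.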

As a field we’re pretty good at one-loop calculations, enough to do them for pretty much any number of particles. As we try for more loops though, things rapidly get harder. Already for two loops, in many cases, we start struggling. We can do better if we dial down the number of particles: there are three-particle and two-particle calculations that get up to three, four, or even five loops. For more particles though, we can’t do as much. Thus the current state of the art, the field’s short term goal: two loops, five particles.

When you hear people like me talk about crazier calculations, we’ve usually got a trick up our sleeve. Often we’re looking at a much simpler theory, one that doesn’t describe the real world. For example, I like working with a planar theory, with lots of supersymmetry. Remove even one of those simplifications, and suddenly our life becomes a lot harder. Instead of seven loops and six particles, we get genuinely excited about, well, two loops five particles.

Luckily, two loops five particles is also about as good as the experiments can measure. As the Large Hadron Collider gathers more data, it measures physics to higher and higher precision. Currently for five-particle processes, its precision is just starting to be comparable with two-loop calculations. The result has been a flurry of activity, applying everything from powerful numerical techniques to algebraic geometry to the problem, getting results that genuinely apply to the real world.

“Two loops, five particles” isn’t as cool of a slogan as “space-time is doomed”. It doesn’t get much, or any media attention. But, steadily and quietly, it’s become one of the hottest topics in the amplitudes field.

# Research Rooms, Collaboration Spaces

Math and physics are different fields with different cultures. Some of those differences are obvious, others more subtle.

I recently remembered a subtle difference I noticed at the University of Waterloo. The math building there has “research rooms”, rooms intended for groups of mathematicians to collaborate. The idea is that you invite visitors to the department, reserve the room, and spend all day with them trying to iron out a proof or the like.

Theoretical physicists collaborate like this sometimes too, but in my experience physics institutes don’t typically have this kind of “research room”. Instead, they have “collaboration spaces”. Unlike a “research room”, you don’t reserve a “collaboration space”. Typically, they aren’t even rooms: they’re a set of blackboards in the coffee room, or a cluster of chairs in the corner between two hallways. They’re open spaces, designed so that passers-by can overhear the conversation and (potentially) join in.

That’s not to say physicists never shut themselves in a room for a day (or night) to work. But when they do, it’s not usually in a dedicated space. Instead, it’s in an office, or a commandeered conference room.

Waterloo’s “research rooms” and physics institutes’ “collaboration spaces” can be used for similar purposes. The difference is in what they encourage.

The point of a “collaboration space” is to start new collaborations. These spaces are open in order to take advantage of serendipity: if you’re getting coffee or walking down the hall, you might hear something interesting and spark something new, with people you hadn’t planned to collaborate with before. Institutes with “collaboration spaces” are trying to make new connections between researchers, to be the starting point for new ideas.

The point of a “research room” is to finish a collaboration. They’re for researchers who are already collaborating, who know they’re going to need a room and can reserve it in advance. They’re enclosed in order to shut out distractions, to make sure the collaborators can sit down and focus and get something done. Institutes with “research rooms” want to give their researchers space to complete projects when they might otherwise be too occupied with other things.

I’m curious if this difference is more widespread. Do math departments generally tend to have “research rooms” or “collaboration spaces”? Are there physics departments with “research rooms”? I suspect there is a real cultural difference here, in what each field thinks it needs to encourage.

# Amplitudes in String and Field Theory at NBI

There’s a conference at the Niels Bohr Institute this week, on Amplitudes in String and Field Theory. Like the conference a few weeks back, this one was funded by the Simons Foundation, as part of Michael Green’s visit here.

The first day featured a two-part talk by Michael Green and Congkao Wen. They are looking at the corrections that string theory adds on top of theories of supergravity. These corrections are difficult to calculate directly from string theory, but one can figure out a lot about them from the kinds of symmetry and duality properties they need to have, using the mathematics of modular forms. While Michael’s talk introduced the topic with a discussion of older work, Congkao talked about their recent progress looking at this from an amplitudes perspective.

Francesca Ferrari’s talk on Tuesday also related to modular forms, while Oliver Schlotterer and Pierre Vanhove talked about a different corner of mathematics, single-valued polylogarithms. These single-valued polylogarithms are of interest to string theorists because they seem to connect two parts of string theory: the open strings that describe Yang-Mills forces and the closed strings that describe gravity. In particular, it looks like you can take a calculation in open string theory and just replace numbers and polylogarithms with their “single-valued counterparts” to get the same calculation in closed string theory. Interestingly, there is more than one way that mathematicians can define “single-valued counterparts”, but only one such definition, the one due to Francis Brown, seems to make this trick work. When I asked Pierre about this he quipped it was because “Francis Brown has good taste…either that, or String Theory has good taste.”

Wednesday saw several talks exploring interesting features of string theory. Nathan Berkovits discussed his new paper, which makes a certain context of AdS/CFT (a duality between string theory in certain curved spaces and field theory on the boundary of those spaces) manifest particularly nicely. By writing string theory in five-dimensional AdS space in the right way, he can show that if the AdS space is small it will generate the same Feynman diagrams that one would use to do calculations in N=4 super Yang-Mills. In the afternoon, Sameer Murthy showed how localization techniques can be used in gravity theories, including to calculate the entropy of black holes in string theory, while Yvonne Geyer talked about how to combine the string theory-like CHY method for calculating amplitudes with supersymmetry, especially in higher dimensions where the relevant mathematics gets tricky.

Thursday ended up focused on field theory. Carlos Mafra was originally going to speak but he wasn’t feeling well, so instead I gave a talk about the “tardigrade” integrals I’ve been looking at. Zvi Bern talked about his work applying amplitudes techniques to make predictions for LIGO. This subject has advanced a lot in the last few years, and now Zvi and collaborators have finally done a calculation beyond what others had been able to do with older methods. They still have a way to go before they beat the traditional methods overall, but they’re off to a great start. Lance Dixon talked about two-loop five-particle non-planar amplitudes in N=4 super Yang-Mills and N=8 supergravity. These are quite a bit trickier than the planar amplitudes I’ve worked on with him in the past, in particular it’s not yet possible to do this just by guessing the answer without considering Feynman diagrams.

Today was the last day of the conference, and the emphasis was on number theory. David Broadhurst described some interesting contributions from physics to mathematics, in particular emphasizing information that the Weierstrass formulation of elliptic curves omits. Eric D’Hoker discussed how the concept of transcendentality, previously used in field theory, could be applied to string theory. A few of his speculations seemed a bit farfetched (in particular, his setup needs to treat certain rational numbers as if they were transcendental), but after his talk I’m a bit more optimistic that there could be something useful there.

# Pi Day Alternatives

On Pi Day, fans of the number pi gather to recite its digits and eat pies. It is the most famous of numerical holidays, but not the only one. Have you heard of the holidays for other famous numbers?

Tau Day: Celebrated on June 28. Observed by sitting around gloating about how much more rational one is than everyone else, then getting treated with high-energy tau leptons for terminal pedantry.

Canadian Modular Pi Day: Celebrated on February 3. Observed by confusing your American friends.

e Day: Celebrated on February 7. Observed in middle school classrooms, explaining the wonders of exponential functions and eating foods like eggs and eclairs. Once the students leave, drop tabs of ecstasy instead.

Golden Ratio Day: Celebrated on January 6. Rub crystals on pyramids and write vaguely threatening handwritten letters to every physicist you’ve heard of.

Euler Gamma Day: Celebrated on May 7 by dropping on the floor and twitching.

Riemann Zeta Daze: The first year, forget about it. The second, celebrate on January 6. The next year, January 2. After that, celebrate on New Year’s Day earlier and earlier in the morning each year until you can’t tell the difference any more.