Tag Archives: mathematics

Calabi-Yaus in Feynman Diagrams: Harder and Easier Than Expected

I’ve got a new paper up, about the weird geometrical spaces we keep finding in Feynman diagrams.

With Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, and most recently Cristian Vergu and Matthias Volk, I’ve been digging up odd mathematics in particle physics calculations. In several calculations, we’ve found that we need a type of space called a Calabi-Yau manifold. These spaces are often studied by string theorists, who hope they represent how “extra” dimensions of space are curled up. String theorists have found an absurdly large number of Calabi-Yau manifolds, so many that some are trying to sift through them with machine learning. We wanted to know if our situation was quite that ridiculous: how many Calabi-Yaus do we really need?

So we started asking around, trying to figure out how to classify our catch of Calabi-Yaus. And mostly, we just got confused.

It turns out there are a lot of different tools out there for understanding Calabi-Yaus, and most of them aren’t all that useful for what we’re doing. We went in circles for a while trying to understand how to desingularize toric varieties, and other things that will sound like gibberish to most of you. In the end, though, we noticed one small thing that made our lives a whole lot simpler.

It turns out that all of the Calabi-Yaus we’ve found are, in some sense, the same. While the details of the physics vary, the overall “space” is the same in each case. It’s the space we kept finding for our “Calabi-Yau bestiary”, and it turns out that one of the “traintrack” diagrams we found earlier can be written in the same way. We found another example too, a “wheel” that seems to be the same type of Calabi-Yau.

And that actually has a sensible name

We also found many examples that we don’t understand. Add another rung to our “traintrack” and we suddenly can’t write it in the same space. (Personally, I’m quite confused about this one.) Add another spoke to our wheel and we confuse ourselves in a different way.

So while our calculation turned out simpler than expected, we don’t think this is the full story. Our Calabi-Yaus might live in “the same space”, but there are also physics-related differences between them, and these we still don’t understand.

At some point, our abstract included the phrase “this paper raises more questions than it answers”. It doesn’t say that now, but it’s still true. We wrote this paper because, after getting very confused, we ended up able to say a few new things that hadn’t been said before. But the questions we raise are if anything more important. We want to inspire new interest in this field, toss out new examples, and get people thinking harder about the geometry of Feynman integrals.

Communicating the Continuum Hypothesis

I have a friend who is, shall we say, pessimistic about science communication. He thinks it’s too much risk for too little gain: too many misunderstandings, while the most important stuff is so abstract the public will never understand it anyway. When I asked him for an example, he started telling me about a professor who works on the continuum hypothesis.

The continuum hypothesis is about different types of infinity. You might have thought there was only one type of infinity, but in the nineteenth century the mathematician Georg Cantor showed there were more, the most familiar of which are countable and uncountable. If you have a countably infinite number of things, then you can “count” them, “one, two, three…”, assigning a number to each one (even if, since they’re still infinite, you never actually finish). To imagine something uncountably infinite, think of a continuum, like distance on a meter stick, where you can always look at smaller and smaller distances. Cantor proved, using various ingenious arguments, that these two types of infinity are different: the continuum is “bigger” than a mere countable infinity.

Cantor wondered if there could be something in between, a type of infinity bigger than countable and smaller than uncountable. His hypothesis (now called the continuum hypothesis) was that there wasn’t: he thought there was no type of infinity between countable and uncountable.

(If you think you have an easy counterexample, you’re wrong. In particular, fractions are countable.)
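To see why, here’s a sketch of Cantor’s zig-zag argument (my own illustration, not anyone’s published code): walk the positive fractions diagonal by diagonal, skipping repeats like 2/2, and every fraction eventually gets a counting number.

```python
# A sketch of Cantor's zig-zag counting of the positive fractions: walk
# diagonal by diagonal (numerator + denominator = 2, 3, 4, ...), skipping
# repeats like 2/2, so every fraction gets assigned a counting number.
from fractions import Fraction

def count_fractions(how_many: int) -> list[Fraction]:
    """Return the first `how_many` distinct positive fractions in diagonal order."""
    seen, out = set(), []
    diagonal = 2  # numerator + denominator
    while len(out) < how_many:
        for p in range(1, diagonal):
            f = Fraction(p, diagonal - p)
            if f not in seen:
                seen.add(f)
                out.append(f)
        diagonal += 1
    return out[:how_many]

print(count_fractions(5))
# [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1), Fraction(1, 3), Fraction(3, 1)]
```

No fraction is ever skipped, so the fractions are countable, even though between any two of them there are infinitely many more.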

Kurt Gödel didn’t prove the continuum hypothesis, but in 1940 he showed that at least it couldn’t be disproved, which you’d think would be good enough. In 1964, though, another mathematician named Paul Cohen showed that the continuum hypothesis also can’t be proved, at least with mathematicians’ usual axioms.

In science, if something can’t be proved or disproved, then we shrug our shoulders and say we don’t know. Math is different. In math, we choose the axioms. All we have to do is make sure they’re consistent.

What Cohen and Gödel really showed is that mathematics is consistent either way: whether the continuum hypothesis is true or false, the rest of mathematics still works just as well. You can add it as an extra axiom, an add-on that gives you different types of infinity but doesn’t change everyday arithmetic.

You might think that this, finally, would be the end of the story. Instead, it was the beginning of a lively debate that continues to this day. It’s a debate that touches on what mathematics is for, whether infinity is merely a concept or something out there in the world, whether some axioms are right or wrong and what happens when you change them. It involves attempts to codify intuition, arguments about which rules “make sense” that blur the boundary between philosophy and mathematics. It also involves the professor my friend mentioned, W. H. Woodin.

Now, can I explain Woodin’s research to you?

No. I don’t understand it myself; it’s far more abstract and weird than any mathematics I’ve ever touched.

Despite that, I can tell you something about it. I can tell you about the quest he’s on, its history and its relevance, what is and is not at stake. I can get you excited for the same reasons that I’m excited; I can show you it’s important for the same reasons I think it’s important. I can give you the “flavor” of the topic, and broaden your view of the world you live in, one containing a hundred-year conversation about the nature of infinity.

My friend is right that the public will never understand everything. I’ll never understand everything either. But what we can do, what I strive to do, is to appreciate this wide weird world in all its glory. That, more than anything, is why I communicate science.

Amplitudes 2019 Retrospective

I’m back from Amplitudes 2019, and since I have more time I figured I’d write down a few more impressions.

Amplitudes runs all the way from practical LHC calculations to almost pure mathematics, and this conference had plenty of both as well as everything in between. On the more practical side a standard “pipeline” has developed: get a large number of integrals from generalized unitarity, reduce them to a more manageable number with integration-by-parts, and then compute them with differential equations. Vladimir Smirnov and Johannes Henn presented the state of the art in this pipeline: challenging QCD calculations that required powerful methods. Others aimed to replace various parts of the pipeline. Integration-by-parts could be avoided in the numerical unitarity approach discussed by Ben Page, or alternatively with the intersection theory techniques showcased by Pierpaolo Mastrolia. More radical departures included Stefan Weinzierl’s refinement of loop-tree duality, and Jacob Bourjaily’s advocacy of prescriptive unitarity. Robert Schabinger even brought up direct integration, though I mostly viewed his talk as an independent confirmation of the usefulness of Erik Panzer’s thesis. It also showcased an interesting integral that had previously been represented by Lorenzo Tancredi and collaborators as elliptic, but turned out to be writable in terms of more familiar functions. It’s going to be interesting to see whether other such integrals arise, and whether they can be spotted in advance.

On the other end of the scale, Francis Brown was the only speaker deep enough in the culture of mathematics to insist on doing a blackboard talk. Since the conference hall didn’t actually have a blackboard, this was accomplished by projecting video of a piece of paper that he wrote on as the talk progressed. Despite the awkward setup, the talk was impressively clear, though there were enough questions that he ran out of time at the end and had to “cheat” by just projecting his notes instead. He presented a few theorems about the sort of integrals that show up in string theory. Federico Zerbini and Eduardo Casali’s talks covered similar topics, with the latter also involving intersection theory. Intersection theory also appeared in a poster from grad student Andrzej Pokraka, which overall is a pretty impressively broad showing for a part of mathematics that Sebastian Mizera first introduced to the amplitudes community less than two years ago.

Nima Arkani-Hamed’s talk on Wednesday fell somewhere in between. A series of airline mishaps brought him there only a few hours before his talk, and his own busy schedule sent him back to the airport right after the last question. The talk itself covered several topics, tied together a bit better than usual by a nice account in the beginning of what might motivate a “polytope picture” of quantum field theory. One particularly interesting aspect was a suggestion of a space, smaller than the amplituhedron, that might more accurately describe the “alphabet” that appears in N=4 super Yang-Mills amplitudes. If his proposal works, it may be that the infinite alphabet we were worried about for eight-particle amplitudes is actually finite. Ömer Gürdoğan’s talk mentioned this, and drew out some implications. Overall, I’m still unclear as to what this story says about whether the alphabet contains square roots, but that’s a topic for another day. My talk was right after Nima’s, and while he went over-time as always, I compensated by accidentally going under-time. Overall, I think folks had fun regardless.

Though I don’t know how many people recognized this guy

Hexagon Functions VI: The Power Cosmic

I have a new paper out this week. It’s the long-awaited companion to a paper I blogged about a few months back, itself the latest step in a program that has made up a major chunk of my research.

The title is a bit of a mouthful, but I’ll walk you through it:

The Cosmic Galois Group and Extended Steinmann Relations for Planar N = 4 SYM Amplitudes

I calculate scattering amplitudes (roughly, probabilities that elementary particles bounce off each other) in a (not realistic, and not meant to be) theory called planar N=4 super-Yang-Mills (SYM for short). I can’t summarize everything we’ve been doing here, but if you read the blog posts I linked above and some of the Handy Handbooks linked at the top of the page you’ll hopefully get a clearer picture.

We started using the Steinmann Relations a few years ago. Discovered in the 60’s, the Steinmann relations restrict the kind of equations we can use to describe particle physics. Essentially, they mean that particles can’t travel two ways at once. In this paper, we extend the Steinmann relations beyond Steinmann’s original idea. We don’t yet know if we can prove this extension works, but it seems to be true for the amplitudes we’re calculating. While we’ve presented this in talks before, this is the first time we’ve published it, and it’s one of the big results of this paper.

The other, more exotic-sounding result, has to do with something called the Cosmic Galois Group.

Évariste Galois, the famously duel-prone mathematician, figured out relations between algebraic numbers (that is, numbers you can get out of algebraic equations) in terms of a mathematical structure called a group. Today, mathematicians are interested not just in algebraic numbers, but in relations between transcendental numbers as well, specifically a kind of transcendental number called a period. These numbers show up a lot in physics, so mathematicians have been thinking about a Galois group for transcendental numbers that show up in physics, a so-called Cosmic Galois Group.

(Cosmic here doesn’t mean it has to do with cosmology. As far as I can tell, mathematicians just thought it sounded cool and physics-y. They also started out with rather ambitious ideas about it, if you want a laugh check out the last few paragraphs of this talk by Cartier.)

For us, Cosmic Galois Theory lets us study the unusual numbers that show up in our calculations. Doing this, we’ve noticed that certain numbers simply don’t show up. For example, the Riemann zeta function shows up often in our results, evaluated at many different numbers…but never evaluated at the number three. Nor does any number related to that one through the Cosmic Galois Group show up. It’s as if the theory only likes some numbers, and not others.
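For a concrete sense of the numbers involved: the Riemann zeta function at a point s is just the infinite sum 1 + 1/2^s + 1/3^s + …. Here’s a crude numerical sketch (purely my own illustration, and a slow way to compute zeta):

```python
# A crude numerical look at zeta values: zeta(s) is the infinite sum
# 1 + 1/2**s + 1/3**s + ..., truncated here to a finite number of terms.
import math

def zeta(s: float, terms: int = 200_000) -> float:
    """Partial sum of the Riemann zeta series; rough, for illustration only."""
    return sum(1.0 / n**s for n in range(1, terms + 1))

print(zeta(2), math.pi**2 / 6)  # the partial sum approaches pi^2/6
print(zeta(3))                  # Apery's constant, roughly 1.2020569
```

It’s numbers like that second one, zeta evaluated at three, that mysteriously never show up in our results.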

This weird behavior has been observed before. Mathematicians can prove it happens for some simple theories, but it even applies to the theories that describe the real world, for example to calculations of the way an electron’s path is bent by a magnetic field. Each theory seems to have its own preferred family of numbers.

For us, this has been enormously useful. We calculate our amplitudes by guesswork, starting with the right “alphabet” and then filling in different combinations, as if we’re trying all possible answers to a word jumble. Cosmic Galois Theory and Extended Steinmann have enabled us to narrow down our guess dramatically, making it much easier and faster to get to the right answer.

More generally though, we hope to contribute to mathematicians’ investigations of Cosmic Galois Theory. Our examples are more complicated than the simple theories where they currently prove things, and contain more data than the more limited results from electrons. Hopefully together we can figure out why certain numbers show up and others don’t, and find interesting mathematical principles behind the theories that govern fundamental physics.

For now, I’ll leave you with a preview of a talk I’m giving in a couple weeks’ time:

The font, of course, is Cosmic Sans

Experimental Theoretical Physics

I was talking with some other physicists about my “Black Box Theory” thought experiment, where theorists have to compete with an impenetrable block of computer code. Even if the theorists come up with a “better” theory, that theory won’t predict anything that the code couldn’t already. If “predicting something new” is an essential part of science, then the theorists can no longer do science at all.

One of my colleagues made an interesting point: in the thought experiment, the theorists can’t predict new behaviors of reality. But they can predict new behaviors of the code.

Even when we have the right theory to describe the world, we can’t always calculate its consequences. Often we’re stuck in the same position as the theorists in the thought experiment, trying to understand the output of a theory that might as well be a black box. Increasingly, we are employing a kind of “experimental theoretical physics”. We try to predict the result of new calculations, just as experimentalists try to predict the result of new experiments.

This experimental approach seems to be a genuine cultural difference between physics and mathematics. There is such a thing as experimental mathematics, to be clear. And while mathematicians prefer proof, they’re not averse to working from a good conjecture. But when mathematicians calculate and conjecture, they still try to set a firm foundation. They’re precise about what they mean, and careful about what they imply.

“Experimental theoretical physics”, on the other hand, is much more like experimental physics itself. Physicists look for plausible patterns in the “data”, seeing if they make sense in some “physical” way. The conjectures aren’t always sharply posed, and the leaps of reasoning are often more reckless than the leaps of experimental mathematicians. We try to use intuition gleaned from a history of experiments on, and calculations about, the physical world.

There’s a real danger here, because mathematical formulas don’t behave like nature does. When we look at nature, we expect it to behave statistically. If we look at a large number of examples, we get more and more confident that they represent the behavior of the whole. This is sometimes dangerous in nature, but it’s even more dangerous in mathematics, because it’s often not clear what a good “sample” even is. Proving something is true “most of the time” is vastly different from proving it is true all of the time, especially when you’re looking at an infinity of possible examples. We can’t meet our favorite “five sigma” level of statistical confidence, or even know if we’re close.
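A classic mathematical cautionary tale makes the point (not a physics example, just my favorite illustration of why “true in every case we checked” is not a proof): Euler’s polynomial n² + n + 41 produces primes for forty values of n in a row, and then fails.

```python
# Euler's famous polynomial n**2 + n + 41: prime for n = 0 through 39, then
# composite at n = 40 (where it gives 41 * 41). Forty confirmations in a row,
# and the pattern is still false in general.
def is_prime(m: int) -> bool:
    return m >= 2 and all(m % d for d in range(2, int(m**0.5) + 1))

values = [n * n + n + 41 for n in range(41)]
print(all(is_prime(v) for v in values[:40]))  # True: a perfect streak...
print(is_prime(values[40]))                   # False: the streak ends at n = 40
```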

At the same time, experimental theoretical physics has real power. Experience may be a bad guide to mathematics, but it’s a better guide to the mathematics that specifically shows up in physics. And in practice, our recklessness can accomplish great things, uncovering behaviors mathematicians would never have found by themselves.

The key is to always keep in mind that the two fields are different. “Experimental theoretical physics” isn’t mathematics, and it isn’t pretending to be, any more than experimental physics is pretending to be theoretical physics. We’re gathering data and advancing tentative explanations, but we’re fully aware that they may not hold up when examined with full rigor. We want to inspire, to raise questions and get people to think about the principles that govern the messy physical theories we use to describe our world. Experimental physics, theoretical physics, and mathematics are all part of a shared ecosystem, and each has its role to play.

Two Loops, Five Particles

There’s a very long-term view of the amplitudes field that gets a lot of press. We’re supposed to be eliminating space and time, or rebuilding quantum field theory from scratch. We build castles in the clouds, seven-loop calculations and all-loop geometrical quantum jewels.

There’s a shorter-term problem, though, that gets much less press, despite arguably being a bigger part of the field right now. In amplitudes, we take theories and turn them into predictions, order by order and loop by loop. And when we want to compare those predictions to the real world, in most cases the best we can do is two loops and five particles.

Five particles here counts the particles coming in and going out: if two gluons collide and become three gluons, we count that as five particles, two in plus three out. Loops, meanwhile, measure the complexity of the calculation, the number of closed paths you can draw in a Feynman diagram. If you use more loops, you expect more precision: you’re approximating nature step by step.
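For the curious, loop counting has a tidy graph-theory formula: in a connected diagram, the number of independent closed paths is the number of internal lines, minus the number of vertices, plus one. A quick sketch (the diagrams here are my own toy examples, not ones from any particular calculation):

```python
# Counting loops in a connected diagram: the number of independent closed
# paths equals (internal lines) - (vertices) + 1.

def loop_count(num_vertices: int, internal_edges: list[tuple[int, int]]) -> int:
    """Number of independent cycles of a connected graph: E - V + 1."""
    return len(internal_edges) - num_vertices + 1

# A one-loop "box": four vertices joined in a square.
box = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(loop_count(4, box))  # 1

# A two-loop "double box": two squares sharing the edge between vertices 1 and 2.
double_box = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 4), (4, 5), (5, 2)]
print(loop_count(6, double_box))  # 2
```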

As a field we’re pretty good at one-loop calculations, enough to do them for pretty much any number of particles. As we try for more loops though, things rapidly get harder. Already for two loops, in many cases, we start struggling. We can do better if we dial down the number of particles: there are three-particle and two-particle calculations that get up to three, four, or even five loops. For more particles though, we can’t do as much. Thus the current state of the art, the field’s short term goal: two loops, five particles.

When you hear people like me talk about crazier calculations, we’ve usually got a trick up our sleeve. Often we’re looking at a much simpler theory, one that doesn’t describe the real world. For example, I like working with a planar theory, with lots of supersymmetry. Remove even one of those simplifications, and suddenly our life becomes a lot harder. Instead of seven loops and six particles, we get genuinely excited about, well, two loops, five particles.

Luckily, “two loops, five particles” is also about as good as the experiments can measure. As the Large Hadron Collider gathers more data, it measures physics to higher and higher precision. Currently, for five-particle processes, its precision is just starting to be comparable with two-loop calculations. The result has been a flurry of activity, applying everything from powerful numerical techniques to algebraic geometry to the problem, getting results that genuinely apply to the real world.

“Two loops, five particles” isn’t as cool of a slogan as “space-time is doomed”. It doesn’t get much, or any, media attention. But, steadily and quietly, it’s become one of the hottest topics in the amplitudes field.

Research Rooms, Collaboration Spaces

Math and physics are different fields with different cultures. Some of those differences are obvious, others more subtle.

I recently remembered a subtle difference I noticed at the University of Waterloo. The math building there has “research rooms”, rooms intended for groups of mathematicians to collaborate. The idea is that you invite visitors to the department, reserve the room, and spend all day with them trying to iron out a proof or the like.

Theoretical physicists collaborate like this sometimes too, but in my experience physics institutes don’t typically have this kind of “research room”. Instead, they have “collaboration spaces”. Unlike a “research room”, you don’t reserve a “collaboration space”. Typically, they aren’t even rooms: they’re a set of blackboards in the coffee room, or a cluster of chairs in the corner between two hallways. They’re open spaces, designed so that passers-by can overhear the conversation and (potentially) join in.

That’s not to say physicists never shut themselves in a room for a day (or night) to work. But when they do, it’s not usually in a dedicated space. Instead, it’s in an office, or a commandeered conference room.

Waterloo’s “research rooms” and physics institutes’ “collaboration spaces” can be used for similar purposes. The difference is in what they encourage.

The point of a “collaboration space” is to start new collaborations. These spaces are open in order to take advantage of serendipity: if you’re getting coffee or walking down the hall, you might hear something interesting and spark something new, with people you hadn’t planned to collaborate with before. Institutes with “collaboration spaces” are trying to make new connections between researchers, to be the starting point for new ideas.

The point of a “research room” is to finish a collaboration. They’re for researchers who are already collaborating, who know they’re going to need a room and can reserve it in advance. They’re enclosed in order to shut out distractions, to make sure the collaborators can sit down and focus and get something done. Institutes with “research rooms” want to give their researchers space to complete projects when they might otherwise be too occupied with other things.

I’m curious if this difference is more widespread. Do math departments generally tend to have “research rooms” or “collaboration spaces”? Are there physics departments with “research rooms”? I suspect there is a real cultural difference here, in what each field thinks it needs to encourage.