
What’s in a Conjecture? An ER=EPR Example

A few weeks back, Caltech’s Institute of Quantum Information and Matter released a short film titled Quantum is Calling. It’s the second in what looks likely to become a series of pieces featuring Hollywood actors popularizing ideas in physics. The first used the game of Quantum Chess to talk about superposition and entanglement. This one, featuring Zoe Saldana, is about a conjecture by Juan Maldacena and Leonard Susskind called ER=EPR. The conjecture speculates that pairs of entangled particles (as investigated by Einstein, Podolsky, and Rosen) are in some sense secretly connected by wormholes (or Einstein-Rosen bridges).

The film is fun, but I’m not sure ER=EPR is established well enough to deserve this kind of treatment.

At this point, some of you are nodding your heads for the wrong reason. You’re thinking I’m saying this because ER=EPR is a conjecture.

I’m not saying that.

The fact of the matter is, conjectures play a very important role in theoretical physics, and “conjecture” covers a wide range. Some conjectures are supported by incredibly strong evidence, just short of mathematical proof. Others are wild speculations: “wouldn’t it be convenient if…” ER=EPR is, well…somewhere in the middle.

Most popularizers don’t spend much effort distinguishing things in this middle ground. I’d like to talk a bit about the different sorts of evidence conjectures can have, using ER=EPR as an example.


Our friendly neighborhood space octopus

The first level of evidence is motivation.

At its weakest, motivation is the “wouldn’t it be convenient if…” line of reasoning. Some conjectures never get past this point. Hawking’s chronology protection conjecture, for instance, points out that physics (and to some extent logic) has a hard time dealing with time travel, and wouldn’t it be convenient if time travel was impossible?

For ER=EPR, this kind of motivation comes from the black hole firewall paradox. Without going into it in detail, arguments suggested that the event horizons of older black holes would resemble walls of fire, incinerating anything that fell in, in contrast with Einstein’s picture in which passing the horizon has no obvious effect at the time. ER=EPR provides one way to avoid this argument, making event horizons subtle and smooth once more.

Motivation isn’t just “wouldn’t it be convenient if…” though. It can also include stronger arguments: suggestive comparisons that, while they could be coincidental, when put together draw a stronger picture.

In ER=EPR, this comes from certain similarities between the type of wormhole Maldacena and Susskind were considering, and pairs of entangled particles. Both connect two different places, but both do so in an unusually limited way. The wormholes of ER=EPR are non-traversable: you cannot travel through them. Entangled particles can’t be traveled through (as you would expect), but more generally can’t be communicated through: there are theorems to prove it. This is the kind of suggestive similarity that can begin to motivate a conjecture.
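The “can’t be communicated through” part can be checked in a small calculation. Here’s a minimal numpy sketch (my own illustration, not anything from the ER=EPR papers): whatever measurement one party performs on their half of a Bell pair, the other party’s reduced density matrix is unchanged, so no message gets through.

```python
import numpy as np

# Bell state |Φ+> = (|00> + |11>) / sqrt(2), as a 4-component vector
# in the basis |00>, |01>, |10>, |11>. Qubit A is held by the sender,
# qubit B by the would-be receiver.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())  # full two-qubit density matrix

def reduced_state_B(rho):
    """Partial trace over qubit A: the state seen by someone holding only B."""
    r = rho.reshape(2, 2, 2, 2)  # indices: (A, B, A', B')
    return np.einsum('abac->bc', r)  # sum over the diagonal A index

# B's state before anyone touches A: the maximally mixed state I/2.
before = reduced_state_B(rho)

# Now A measures in the Z basis. Without being told the outcome, B's
# state is the outcome-weighted mixture of the post-measurement states.
P0 = np.kron(np.diag([1, 0]), np.eye(2))  # projector onto A = |0>
P1 = np.kron(np.diag([0, 1]), np.eye(2))  # projector onto A = |1>
rho_after = P0 @ rho @ P0 + P1 @ rho @ P1
after = reduced_state_B(rho_after)

print(np.allclose(before, np.eye(2) / 2))  # True
print(np.allclose(before, after))          # True: no signal reaches B
```

The same check works for any measurement on A, which is the content of the no-communication theorem.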

(Amusingly, the plot of the film breaks this in both directions. Keanu Reeves can neither steal your cat through a wormhole, nor send you coded messages with entangled particles.)


Nor live forever as the portrait in his attic withers away

Motivation is a good reason to investigate something, but a bad reason to believe it. Luckily, conjectures can have stronger forms of evidence. Many of the strongest conjectures are correspondences, supported by a wealth of non-trivial examples.

In science, the gold standard has always been experimental evidence. There’s a reason for that: when you do an experiment, you’re taking a risk. Doing an experiment gives reality a chance to prove you wrong. In a good experiment (a non-trivial one) the result isn’t obvious from the beginning, so that success or failure tells you something new about the universe.

In theoretical physics, there are things we can’t test with experiments, either because they’re far beyond our capabilities or because the claims are mathematical. Despite this, the overall philosophy of experiments is still relevant, especially when we’re studying a correspondence.

“Correspondence” is a word we use to refer to situations where two different theories are unexpectedly computing the same thing. Often, these are very different theories, living in different dimensions with different sorts of particles. With the right “dictionary”, though, you can translate between them, doing a calculation in one theory that matches a calculation in the other one.

Even when we can’t do non-trivial experiments, then, we can still have non-trivial examples. When the result of a calculation isn’t obvious from the beginning, showing that it matches on both sides of a correspondence takes the same sort of risk as doing an experiment, and gives the same sort of evidence.

Some of the best-supported conjectures in theoretical physics have this form. AdS/CFT is technically a conjecture: a correspondence between string theory in a hyperbola-shaped space and my favorite theory, N=4 super Yang-Mills. Despite being a conjecture, its wealth of nontrivial examples is so great that it would be extremely surprising if it turned out to be false.

ER=EPR is also a correspondence, between entangled particles on the one hand and wormholes on the other. Does it have nontrivial examples?

Some, but not enough. Originally, it was based on one core example, an entangled state that could be cleanly matched to the simplest wormhole. Now, new examples have been added, covering wormholes with electric fields and higher spins. The full “dictionary” is still unclear, with some pairs of entangled particles being harder to describe in terms of wormholes. So while this kind of evidence is being built, it isn’t yet as solid as that for our best conjectures.

I’m fine with people popularizing this kind of conjecture. It deserves blog posts and press articles, and it’s a fine idea to have fun with. I wouldn’t be uncomfortable with the Bohemian Gravity guy doing a piece on it, for example. But for the second installment of a star-studded series like the one Caltech is doing…it’s not really there yet, and putting it there gives people the wrong idea.

I hope I’ve given you a better idea of the different types of conjectures, from the fuzziest to those just shy of certain. I’d like to do this kind of piece more often, though in future I’ll probably stick with topics in my sub-field (where I actually know what I’m talking about 😉 ). If there’s a particular conjecture you’re curious about, ask in the comments!

A Tale of Two Archives

When it comes to articles about theoretical physics, I have a pet peeve, one made all the more annoying by the fact that it appears even in pieces that are otherwise well written. It involves the following disclaimer:

“This article has not been peer-reviewed.”

Here’s the thing: if you’re dealing with experiments, peer review is very important. Plenty of experiments have subtle problems with their methods, enough that it’s important to have a group of experts who can check them. In experimental fields, you really shouldn’t trust things that haven’t been through a journal yet: there’s just a lot that can go wrong.

In theoretical physics, though, peer review is important for different reasons. Most papers are mathematically rigorous enough that they’re not going to be wrong per se, and most of the ways they could be wrong won’t be caught by peer review. While peer review sometimes does catch mistakes, much more often it’s about assessing the significance of a result. Peer review determines whether a result gets into a prestigious journal or a less prestigious one, which in turn matters for job and grant applications.

As such, it doesn’t really make sense for a journalist to point out that a theoretical physics paper hasn’t been peer reviewed yet. If you think it’s important enough to write an article about, then you’ve already decided it’s significant: peer review wasn’t going to tell you anything else.

We physicists post our papers to arXiv, a free-to-access paper repository, before submitting them to journals. While arXiv does have some moderation, it’s not much: pretty much anyone in the field can post whatever they want.

This leaves a lot of people confused. In that sort of system, how do we know which papers to trust?

Let’s compare to another archive: Archive of Our Own, or AO3 for short.

Unlike arXiv, AO3 hosts not physics, but fanfiction. However, like arXiv it’s quite lightly moderated and free to access. On arXiv you want papers you can trust; on AO3 you want stories you enjoy. In each case, if anyone can post, how do you find them?

The first step is filtering. AO3 and arXiv both have systems of tags and subject headings. The headings on arXiv are simpler and more heavily moderated than those on AO3, but they both serve the purpose of letting people filter out the subjects, whether scientific or fictional, that they find interesting. If you’re interested in astrophysics, try astro-ph on arXiv. If you want Harry Potter fanfiction, try the “Harry Potter – J.K. Rowling” tag on AO3.

Beyond that, it helps to pay attention to authors. When an author has written something you like, it’s worth it not only to keep up with other things they write, but to see which other authors they like and pay attention to them as well. That’s true whether the author is Juan Maldacena or your favorite source of Twilight fanfic.

Even if you follow all of this, you can’t trust every paper you find on arXiv. You also won’t enjoy everything you dig up on AO3. Either way, publication (in journals or books) won’t solve your problem: both are an additional filter, but not an infallible one. Judgement is still necessary.

This is all to say that “this article has not been peer-reviewed” can be a useful warning, but often isn’t. In theoretical physics, knowing who wrote an article and what it’s about will often tell you much more than whether or not it’s been peer-reviewed yet.

Hexagon Functions Meet the Amplituhedron: Thinking Positive

I finished a new paper recently; it’s up on arXiv now.

This time, we’re collaborating with Jaroslav Trnka, of Amplituhedron fame, to investigate connections between the Amplituhedron and our hexagon function approach.

The Amplituhedron is a way to think about scattering amplitudes in our favorite toy model theory, N=4 super Yang-Mills. Specifically, it describes amplitudes as the “volume” of some geometric space.

Here’s something you might expect: if something is a volume, it should be positive, right? You can’t have a negative amount of space. So you’d naturally guess that these scattering amplitudes, if they’re really the “volume” of something, should be positive.

“Volume” is in quotation marks there for a reason, though, because the real story is a bit more complicated. The Amplituhedron isn’t literally the volume of some space: there are a bunch of other mathematical steps between the geometric story of the Amplituhedron on the one end and the final amplitude on the other. If it were literally a volume, calculating it would be quite a bit easier: mathematicians have gotten very talented at calculating volumes. But if it were literally a volume, it would have to be positive.

What our paper demonstrates is that, in the right regions (selected by the structure of the Amplituhedron), the amplitudes we’ve calculated so far are in fact positive. That first, basic requirement for the amplitude to actually literally be a volume is satisfied.

Of course, this doesn’t prove anything. There’s still a lot of work to do to actually find the thing the amplitude is the volume of, and this isn’t even proof that such a thing exists. It’s another, small piece of evidence. But it’s a reassuring one, and it’s nice to begin to link our approach with the Amplituhedron folks.

This week was the 75th birthday of John Schwarz, one of the founders of string theory and a discoverer of N=4 super Yang-Mills. We’ve dedicated the paper to him. His influence on the field, like the amplitudes of N=4 themselves, has been consistently positive.

What If the Field Is Doomed?

Around Halloween, I have a tradition of exploring the spooky and/or scary side of physics (sometimes rather tenuously). This time, I want to talk about something particle physicists find scary: the future of the field.

For a long time now, our field has centered around particle colliders. Early colliders confirmed the existence of quarks and gluons, and populated the Standard Model with a wealth of particles, some expected and some not. Now, an enormous amount of effort has poured into the Large Hadron Collider, which found the Higgs…and so far, nothing else.

Plans are being discussed for an even larger collider, in Europe or China, but it’s not clear that either will be funded. Even if the case for discovering new physics at such a collider isn’t as strong, there are properties of the Higgs that the LHC won’t be able to measure, and it’s important to check them with a more powerful machine.

That’s the case we’ll have to make to the public, if we want such a collider to be built. But in addition to the scientific reasons, there are selfish reasons to hope for a new collider. Without one, it’s not clear the field can survive in its current form.

By “the field”, here, I don’t just mean those focused on making predictions for collider physics. My work isn’t plugged particularly tightly into the real world, and the same is true of most string theorists. Naively, you’d think it wouldn’t matter to us if a new collider gets built.

The trouble is, physics is interconnected. We may not all make predictions about the world, but the purpose of the tools we build and concepts we explore is to eventually make contact. On grant applications, we talk about that future, one that leads not just to understanding the mathematics and models we use but to understanding reality. And for a long while, a major theme in those grant applications has been collider physics.

Different sub-fields are vulnerable to this in different ways. Surprisingly, the people who directly make predictions for the LHC might have it easiest. Many of them can pivot, and make predictions for cosmological observations and cheaper dark matter detection experiments. Quite a few are already doing so.

It’s harder for my field, for amplitudeology. We try to push the calculation techniques of theoretical physics to greater and greater precision…but without colliders, there are fewer experiments that can match that precision. Cosmological observations and dark matter detection won’t need four-loop calculations.

If there isn’t a next big collider, our field won’t dry up overnight. Our work is disconnected enough, at a far enough remove from reality, that it takes time for that sort of change to be reflected in our funding. Optimistically, this gives people enough time to change gears and alter their focus to the less collider-dependent parts of the field. Pessimistically, it means people would be working in a zombie field, one that shambles on even though it’s already dead and can’t admit it.


Well I had to use some Halloween imagery

My hope is that this won’t happen. Even if the new colliders don’t get approved and collider physics goes dormant, I’d like to think my colleagues are adaptable enough to stay useful as the world’s demands change. But I’m young in this field, I haven’t seen it face these kinds of challenges before. And so, I worry.

Four Gravitons in China

I’m in China this week, at the School and Workshop on Amplitudes in Beijing 2016.


It’s a little chilly this time of year, so the dragons have accessorized

A few years back, I mentioned that there didn’t seem to be many amplitudeologists in Asia. That’s changed quite a lot over just the last few years. Song He and Yu-tin Huang went from postdocs in the west to faculty positions in China and Taiwan, respectively, while Bo Feng’s group in China has expanded. As a consequence, there’s now a substantial community here. This is the third “Amplitudes in Asia” conference, with past years meeting in Hong Kong and Taipei.

The “school” part of the conference was last week. I wasn’t here, but the students here seem to have enjoyed it a lot. This week is the “workshop” part, and there have been talks on a variety of parts of amplitudes. Nima showed up on Wednesday and managed to talk for his usual impressively long amount of time, finishing with a public lecture about the future of physics. The talk was ostensibly about why China should build the next big collider, but for the most part it ended up as a more general talk about exciting open questions in high energy physics. The talks were recorded, so they should be online at some point.

Congratulations to Thouless, Haldane, and Kosterlitz!

I’m traveling this week in sunny California, so I don’t have time for a long post, but I thought I should mention that the 2016 Nobel Prize in Physics has been announced. Instead of going to LIGO, as many had expected, it went to David Thouless, Duncan Haldane, and Michael Kosterlitz. LIGO will have to wait for next year.

Thouless, Haldane, and Kosterlitz are condensed matter theorists. While particle physics studies the world at the smallest scales and astrophysics at the largest, condensed matter physics lives in between, explaining the properties of materials on an everyday scale. This can involve inventing new materials, or unusual states of matter, with superconductors being probably the most well-known to the public. Condensed matter gets a lot less press than particle physics, but it’s a much bigger field: overall, the majority of physicists study something under the condensed matter umbrella.

This year’s Nobel isn’t for a single discovery. Rather, it’s for methods developed over the years that introduced topology into condensed matter physics.

Topology often gets described in terms of coffee cups and donuts. In topology, two shapes are the same if you can smoothly change one into another, so a coffee cup and a donut are really the same shape.

Most explanations stop there, which makes it hard to see how topology could be useful for physics. The missing part is that topology studies not just which shapes can smoothly change into each other, but which things, in general, can change smoothly into each other.

That’s important, because in physics most changes are smooth. If two things can’t change smoothly into each other, something special needs to happen to bridge the gap between them.

There are a lot of different sorts of implications this can have. Topology means that some materials can be described by a number that’s conserved no matter what (smooth) changes occur, leading to experiments that see specific “levels” rather than a continuous range of outcomes. It means that certain physical setups can’t change smoothly into other ones, which protects those setups from changing: an idea people are investigating in the quest to build a quantum computer, where extremely delicate quantum states can be disrupted by even the slightest change.
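To make “a number that’s conserved no matter what (smooth) changes occur” concrete, here’s a minimal Python sketch of the simplest such invariant, the winding number of a loop around a point. (This is my own toy example, not part of the prize-winning work itself.)

```python
import math

def winding_number(points):
    """Total signed angle swept by a closed loop around the origin,
    in units of full turns. Smooth deformations that avoid the origin
    can't change it: it's an integer, so it can't move continuously."""
    total = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        # signed angle between successive position vectors
        total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    return round(total / (2 * math.pi))

# A circle around the origin...
circle = [(math.cos(t), math.sin(t))
          for t in (2 * math.pi * k / 100 for k in range(100))]
# ...a wobbly deformation of it (still enclosing the origin)...
wobbly = [((1 + 0.3 * math.sin(5 * t)) * math.cos(t),
           (1 + 0.3 * math.sin(5 * t)) * math.sin(t))
          for t in (2 * math.pi * k / 100 for k in range(100))]
# ...and a loop that never encloses the origin at all.
off_to_the_side = [(3 + math.cos(t), math.sin(t))
                   for t in (2 * math.pi * k / 100 for k in range(100))]

print(winding_number(circle))           # 1
print(winding_number(wobbly))           # 1: deforming didn't change it
print(winding_number(off_to_the_side))  # 0: a different topological class
```

No smooth wiggling turns the third loop into the first two: to change the winding number, the loop has to cross the origin, and that’s the “something special” the invariant protects against.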

Overall, topology has been enormously important in physics, and Thouless, Haldane, and Kosterlitz deserve a significant chunk of the credit for bringing it into the spotlight.

The Parable of the Entanglers and the Bootstrappers

There’s been some buzz around a recent Quanta article by K. C. Cole, The Strange Second Life of String Theory. I found it a bit of a simplistic take on the topic, so I thought I’d offer a different one.

String theory has been called the particle physicist’s approach to quantum gravity. Other approaches use the discovery of general relativity as a model: they’re looking for a big conceptual break from older theories. String theory, in contrast, starts out with a technical problem (naive quantum gravity calculations that give infinity), proposes physical objects that could solve the problem (strings, branes), and figures out which theories of these objects are consistent with existing data (originally the five superstring theories, now all understood as parts of M theory).

That approach worked. It didn’t work all the way, because regardless of whether there are indirect tests that can shed light on quantum gravity, particle physics-style tests are far beyond our capabilities. But in some sense, it went as far as it can: we’ve got a potential solution to the problem, and (apart from some controversy about the cosmological constant) it looks consistent with observations. Until actual evidence surfaces, that’s the end of that particular story.

When people talk about the failure of string theory, they’re usually talking about its aspirations as a “theory of everything”. String theory requires the world to have eleven dimensions, with seven curled up small enough that we can’t observe them. Different arrangements of those dimensions lead to different four-dimensional particles. For a time, it was thought that there would be only a few possible arrangements: few enough that people could find the one that describes the world and use it to predict undiscovered particles.

That particular dream didn’t work out. Instead, it became apparent that there were a truly vast number of different arrangements of dimensions, with no unique prediction likely to surface.

By the time I took my first string theory course in grad school, all of this was well established. I was entering a field shaped by these two facts: string theory’s success as a particle-physics style solution to quantum gravity, and its failure as a uniquely predictive theory of everything.

The quirky thing about science: sociologically, success and failure look pretty similar. Either way, it’s time to find a new project.

A colleague of mine recently said that we’re all either entanglers or bootstrappers. It was a joke, based on two massive grants from the Simons Foundation. But it’s also a good way to summarize two different ways string theory has moved on, from its success and from its failure.

The entanglers start from string theory’s success and say, what’s next?

As it turns out, a particle-physics style understanding of quantum gravity doesn’t tell you everything you need to know. Some of the big conceptual questions the more general relativity-esque approaches were interested in are still worth asking. Luckily, string theory provides tools to answer them.

Many of those answers come from AdS/CFT, the discovery that string theory in a particular warped space-time is dual (secretly the same theory) to a more particle-physics style theory on the edge of that space-time. With that discovery, people could start understanding properties of gravity in terms of properties of particle-physics style theories. They could use concepts like information, complexity, and quantum entanglement (hence “entanglers”) to ask deeper questions about the structure of space-time and the nature of black holes.

The bootstrappers, meanwhile, start from string theory’s failure and ask, what can we do with it?

Twisting up the dimensions of string theory yields a vast number of different arrangements of particles. Rather than viewing this as a problem, why not draw on it as a resource?

“Bootstrappers” explore this space of particle-physics style theories, using ones with interesting properties to find powerful calculation tricks. The name comes from the conformal bootstrap, a technique that finds conformal theories (roughly: theories that are the same at every scale) by “pulling itself up by its own bootstraps”, using nothing but a kind of self-consistency.

Many accounts, including Cole’s, trace people like the bootstrappers back to AdS/CFT as well, crediting it with inspiring string theorists to take a closer look at particle physics-style theories. That may be true in some cases, but I don’t think it’s the whole story: my subfield is bootstrappy, and while it has drawn on AdS/CFT that wasn’t what got it started. Overall, I think it’s more the case that the tools of string theory’s “particle physics-esque approach”, like conformal theories and supersymmetry, ended up (perhaps unsurprisingly) useful for understanding particle physics-style theories.

Not everyone is a “bootstrapper” or an “entangler”, even in the broad sense I’m using the words. The two groups also sometimes overlap. Nevertheless, it’s a good way to think about what string theorists are doing these days. Both of these groups start out learning string theory: it’s the only way to learn about AdS/CFT, and it introduces the bootstrappers to a bunch of powerful particle physics tools all in one course. Where they go from there varies, and can be more or less “stringy”. But it’s research that wouldn’t have existed without string theory to get it started.

So You Want to Prove String Theory, Part II: How Can QCD Be a String Theory?

A couple weeks back, I had a post about Nima Arkani-Hamed’s talk at Strings 2016. Nima and his collaborators were trying to find what sorts of scattering amplitudes (formulas that calculate the chance that particles scatter off each other) are allowed in a theory of quantum gravity. Their goal was to show that, with certain assumptions, string theory gives the only consistent answer.

At the time, my old advisor Michael Douglas suggested that I might find Zohar Komargodski’s talk more interesting. Now that I’ve finally gotten around to watching it, I agree. The story is cleaner, more conclusive…and it gives me an excuse to say something else I’ve been meaning to talk about.

Zohar Komargodski has a track record of deriving interesting results that are true not just for the sorts of toy models we like to work with but for realistic theories as well. He’s collaborating with amplitudes miracle-worker Simon Caron-Huot (who I’ve collaborated with recently), Amit Sever (one of the integrability wizards who came up with the POPE program) and Alexander Zhiboedov, whose name seems to show up all over the place. Overall, the team is 100% hot young talent, which tends to be a recipe for success.

While Nima’s calculation focuses on gravity, Zohar and company are asking a broader question. They’re looking at any theory with particles of high spin and nonzero mass. Like Nima, they’re looking at scattering amplitudes, in the limit that the forces involved are weak. Unlike Nima, they’re focusing on a particular limit: rather than trying to fix the full form of the amplitude, they’re interested in how it behaves for extreme, unphysical values for the particles’ momenta. Despite being unphysical, this limit can reveal something about how the theory works.

What they figured out is that, for the sorts of theories they’re looking at, the amplitude has to take a particular form in their unphysical limit. In particular, it takes a form that indicates the presence of strings.

What sort of theories are they looking at? What theories have “particles of high spin and nonzero mass”? Well, some are string theories. Others are Yang-Mills theories … theories similar to QCD.

For the experts, I encourage you to watch Zohar’s talk or read the paper for more detail. It’s a fun story that showcases how very general constraints on scattering amplitudes can translate into quite specific statements.

For the non-experts, though, there’s something that may already be confusing. When I’ve talked about Yang-Mills theories before, I’ve talked about them in terms of particles of spin 1. Where did these “higher spin” particles come from? And where are the strings? How can there be strings in a theory that I’ve described as “similar to QCD”?

If I just stuck to the higher spin particles, things could almost stay familiar. The fundamental particles of Yang-Mills theories have spin 1, but these particles can combine into composite particles, which can have higher spin and higher mass. That should be intuitive: in some sense, it’s just like protons, neutrons, and electrons combining to form atoms.

What about the strings? I’ve actually talked about that before, but I’d like to try out a new analogy. Have you ever heard of Conway’s Game of Life?

Not this one! (the board game)

This one! (Gosper’s glider gun)

Conway’s Game of Life starts with a grid of black and white squares, and evolves in steps, with each square’s color determined by its own previous color and the colors of adjacent squares in the last step. “Fundamentally”, the game is just those rules. In practice, though, structure can emerge: a zoo of self-propagating creatures that dance across the screen.
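The rules really do fit in a few lines. Here’s a minimal Python sketch (the coordinate convention and the glider layout are just one common choice):

```python
from collections import Counter

def step(live):
    """One step of Conway's Game of Life. `live` is the set of (x, y)
    coordinates of live cells; every other cell is dead."""
    # Count, for every cell, how many live neighbors it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is live next step if it has exactly 3 live neighbors,
    # or 2 live neighbors and was already live.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider", one of the simplest self-propagating creatures:
# after 4 steps it reproduces itself, shifted one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in `step` mentions gliders; they’re a structure that emerges once you let the rules run.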

The strings that can show up in Yang-Mills theories are like this. They aren’t introduced directly in the definition of the theory. Instead, they’re consequences: structures that form when you let the rules evolve and see what they create. They’re another description of the theory, one with its own advantages.

When I tell people I’m a theoretical physicist, they inevitably ask me “Have any of your theories been tested?” They’re operating from one idea of what a theoretical physicist does: propose new theories to describe the world, based on available evidence. Lots of theorists do that; they’re called phenomenologists. But it’s not what I do, or what most theorists I interact with day-to-day do.

So I describe what I do, how I test new mathematical techniques to make particle physics calculations faster. And in general, that’s pretty easy for people to understand. Just as they can imagine people out there testing theories, they can imagine people who work to support the others, making tools to make their work easier. But while that’s what I do, it’s not the best description of what most of my colleagues do.

What most theorists I know do is like finding new animals in Conway’s game of life. They start with theories for which we know the rules: well-tested theories like QCD, or well-studied proposals like string theory. They ask themselves, not how they can change the rules, but what results the rules have. They look for structures, and in doing so find new perspectives, learning to see the animals that live on Conway’s black and white grid. (This is something I’ve gestured at before, but this seems like a cleaner framing.)

Doing that, theorists have seen strings in the structure of QCD-like theories. And now Zohar and collaborators have a clean argument that the structures others have seen should show up, not only there, but in a broader class of theories.

This isn’t about whether the world is fundamentally described by string theory, ten dimensions and all. That’s an entirely different topic. What it is is a question about what sorts of structures emerge when we try to describe the world. What it does is show that strings are, in some sense (and, as for Nima, under some conditions), inevitable, that they come out of our rules even if we don’t expect them to.

Hexagon Functions IV: Steinmann Harder

It’s paper season! I’ve got another paper out this week, this one a continuation of the hexagon function story.

The story so far:

My collaborators and I have been calculating “six-particle” (two particles collide, four come out, or three collide, three come out…) scattering amplitudes (probabilities that particles scatter) in N=4 super Yang-Mills. We calculate them starting with an ansatz (a guess, basically) made up of a class of functions called hexagon functions: “hexagon” because they’re the right functions for six-particle scattering. We then narrow down our guess by bringing in other information: for example, if two particles are close to lining up, our answer needs to match the one calculated with something called the POPE, so we can throw out guesses that don’t match that. In the end, only one guess survives, and we can check that it’s the right answer.

So what’s new this time?

More loops:

In quantum field theory, most of our calculations are approximate, and we measure the precision in something called loops. The more loops, the closer we are to the exact result, and the more complicated the calculation becomes.
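Schematically (my notation here, standard in the field but not spelled out in the paper), the loop expansion is a power series in the coupling $g^2$, with each loop order contributing one more power:

```latex
A \;=\; A^{(0)} \;+\; g^2\, A^{(1)} \;+\; g^4\, A^{(2)} \;+\; \cdots \;+\; g^{2L}\, A^{(L)} \;+\; \cdots
```

"Five loops" means computing the coefficient $A^{(5)}$.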

This time, we’re at five loops of precision. To give you an idea of how complicated that is: I store these functions in text files. We’ve got a new, more efficient notation for them. With that, the two-loop functions fit into files around 20KB. Three loops, 500KB. Four, 15MB. And five? 300MB.
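Taking the file sizes quoted above at face value, the growth rate per loop comes out to roughly a factor of twenty to thirty each time:

```python
# File sizes quoted in the post, in kilobytes, for loops two through five.
sizes_kb = {2: 20, 3: 500, 4: 15_000, 5: 300_000}

# Growth factor from each loop order to the next.
ratios = [sizes_kb[L + 1] / sizes_kb[L] for L in (2, 3, 4)]
print(ratios)  # [25.0, 30.0, 20.0]
```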

So if you want to imagine five loops, think about something that needs to be stored in a 300MB text file.

More insight:

We started out having noticed some weird new symmetries of our old results, so we brought in Simon Caron-Huot, expert on weird new symmetries. He couldn't figure out that one…but he did notice an entirely different symmetry, one that turned out to have been first noticed in the 1960s, called the Steinmann relations.

The core idea of the Steinmann relations goes back to the old method of calculating amplitudes, with Feynman diagrams. In Feynman diagrams, lines represent particles traveling from one part of the diagram to the other. In a simplified form, the Steinmann conditions are telling us that diagrams can’t take two mutually exclusive shapes at the same time. If three particles are going one way, they can’t also be going another way.
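In equations (roughly, and as I understand the standard statement in the literature), the Steinmann relations say that double discontinuities of the amplitude in overlapping channels vanish. For example, for the channels built from particles 2, 3, 4 and from particles 3, 4, 5:

```latex
\mathrm{Disc}_{s_{234}}\!\left(\mathrm{Disc}_{s_{345}}\, A\right) \;=\; 0
```

Here $s_{234}$ and $s_{345}$ are the squared invariant masses of the two overlapping groups of particles: the amplitude can't have singularities in both at once, which is the "mutually exclusive shapes" statement above.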

steinmann2

With the Steinmann relations, things suddenly became a whole lot easier. Calculations that we had taken months to do, Simon was now doing in a week. Finally we could narrow things down and get the full answer, and we could do it with clear, physics-based rules.

More bootstrap:

In physics, when we call something a "bootstrap" it's in reference to the phrase "pull yourself up by your own bootstraps". That impossible task, lifting yourself with no outside support, is essentially what we do when we "bootstrap": we do a calculation with no external input, simply by applying general rules.

In the past, our hexagon function calculations always had some sort of external data. For the first time, with the Steinmann conditions, we don’t need that. Every constraint, everything we do to narrow down our guess, is either a general rule or comes out of our lower-loop results. We never need detailed information from anywhere else.

This is big, because it might allow us to avoid loops altogether. Normally, each loop is an approximation, narrowed down using similar approximations from others. If we don’t need the approximations from others, though, then we might not need any approximations at all. For this particular theory, for this toy model, we might be able to actually calculate scattering amplitudes exactly, for any strength of forces and any energy. Nobody’s been able to do that for this kind of theory before.

We’re already making progress. We’ve got some test cases, simpler quantities that we can understand with no approximations. We’re starting to understand the tools we need, the pieces of our bootstrap. We’ve got a real chance, now, of doing something really fundamentally new.

So keep watching this blog, keep your eyes on arXiv: big things are coming.

A Papal Resummation

I’ve got a new paper up this week. This one is a collaboration with Ho Tat Lam, who just finished a Master’s degree at Perimeter and will be at Princeton in the fall.

A while back, I mentioned that Perimeter’s Master’s program was holding a Winter School up in the wilderness of Ontario. In between skiing and ice skating, I worked with a group of students attempting to sum up something called the Pentagon Operator Product Expansion, or POPE.

SpacePope

The (Rapidity) Space Pope, for a joke only three people will get

While we didn’t finish the job there, we made a lot of progress, and Ho Tat and I kept working on it.

This is the first time I’ve been the senior member of a collaboration, and it was an interesting experience. There’s a lot that you feel like you know perfectly well until you sit down and try to teach it. Getting things out of my head and into someone else’s is a challenge, but it’s one I’m getting better at.

The POPE is an alternate way of calculating scattering amplitudes in N=4 super Yang-Mills. Rather than going loop by loop (and approximating the forces involved as small), it’s a sum of terms that approximate the energy as small. If all of those terms could be added up, we could calculate amplitudes in this theory for any energy and any strength of force.
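Schematically (borrowing the usual notation of the OPE literature, not anything specific to our paper), the expansion is a sum over "flux tube" states $\psi$, each weighted by an exponential that becomes small at low energy:

```latex
\mathcal{W} \;=\; \sum_{\psi} P(0|\psi)\; e^{-E_\psi \tau \,+\, i p_\psi \sigma \,+\, i m_\psi \phi}\; P(\psi|0)
```

Each term in the sum is known exactly in the coupling; the hard part is doing the sum.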

We can’t do that in general (yet). What we can do is bring back the loop by loop approximation, but keep the sum in energy. If we add up that sum, we can check it against the known loop by loop results, and see if our calculation is faster. Along the way, we learn a bit about how these sums add up to give us polylogarithms.
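As a toy version of "sums adding up to polylogarithms" (a deliberately simple illustration, nothing like the size of the actual POPE sums): summing the series $\sum_{n\geq 1} x^n/n^2$ term by term reproduces the dilogarithm $\mathrm{Li}_2(x)$, whose value at $x=1/2$ has a known closed form.

```python
import math

def dilog_series(x, terms=200):
    """Partial sum of the dilogarithm series Li_2(x) = sum_{n>=1} x^n / n^2."""
    return sum(x**n / n**2 for n in range(1, terms + 1))

x = 0.5
approx = dilog_series(x)

# Known closed form: Li_2(1/2) = pi^2/12 - (ln 2)^2 / 2
exact = math.pi**2 / 12 - math.log(2)**2 / 2
print(approx, exact)
```

The real game is the same in outline: recognize an infinite sum as a known (poly)logarithmic function, so the answer can be written in closed form.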

Ho Tat and I have done the first loop. Going further isn’t just a bigger calculation, there are new challenges we’ll have to face. But I think we’ve got a shot at it.