Theoretical Uncertainty and Uncertain Theory

Yesterday, Fermilab’s Muon g-2 experiment announced a new measurement of the magnetic moment of the muon, a number that describes how muons interact with magnetic fields. It might seem like a small technical detail, but physicists have been very excited about this measurement because it’s a detail the Standard Model seems to get wrong, making it a potential hint of new undiscovered particles. Quanta Magazine has a great piece on the announcement, which explains more than I will here, but the upshot is that there are two different calculations on the market that attempt to predict the magnetic moment of the muon. One of them, using older methods, disagrees with the experiment. The other, with a new approach, agrees. The question then becomes: which calculation was wrong? And why?

What does it mean for a prediction to match an experimental result? The simple, wrong, answer is that the numbers must be equal: if you predict “3”, the experiment has to measure “3”. The reason why this is wrong is that in practice, every experiment and every prediction has some uncertainty. If you’ve taken a college physics class, you’ve run into this kind of uncertainty in one of its simplest forms, measurement uncertainty. Measure with a ruler, and you can only confidently measure down to the smallest divisions on the ruler. If you measure 3cm, but your ruler has ticks only down to a millimeter, then what you’re measuring might be as large as 3.1cm or as small as 2.9 cm. You just don’t know.

This uncertainty doesn’t mean you throw up your hands and give up. Instead, you estimate the effect it can have. You report, not a measurement of 3cm, but of 3cm plus or minus 1mm. If the prediction was 2.9cm, then you’re fine: it falls within your measurement uncertainty.

Measurements aren’t the only thing that can be uncertain. Predictions have uncertainty too, theoretical uncertainty. Sometimes, this comes from uncertainty on a previous measurement: if you make a prediction based on that experiment that measured 3cm plus or minus 1mm, you have to take that plus or minus into account and estimate its effect (we call this propagation of errors). Sometimes, the uncertainty comes instead from an approximation you’re making. In particle physics, we sometimes approximate interactions between different particles with diagrams, beginning with the simplest diagrams and adding on more complicated ones as we go. To estimate the uncertainty there, we estimate the size of the diagrams we left out, the more complicated ones we haven’t calculated yet. Other times, that approximation doesn’t work, and we need to use a different approximation, treating space and time as a finite grid where we can do computer simulations. In that case, you can estimate your uncertainty based on how small you made your grid. The new approach to predicting the muon magnetic moment uses that kind of approximation.
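The “propagation of errors” step can be made concrete. For independent measurements, the standard recipe combines uncertainties in quadrature: square each one, add, and take the square root. A minimal sketch in Python, with made-up numbers purely for illustration:

```python
import math

def add_in_quadrature(*sigmas):
    """Combine independent uncertainties: the total is the square root
    of the sum of the squares of the individual uncertainties."""
    return math.sqrt(sum(s * s for s in sigmas))

# Hypothetical example: a length built by adding two independent
# measurements, each with its own uncertainty.
length = 3.0         # cm, measured as 3.0 +/- 0.1
sigma_length = 0.1   # cm
offset = 1.2         # cm, measured as 1.2 +/- 0.05
sigma_offset = 0.05  # cm

total = length + offset
sigma_total = add_in_quadrature(sigma_length, sigma_offset)
print(f"{total:.2f} +/- {sigma_total:.2f} cm")  # -> 4.20 +/- 0.11 cm
```

The key assumption is independence: if the errors were correlated, they could reinforce each other, and this formula would underestimate the total.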

There’s a common thread in all of these uncertainty estimates: you don’t expect to be too far off on average. Your measurements won’t be perfect, but they won’t all be screwed up in the same way either: chances are, they will randomly be a little below or a little above the truth. Your calculations are similar: whether you’re ignoring complicated particle physics diagrams or the spacing in a simulated grid, you can treat the difference as something small and random. That randomness means you can use statistics to talk about your errors: you have statistical uncertainty. When you have statistical uncertainty, you can estimate, not just how far off you might get, but how likely it is you ended up that far off. In particle physics, we have very strict standards for this kind of thing: to call something new a discovery, we demand that it is so unlikely that it would only show up randomly under the old theory roughly one in a million times. The muon magnetic moment isn’t quite up to our standards for a discovery yet, but the new measurement brought it closer.
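Those “strict standards” are usually phrased as a five-sigma threshold: a discovery must sit at least five standard deviations away from the old theory’s prediction. If the errors really are random and roughly Gaussian, translating sigmas into odds takes one line of the Gaussian tail formula (using the one-sided convention standard in particle physics):

```python
import math

def gaussian_tail_probability(n_sigma):
    """One-sided probability that a Gaussian random fluctuation lands
    at least n_sigma standard deviations above the mean."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2))

for n in (1, 3, 5):
    p = gaussian_tail_probability(n)
    print(f"{n} sigma: p = {p:.3g} (about 1 in {1 / p:,.0f})")
```

Five sigma comes out to about one chance in 3.5 million, the number popular accounts round to “one in a million”.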

The two dueling predictions for the muon’s magnetic moment both estimate some amount of statistical uncertainty. It’s possible that the two calculations just disagree due to chance, and that better measurements or a tighter simulation grid would make them agree. Given their estimates, though, that’s unlikely. That takes us from the realm of theoretical uncertainty, and into uncertainty about the theoretical. The two calculations use very different approaches. The new calculation tries to compute things from first principles, using the Standard Model directly. The risk is that such a calculation needs to make assumptions, ignoring some effects that are too difficult to calculate, and one of those assumptions may be wrong. The older calculation is based more on experimental results, using different experiments to estimate effects that are hard to calculate but that should be similar between different situations. The risk is that the situations may be less similar than expected, their assumptions breaking down in a way that the bottom-up calculation could catch.
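To see why “given their estimates, that’s unlikely”, one can put the disagreement itself in units of sigma: divide the gap between the two results by their combined uncertainty. A sketch with placeholder numbers (not the actual g-2 values):

```python
import math

def tension_in_sigma(value_a, sigma_a, value_b, sigma_b):
    """Number of combined standard deviations separating two
    independent results with Gaussian uncertainties."""
    return abs(value_a - value_b) / math.sqrt(sigma_a**2 + sigma_b**2)

# Placeholder numbers chosen for illustration only:
print(tension_in_sigma(0.0, 3.0, 15.0, 4.0))  # -> 3.0, a three-sigma tension
```

A tension of several sigma is unlikely to be a random fluke, which is why the disagreement between the two predictions calls for a deeper explanation.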

None of these risks are easy to estimate. They’re “unknown unknowns”, or rather, “uncertain uncertainties”. And until some of them are resolved, it won’t be clear whether Fermilab’s new measurement is a sign of undiscovered particles, or just a (challenging!) confirmation of the Standard Model.

Is Outreach for Everyone?

Betteridge’s law applies here: the answer is “no”. It’s a subtle “no”, though.

As a scientist, you will always need to be able to communicate your work. Most of the time you can get away with papers and talks aimed at your peers. But the longer you mean to stick around, the more often you will have to justify yourself to others: to departments, to universities, and to grant agencies. A scientist cannot survive on scientific ability alone: to get jobs, to get funding, to survive, you need to be able to promote yourself, at least a little.

Self-promotion isn’t outreach, though. Talking to the public, or to journalists, is a different skill from talking to other academics or writing grants. And it’s entirely possible to go through a whole scientific career without exercising that skill.

That’s a reassuring message for some. I’ve met people for whom science is a refuge from the mess of human interaction, people horrified by the thought of fame or even being mentioned in a newspaper. When I meet these people, they sometimes seem to worry that I’m silently judging them, thinking that they’re ignoring their responsibilities by avoiding outreach. They think this in part because the field seems to be going in that direction. Grants that used to focus just on science have added outreach as a requirement, demanding that each application come with a plan for some outreach project.

I can’t guarantee that more grants won’t add outreach requirements. But I can say at least that I’m on your side here: I don’t think you should have to do outreach if you don’t want to. I don’t think you have to, just yet. And I think if grant agencies are sensible, they’ll find a way to encourage outreach without making it mandatory.

I think that overall, collectively, we have a responsibility to do outreach. Beyond the old arguments about justifying ourselves to taxpayers, we also just ought to be open about what we do. In a world where people are actively curious about us, we ought to encourage and nurture that curiosity. I don’t think this is unique to science, I think it’s something every industry, every hobby, and every community should foster. But in each case, I think that communication should be done by people who want to do it, not forced on every member.

I also think that, potentially, anyone can do outreach. Outreach can take different forms for different people, anything from speaking to high school students to talking to journalists to writing answers for Stack Exchange. I don’t think anyone should feel afraid of outreach because they think they won’t be good enough. Chances are, you know something other people don’t: I guarantee if you want to, you will have something worth saying.

Redefining Fields for Fun and Profit

To study subatomic particles, particle physicists use a theory called Quantum Field Theory. But what is a quantum field?

Some people will describe a field in vague terms, and say it’s like a fluid that fills all of space, or a vibrating rubber sheet. These are all metaphors, and while they can be helpful, they can also be confusing. So let me avoid metaphors, and say something that may be just as confusing: a field is the answer to a question.

Suppose you’re interested in a particle, like an electron. There is an electron field that tells you, at each point, your chance of detecting one of those particles spinning in a particular way. Suppose you’re trying to measure a force, say electricity or magnetism. There is an electromagnetic field that tells you, at each point, what force you will measure.

Sometimes the question you’re asking has a very simple answer: just a single number, for each point and each time. An example of a question like that is the temperature: pick a city, pick a date, and the temperature there and then is just a number. In particle physics, the Higgs field answers a question like that: at each point, and each time, how “Higgs-y” is it there and then? You might have heard that the Higgs field gives other particles their mass: what this means is that the more “Higgs-y” it is somewhere, the higher these particles’ mass will be. The Higgs field is almost constant, because it’s very difficult to get it to change. That’s in some sense what the Large Hadron Collider did when it discovered the Higgs boson: it pushed hard enough to cause a tiny, short-lived ripple in the Higgs field, a small area that was briefly more “Higgs-y” than average.

We like to think of some fields as fundamental, and others as composite. A proton is composite: it’s made up of quarks and gluons. Quarks and gluons, as far as we know, are fundamental: they’re not made up of anything else. More generally, since we’re thinking about fields as answers to questions, we can just as well ask more complicated, “composite” questions. For example, instead of “what is the temperature?”, we can ask “what is the temperature squared?” or “what is the temperature times the Higgs-y-ness?”.

But this raises a troubling point. When we single out a specific field, like the Higgs field, why are we sure that that field is the fundamental one? Why didn’t we start with “Higgs squared” instead? Or “Higgs plus Higgs squared”? Or something even weirder?

The inventor of the Higgs-squared field, Peter Higgs-squared

That kind of swap, from Higgs to Higgs squared, is called a field redefinition. In the math of quantum field theory, it’s something you’re perfectly allowed to do. Sometimes, it’s even a good idea. Other times, it can make your life quite complicated.

The reason why is that some fields are much simpler than others. Some are what we call free fields. Free fields don’t interact with anything else. They just move, rippling along in easy-to-calculate waves.

Redefine a free field, swapping it for some more complicated function, and you can easily screw up, and make it into an interacting field. An interacting field might interact with another field, like how electromagnetic fields move (and are moved by) electrons. It might also just interact with itself, a kind of feedback effect that makes any calculation we’d like to do much more difficult.
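The simplest version of this screw-up can be written out explicitly. Take a free scalar field and substitute a nonlinear redefinition into its Lagrangian (a standard textbook exercise; $\alpha$ here is just an arbitrary parameter):

```latex
% Free scalar field: no interactions, just freely rippling waves
\mathcal{L} = \tfrac{1}{2}\,(\partial_\mu \phi)(\partial^\mu \phi) - \tfrac{1}{2}\, m^2 \phi^2

% Redefine \phi = \chi + \alpha \chi^2 and substitute:
\mathcal{L} = \tfrac{1}{2}\,(\partial_\mu \chi)(\partial^\mu \chi)\,(1 + 2\alpha\chi)^2
            - \tfrac{1}{2}\, m^2 \left(\chi + \alpha \chi^2\right)^2
```

Expanded out, the second line contains cubic and quartic terms in $\chi$, the hallmarks of an interacting field, even though we started from a theory with no interactions at all.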

If we persevere with this perverse choice, and do the calculation anyway, we find a surprise. The final results we calculate, the real measurements people can do, are the same in both theories. The field redefinition changed how the theory appeared, quite dramatically…but it didn’t change the physics.

You might think the moral of the story is that you must always choose the right fundamental field. You might want to, but you can’t: not every field is secretly free. Some will be interacting fields, whatever you do. In that case, you can make one choice or another to simplify your life…but you can also just refuse to make a choice.

That’s something quite a few physicists do. Instead of looking at a theory and calling some fields fundamental and others composite, they treat every one of these fields, every different question they could ask, on the same footing. They then ask, for these fields, what one can measure about them. They can ask which fields travel at the speed of light, and which ones go slower, or which fields interact with which other fields, and how much. Field redefinitions will shuffle the fields around, but the patterns in the measurements will remain. So those, and not the fields, can be used to specify the theory. Instead of describing the world in terms of a few fundamental fields, they think about the world as a kind of field soup, characterized by how it shifts when you stir it with a spoon.

It’s not a perspective everyone takes. If you overhear physicists, sometimes they will talk about a theory with only a few fields, sometimes they will talk about many, and you might be hard-pressed to tell what they’re talking about. But if you keep in mind these two perspectives: either a few fundamental fields, or a “field soup”, you’ll understand them a little better.

Black Holes, Neutron Stars, and the Power of Love

What’s the difference between a black hole and a neutron star?

When a massive star nears the end of its life, it starts running out of nuclear fuel. Without the support of a continuous explosion, the star begins to collapse, crushed under its own weight.

What happens then depends on how much weight that is. The most massive stars collapse completely, into the densest form anything can take: a black hole. Einstein’s equations say a black hole is a single point, infinitely dense: get close enough and nothing, not even light, can escape. A quantum theory of gravity would change this, but not a lot: a quantum black hole would still be as dense as quantum matter can get, still equipped with a similar “point of no return”.

A slightly less massive star collapses, not to a black hole, but to a neutron star. Matter in a neutron star doesn’t collapse to a single point, but it does change dramatically. Each electron in the old star is crushed together with a proton until it becomes a neutron, a forced reversal of the more familiar process of beta decay. Instead of a ball of hydrogen and helium, the star then ends up like a single atomic nucleus, one roughly the size of a city.

Not kidding about the “city” thing…and remember, this is more massive than the Sun

Now, let me ask a slightly different question: how do you tell the difference between a black hole and a neutron star?

Sometimes, you can tell this through ordinary astronomy. Neutron stars do emit light, unlike black holes, though for most neutron stars this is hard to detect. In the past, astronomers would use other objects instead, looking at light from matter falling in, orbiting, or passing by a black hole or neutron star to estimate its mass and size.

Now they have another tool: gravitational wave telescopes. Maybe you’ve heard of LIGO, or its European cousin Virgo: massive machines that do astronomy not with light but by detecting ripples in space and time. In the future, these will be joined by an even bigger setup in space, called LISA. When two black holes or neutron stars collide they “ring” the fabric of space and time like a bell, sending out waves in every direction. By analyzing the frequency of these waves, scientists can learn something about what made them: in particular, whether the waves were made by black holes or neutron stars.

One big difference between black holes and neutron stars lies in something called their “Love numbers”. From far enough away, you can pretend both black holes and neutron stars are single points, like fundamental particles. Try to get more precise, and this picture starts to fail, but if you’re smart you can include small corrections and keep things working. Some of those corrections, called Love numbers, measure how much one object gets squeezed and stretched by the other’s gravitational field. They’re called Love numbers not because they measure how hug-able a neutron star is, but after the mathematician who first proposed them, A. E. H. Love.

What can we learn from Love numbers? Quite a lot. More impressively, there are several different types of questions Love numbers can answer. There are questions about our theories, questions about the natural world, and questions about fundamental physics.

You might have heard that black holes “have no hair”. A black hole in space can be described by just two numbers: its mass, and how much it spins. A star is much more complicated, with sunspots and solar flares and layers of different gases in different amounts. For a black hole, all of that is compressed down to nothing, reduced to just those two numbers and nothing else.

With that in mind, you might think a black hole should have zero Love numbers: it should be impossible to squeeze it or stretch it. This is fundamentally a question about a theory, Einstein’s theory of relativity. If we took that theory for granted, and didn’t add anything to it, what would the consequences be? Would black holes have zero Love number, or not?

It turns out black holes do have zero Love number, if they aren’t spinning. If they are, things are more complicated: a few calculations made it look like spinning black holes also had zero Love number, but just last year a more detailed proof showed that this doesn’t hold. Somehow, despite having “no hair”, you can actually “squeeze” a spinning black hole.

(EDIT: Folks on twitter pointed out a wrinkle here: more recent papers are arguing that spinning black holes actually do have zero Love number as well, and that the earlier papers confused Love numbers with a different effect. All that is to say this is still very much an active area of research!)

The physics behind neutron stars is in principle known, but in practice hard to understand. When they are formed, almost every type of physics gets involved: gas and dust, neutrino blasts, nuclear physics, and general relativity holding it all together.

Because of all this complexity, the structure of neutron stars can’t be calculated from “first principles” alone. Finding it out isn’t a question about our theories, but a question about the natural world. We need to go out and measure how neutron stars actually behave.

Love numbers are a promising way to do that. Love numbers tell you how an object gets squeezed and stretched in a gravitational field. Learning the Love numbers of neutron stars will tell us something about their structure: namely, how squeezable and stretchable they are. Already, LIGO and Virgo have given us some information about this, and ruled out a few possibilities. In future, the LISA telescope will show much more.

Returning to black holes, you might wonder what happens if we don’t stick to Einstein’s theory of relativity. Physicists expect that relativity has to be modified to account for quantum effects, to make a true theory of quantum gravity. We don’t quite know how to do that yet, but there are a few proposals on the table.

Asking for the true theory of quantum gravity isn’t just a question about some specific part of the natural world, it’s a question about the fundamental laws of physics. Can Love numbers help us answer it?

Maybe. Some theorists think that quantum gravity will change the Love numbers of black holes. Fewer, but still some, think they will change enough to be detectable, with future gravitational wave telescopes like LISA. I get the impression this is controversial, both because of the different proposals involved and the approximations used to understand them. Still, it’s fun that Love numbers can answer so many different types of questions, and teach us so many different things about physics.

Unrelated: For those curious about what I look/sound like, I recently gave a talk of outreach advice for the Max Planck Institute for Physics, and they posted it online here.

“Inreach”

This is, first and foremost, an outreach blog. I try to make my writing as accessible as possible, so that anyone from high school students to my grandparents can learn something. My goal is to get the general public to know a bit more about physics, and about the people who do it, both to better understand the world and to view us in a better light.

However, as I am occasionally reminded, my readers aren’t exactly the general public. I’ve done polls, and over 60% of you either have a PhD in physics, or are on your way to one. The rest include people with what one might call an unusually strong interest in physics: engineers with a fondness for the (2,0) theory, or retired lawyers who like to debate dark matter.

With that in mind, am I really doing outreach? Or am I doing some sort of “inreach” instead?

First, it’s important to remember that just because someone is a physicist doesn’t mean they’re an expert in everything. This is especially relevant when I talk about my own sub-field, but it matters for other topics too: experts in one part of physics can still find something to learn, and it’s still worth getting on their good side. Still, if that was my main audience, I’d probably want to strike a different tone, more like the colloquium talks we give for our fellow physicists.

Second, I like to think that outreach “trickles down”. I write for a general audience, and get read by “physics fans”, but they will go on to talk about physics to anyone who will listen: to parents who want to understand what they do, to people they’re trying to impress at parties, to friends they share articles with. If I write good metaphors and clear analogies, they will get passed on to those friends and parents, and the “inreach” will become outreach. I know that’s why I read other physicists’ outreach blogs: I’m looking for new tricks to make ideas clearer.

Third, active readers are not all readers. The people who answer a poll are more likely to be regulars, people who come back to the blog again and again, and those people are pretty obviously interested in physics. (Interested doesn’t mean expert, of course…but in practice, far more non-experts read blogs on, say, military history, than on physics.) But I suspect most of my readers aren’t regulars. My most popular post, “The Way You Think Everything Is Connected Isn’t the Way Everything Is Connected”, gets a trickle of new views every day. WordPress lets me see some of the search terms people use to find it, and there are people who literally google “is everything connected?” These aren’t physics PhDs looking for content, these are members of the general public who hear something strange and confusing and want to check it out. Being that check, the source someone googles to clear things up, that’s an honor. Knowing I’m serving that role, I know I’m not doing “just inreach”: I’m reaching out too.

Reality as an Algebra of Observables

Listen to a physicist talk about quantum mechanics, and you’ll hear the word “observable”. Observables are, intuitively enough, things that can be observed. They’re properties that, in principle, one could measure in an experiment, like the position of a particle or its momentum. They’re the kinds of things linked by uncertainty principles, where the better you know one, the worse you know the other.

Some physicists get frustrated by this focus on measurements alone. They think we ought to treat quantum mechanics, not like a black box that produces results, but as information about some underlying reality. Instead of just observables, they want us to look for “beables”: not just things that can be observed, but things that something can be. From their perspective, the way other physicists focus on observables feels like giving up, like those physicists are abandoning their sacred duty to understand the world. Others, like the Quantum Bayesians or QBists, disagree, arguing that quantum mechanics really is, and ought to be, a theory of how individuals get evidence about the world.

I’m not really going to weigh in on that debate, I still don’t feel like I know enough to even write a decent summary. But I do think that one of the instincts on the “beables” side is wrong. If we focus on observables in quantum mechanics, I don’t think we’re doing anything all that unusual. Even in other parts of physics, we can think about reality purely in terms of observations. Doing so isn’t a dereliction of duty: often, it’s the most useful way to understand the world.

When we try to comprehend the world, we always start alone. From our time in the womb, we have only our senses and emotions to go on. With a combination of instinct and inference we start assembling a consistent picture of reality. Philosophers called phenomenologists (not to be confused with the physicists called phenomenologists) study this process in detail, trying to characterize how different things present themselves to an individual consciousness.

For my point here, these details don’t matter so much. That’s because in practice, we aren’t alone in understanding the world. Based on what others say about the world, we conclude they perceive much like we do, and we learn by their observations just as we learn by our own. We can make things abstract: instead of the specifics of how individuals perceive, we think about groups of scientists making measurements. At the end of this train lie observables: things that we as a community could in principle learn, and share with each other, ignoring the details of how exactly we measure them.

If each of these observables was unrelated, just scattered points of data, then we couldn’t learn much. Luckily, they are related. In quantum mechanics, some of these relationships are the uncertainty principles I mentioned earlier. Others relate measurements at different places, or at different times. The fancy way to refer to all these relationships is as an algebra: loosely, it’s something you can “do algebra with”, like you did with numbers and variables in high school. When physicists and mathematicians want to do quantum mechanics or quantum field theory seriously, they often talk about an “algebra of observables”, a formal way of thinking about all of these relationships.
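The smallest concrete algebra of observables is the spin of a single electron: three measurable quantities (spin along x, y, and z) whose relationships fix the whole structure. A sketch in plain Python, using nested tuples as 2×2 matrices so no outside libraries are needed:

```python
# Pauli matrices: the observables for a single spin-1/2 particle.
# Their commutation relations form the simplest "algebra of observables":
# [sigma_x, sigma_y] = 2i sigma_z, and cyclic permutations.

def mat_mul(a, b):
    """Multiply two 2x2 matrices given as nested tuples."""
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def commutator(a, b):
    """[a, b] = ab - ba, the basic operation of the algebra."""
    ab, ba = mat_mul(a, b), mat_mul(b, a)
    return tuple(
        tuple(ab[i][j] - ba[i][j] for j in range(2)) for i in range(2)
    )

sigma_x = ((0, 1), (1, 0))
sigma_y = ((0, -1j), (1j, 0))
sigma_z = ((1, 0), (0, -1))

# Check that [sigma_x, sigma_y] equals 2i times sigma_z:
lhs = commutator(sigma_x, sigma_y)
rhs = tuple(tuple(2j * entry for entry in row) for row in sigma_z)
print(lhs == rhs)  # -> True
```

That nonzero commutator is exactly the kind of relationship the uncertainty principle encodes: because the observables don’t commute, spin along x and spin along y can’t both be known at once.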

Focusing on those two things, observables and how they are related, isn’t just useful in the quantum world. It’s an important way to think in other areas of physics too. If you’ve heard people talk about relativity, the focus on measurement screams out, in thought experiments full of abstract clocks and abstract yardsticks. Without this discipline, you find paradoxes, only to resolve them when you carefully track what each person can observe. More recently, physicists in my field have had success computing the chance particles collide by focusing on the end result, the actual measurements people can make, ignoring what might happen in between to cause that measurement. We can then break measurements down into simpler measurements, or use the structure of simpler measurements to guess more complicated ones. While we typically have done this in quantum theories, that’s not really a limitation: the same techniques make sense for problems in classical physics, like computing the gravitational waves emitted by colliding black holes.

With this in mind, we really can think of reality in those terms: not as a set of beable objects, but as a set of observable facts, linked together in an algebra of observables. Paring things down to what we can know in this way is more honest, and it’s also more powerful and useful. Far from a betrayal of physics, it’s the best advantage we physicists have in our quest to understand the world.

Poll: How Do You Get Here?

I’ve been digging through the WordPress “stats” page for this blog. One thing WordPress tells me is what links people follow to get here. It tells me how many times people come from Google or Facebook or Twitter, and how many come from seeing a link on another blog. One thing that surprised me is that some of the blogs people come here from haven’t updated in years.

The way I see it, there are two possible explanations. It could be that new people keep checking the old blogs, see a link on their blogroll, and come on over here to check it out. But it could also be the same people over and over, who find it more convenient to start on an old blog and click on links from there.

WordPress doesn’t tell me the difference. But I realized, I can just ask. So in this post, I’m asking all my readers to tell me how you get here. I’m not asking how you found this blog to begin with, but rather how, on a typical day, you navigate to the site. Do you subscribe by email? Do you google the blog’s name every time? RSS reader? Let me know below! And if you don’t see an option that fits you, let me know in the comments!

The Grant-Writing Moment

When a scientist applies for a grant to fund their research, there’s a way it’s supposed to go. The scientist starts out with a clear idea, a detailed plan for an experiment or calculation they’d like to do, and an expectation of what they could learn from it. Then they get the grant, do their experiment or calculation, and make their discovery. The world smiles upon them.

There’s also a famous way it actually goes. Like the other way, the scientist has a clear idea and detailed plan. Then they do their experiment, or calculation, and see what they get, making their discovery. Finally, they write their grant application, proposing to do the experiment they already did. Getting the grant, they then spend the money on their next idea instead, which they will propose only in the next grant application, and so on.

This is pretty shady behavior. But there’s yet another way things can go, one that flips the previous method on its head. And after considering it, you might find the shady method more understandable.

What happens if a scientist is going to run out of funding, but doesn’t yet have a clear idea? Maybe they don’t know enough yet to have a detailed plan for their experiment or their calculation. Maybe they have an idea, but they’re still foggy about what they can learn from it.

Well, they’re still running out of funding. They still have to write that grant. So they start writing. Along the way, they’ll manage to find some of that clarity: they’ll have to write a detailed plan, they’ll have to describe some expected discovery. If all goes well, they tell a plausible story, and they get that funding.

When they actually go do that research, though, there’s no guarantee it sticks to the plan. In fact, it’s almost guaranteed not to: neither the scientist nor the grant committee typically knows what experiment or calculation needs to be done: that’s what makes the proposal novel science in the first place. The result is that once again, the grant proposal wasn’t exactly honest: it didn’t really describe what was actually going to be done.

You can think of these different stories as falling on a sliding scale. On the one end, the scientist may just have the first glimmer of an idea, and their funded research won’t look anything like their application. On the other, the scientist has already done the research, and the funded research again looks nothing like the application. In between there’s a sweet spot, the intended system: late enough that the scientist has a good idea of what they need to do, early enough that they haven’t done it yet.

How big that sweet spot is depends on the pace of the field. If you work in a field with big, complicated experiments, like randomized controlled trials, you can mostly make this work. Your work takes a long time to plan, and requires sticking to that plan, so you can, at least sometimes, do grants “the right way”. The smaller your experiments are though, the more the details can change, and the smaller the window gets. For a field like theoretical physics, if you know exactly what calculation to do, or what proof to write, with no worries or uncertainty…well, you’ve basically done the calculation already. The sweet spot for ethical grant-writing shrinks down to almost a single moment.

In practice, some grant committees understand this. There are grants where you are expected to present preliminary evidence from work you’ve already started, and to discuss the risks your vaguer ideas might face. Grants of this kind recognize that science is a process, and that catching people at that perfect moment is next-to-impossible. They try to assess what the scientist is doing as a whole, not just a single idea.

Scientists ought to be honest about what they’re doing. But grant agencies need to be honest too, about how science in a given field actually works. Hopefully, one enables the other, and we reach a more honest world.

Valentine’s Day Physics Poem 2021

It’s Valentine’s Day this weekend, so time for another physics poem. If you’d like to read the poems from past years, they’re archived with the tag Valentine’s Day Physics Poem, accessible here.

Passion Project

Passion is passion.
  
If you find yourself writing letter after letter,
be they “love”,
or “Physical Review”
  
Or if you are the quiet sort
and notice only in your mind
those questions, time after time
whenever silence reigns:
“how do I make things right?”
  
If you look ahead
and your branching,
             uncertain, 
                   futures,
each so different
still have one
               thing
                      in common.
  
If you could share that desert island, that jail cell,
and count yourself free.
  
You’ve found your star. Now it’s straight on till morning.

A Tale of Two Donuts

I’ve got a new paper up this week, with Hjalte Frellesvig, Cristian Vergu, and Matthias Volk, about the elliptic integrals that show up in Feynman diagrams.

You can think of elliptic integrals as integrals over a torus, a curve shaped like the outer crust of a donut.

Do you prefer your integrals glazed, or with powdered sugar?

Integrals like these are showing up more and more in our field, the subject of bigger and bigger conferences. By now, we think we have a pretty good idea of how to handle them, but there are still some outstanding mysteries to solve.

One such mystery came up in a paper in 2017, by Luise Adams and Stefan Weinzierl. They were working with one of the favorite examples of this community, the so-called sunrise diagram (sunrise being a good time to eat donuts). And they noticed something surprising: if they looked at the sunrise diagram in different ways, it was described by different donuts.

What do I mean, different donuts?

The integrals we know best in this field aren’t integrals on a torus, but rather integrals on a sphere. In some sense, all spheres are the same: you can make them bigger or smaller, but they don’t have different shapes, they’re all “sphere-shaped”. In contrast, integrals on a torus are trickier, because toruses can have different shapes. Think about different donuts: some might have a thin ring, others a thicker one, even if the overall donut is the same size. You can’t just scale up one donut and get the other.

This donut even has a marked point
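To make the donut intuition concrete, here's a toy sketch in Python. This is just the geometric picture from the paragraph above, not the actual mathematics of elliptic curves: a donut's shape can be summarized by the ratio of its two radii, and scaling the whole donut leaves that ratio unchanged.

```python
def shape(R, r):
    """Scale-invariant 'shape' of a donut: R is the distance from the
    center of the hole to the middle of the ring, r is the thickness
    of the ring itself. Scaling multiplies both by the same factor,
    so this ratio never changes."""
    return R / r

thin = shape(4.0, 1.0)         # a donut with a thin ring
thick = shape(4.0, 2.0)        # same overall size, thicker ring
scaled_thin = shape(8.0, 2.0)  # the thin donut, doubled in size

# Scaling reproduces the same shape...
assert scaled_thin == thin
# ...but no amount of scaling turns the thin donut into the thick one.
assert thin != thick
```

For spheres the analogous ratio doesn't exist: a single radius describes the whole thing, so every sphere is a scaled copy of every other.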

My colleague, Cristian Vergu, was annoyed by this. He's the kind of person who trusts mathematics like an old friend, one who would never lead him astray. He thought that there must be one answer, one correct donut, one natural way to represent the sunrise diagram mathematically. I was skeptical; I don't trust mathematics nearly as much as Cristian does. To sort it out, we brought in Hjalte Frellesvig and Matthias Volk, and started trying to write the sunrise diagram every way we possibly could. (Along the way, we threw in another "donut diagram", the double-box, just to see what would happen.)

Rather than getting a zoo of different donuts, we got a surprise: we kept seeing the same two. And in the end, we stumbled upon the answer Cristian was hoping for: one of these two is, in a meaningful sense, the “correct donut”.

What was wrong with the other donut? It turns out that when the original two donuts were found, one of them involved a move that is a bit risky mathematically: combining square roots.

For readers who don’t know what I mean, or why this is risky, let me give a simple example. Everyone else can skip to after the torus gif.

Suppose I am solving a problem, and I find a product of two square roots:

\sqrt{x}\sqrt{x}

I could try combining them under the same square root sign, like so:

\sqrt{x^2}

That works, if x is positive. But now suppose x=-1. Plug negative one into the first expression, and you get,

\sqrt{-1}\sqrt{-1}=i\times i=-1

while in the second,

\sqrt{(-1)^2}=\sqrt{1}=1
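You can check this pitfall directly with Python's complex math library, which uses the standard convention that the square root of -1 is i (written 1j in Python):

```python
import cmath

x = -1

# Evaluate each square root separately, then multiply:
separate = cmath.sqrt(x) * cmath.sqrt(x)  # i * i = -1

# Combine the roots under one square root sign first:
combined = cmath.sqrt(x**2)               # sqrt(1) = 1

print(separate)  # (-1+0j)
print(combined)  # (1+0j)
```

The two answers disagree, exactly as in the equations above: combining square roots silently picks a different branch of the square root function when the argument is negative.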

Torus transforming, please stand by

In this case, it wasn’t as obvious that combining roots would change the donut. It might have been perfectly safe. It took some work to show that indeed, this was the root of the problem. If the roots are instead combined more carefully, then one of the donuts goes away, leaving only the one, true donut.

I’m interested in seeing where this goes, how many different donuts we have to understand and how they might be related. But I’ve also been writing about donuts for the last hour or so, so I’m getting hungry. See you next week!