
A Tale of Two Donuts

I’ve got a new paper up this week, with Hjalte Frellesvig, Cristian Vergu, and Matthias Volk, about the elliptic integrals that show up in Feynman diagrams.

You can think of elliptic integrals as integrals over a torus, a curve shaped like the outer crust of a donut.

Do you prefer your integrals glazed, or with powdered sugar?

Integrals like these are showing up more and more in our field, the subject of bigger and bigger conferences. By now, we think we have a pretty good idea of how to handle them, but there are still some outstanding mysteries to solve.

One such mystery came up in a paper in 2017, by Luise Adams and Stefan Weinzierl. They were working with one of the favorite examples of this community, the so-called sunrise diagram (sunrise being a good time to eat donuts). And they noticed something surprising: if they looked at the sunrise diagram in different ways, it was described by different donuts.

What do I mean, different donuts?

The integrals we know best in this field aren’t integrals on a torus, but rather integrals on a sphere. In some sense, all spheres are the same: you can make them bigger or smaller, but they don’t have different shapes, they’re all “sphere-shaped”. In contrast, integrals on a torus are trickier, because toruses can have different shapes. Think about different donuts: some might have a thin ring, others a thicker one, even if the overall donut is the same size. You can’t just scale up one donut and get the other.

This donut even has a marked point

My colleague, Cristian Vergu, was annoyed by this. He’s the kind of person who trusts mathematics like an old friend, one who would never lead him astray. He thought that there must be one answer, one correct donut, one natural way to represent the sunrise diagram mathematically. I was skeptical; I don’t trust mathematics nearly as much as Cristian does. To sort it out, we brought in Hjalte Frellesvig and Matthias Volk, and started trying to write the sunrise diagram every way we possibly could. (Along the way, we threw in another “donut diagram”, the double-box, just to see what would happen.)

Rather than getting a zoo of different donuts, we got a surprise: we kept seeing the same two. And in the end, we stumbled upon the answer Cristian was hoping for: one of these two is, in a meaningful sense, the “correct donut”.

What was wrong with the other donut? It turns out that when the original two donuts were found, one of them involved a move that is a bit risky mathematically: combining square roots.

For readers who don’t know what I mean, or why this is risky, let me give a simple example. Everyone else can skip to after the torus gif.

Suppose I am solving a problem, and I find a product of two square roots:

\sqrt{x}\sqrt{x}

I could try combining them under the same square root sign, like so:

\sqrt{x^2}

That works, if x is positive. But now suppose x=-1. Plug in negative one to the first expression, and you get,

\sqrt{-1}\sqrt{-1}=i\times i=-1

while in the second,

\sqrt{(-1)^2}=\sqrt{1}=1
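You can check this mismatch numerically in a couple of lines of Python, using the built-in `cmath` module (this little check is my own illustration, not anything from the paper):

```python
import cmath

x = -1

# Keeping the roots separate: sqrt(-1) * sqrt(-1) = i * i = -1
separate = cmath.sqrt(x) * cmath.sqrt(x)

# Combining them first: sqrt((-1)^2) = sqrt(1) = 1
combined = cmath.sqrt(x**2)

print(separate)  # approximately -1
print(combined)  # 1
```

The two expressions agree for positive x, and only disagree once the square root’s branch cut comes into play.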

Torus transforming, please stand by

In this case, it wasn’t as obvious that combining roots would change the donut. It might have been perfectly safe. It took some work to show that indeed, this was the root of the problem. If the roots are instead combined more carefully, then one of the donuts goes away, leaving only the one, true donut.

I’m interested in seeing where this goes, how many different donuts we have to understand and how they might be related. But I’ve also been writing about donuts for the last hour or so, so I’m getting hungry. See you next week!

Physical Intuition From Physics Experience

One of the most mysterious powers physicists claim is physical intuition. Let the mathematicians have their rigorous proofs and careful calculations. We just need to ask ourselves, “Does this make sense physically?”

It’s tempting to chalk this up to bluster, or physicist arrogance. Sometimes, though, a physicist manages to figure out something that stumps the mathematicians. Edward Witten’s work on knot theory is a classic example, where he used ideas from physics, not rigorous proof, to win one of mathematics’ highest honors.

So what is physical intuition? And what is its relationship to proof?

Let me walk you through an example. I recently saw a talk by someone in my field who might be a master of physical intuition. He was trying to learn about what we call Effective Field Theories, theories that are “effectively” true at some energy but don’t include the details of higher-energy particles. He calculated that there are limits to the effect these higher-energy particles can have, just based on simple cause and effect. To explain the calculation to us, he gave a physical example, of coupled oscillators.

Oscillators are familiar problems for first-year physics students. Objects that go back and forth, like springs and pendulums, tend to obey similar equations. Link two of them together (couple them), and the equations get more complicated: work for a second-year student instead of a first-year one. Such a student will notice that coupled oscillators “repel” each other: their frequencies get farther apart than they would be if they weren’t coupled.

Our seminar speaker wanted us to revisit those second-year-student days, in order to understand how different particles behave in Effective Field Theory. Just as the frequencies of the oscillators repel each other, the energies of particles repel each other: the unknown high-energy particles could only push the energies of the lighter particles we can detect lower, not higher.
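If you want to see the oscillator half of that analogy in action, here’s a small sketch with NumPy. The masses and spring constants are illustrative numbers of my own choosing, not anything from the talk; the point is just that the coupled eigenfrequencies end up farther apart than the uncoupled ones:

```python
import numpy as np

m = 1.0               # both masses (illustrative values)
k1, k2 = 1.0, 1.5     # individual spring constants
g = 0.3               # coupling spring between the two oscillators

# Uncoupled frequencies: each oscillator on its own
w1, w2 = np.sqrt(k1 / m), np.sqrt(k2 / m)

# Coupled system: m x'' = -K x, so the omega^2 are eigenvalues of K/m
K = np.array([[k1 + g, -g],
              [-g, k2 + g]])
omega = np.sqrt(np.linalg.eigvalsh(K / m))

print("uncoupled frequency gap:", w2 - w1)              # about 0.22
print("coupled frequency gap:  ", omega[1] - omega[0])  # about 0.32: repelled
```

The same eigenvalue repulsion is what pushes the detectable particles’ energies down in the speaker’s argument.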

This is an example of physical intuition. Examine it, and you can learn a few things about how physical intuition works.

First, physical intuition comes from experience. Using physical intuition wasn’t just a matter of imagining the particles and trying to see what “makes sense”. Instead, it required thinking about similar problems from our experience as physicists: problems that don’t just seem similar on the surface, but are mathematically similar.

Second, physical intuition doesn’t replace calculation. Our speaker had done the math; he hadn’t just made a physical argument. Instead, physical intuition serves two roles: to inspire, and to help remember. Physical intuition can inspire new solutions, suggesting ideas that you go on to check with calculation. In addition to that, it can help your mind sort out what you already know. Without the physical story, we might not have remembered that the low-energy particles have their energies pushed down. With the story though, we had a similar problem to compare, and it made the whole thing more memorable. Human minds aren’t good at holding a giant pile of facts. What they are good at is holding narratives. “Physical intuition” ties what we know into a narrative, building on past problems to understand new ones.

Finally, physical intuition can be risky. If the problem is too different then the intuition can lead you astray. The mathematics of coupled oscillators and Effective Field Theories was similar enough for this argument to work, but if it turned out to be different in an important way then the intuition would have backfired, making it harder to find the answer and harder to keep track once it was found.

Physical intuition may seem mysterious. But deep down, it’s just physicists using our experience, comparing similar problems to help keep track of what we need to know. I’m sure chemists, biologists, and mathematicians all have similar stories to tell.

Inevitably Arbitrary

Physics is universal…or at least, it aspires to be. Drop an apple anywhere on Earth, at any point in history, and it will accelerate at roughly the same rate. When we call something a law of physics, we expect it to hold everywhere in the universe. It shouldn’t depend on anything arbitrary.

Sometimes, though, something arbitrary manages to sneak in. Even if the laws of physics are universal, the questions we want to answer are not: they depend on our situation, on what we want to know.

The simplest example is when we have to use units. The mass of an electron is the same here as it is on Alpha Centauri, the same now as it was when the first galaxies formed. But what is that mass? We could write it as 9.1093837015×10⁻³¹ kilograms, if we wanted to, but kilograms aren’t exactly universal. Their modern definition is at least based on physical constants, but with some pretty arbitrary numbers. It defines the Planck constant as 6.62607015×10⁻³⁴ Joule-seconds. Chase that number back, and you’ll find references to the Earth’s circumference and the time it takes to turn round on its axis. The mass of the electron may be the same on Alpha Centauri, but they’d never write it as 9.1093837015×10⁻³¹ kilograms.

Units aren’t the only time physics includes something arbitrary. Sometimes, like with units, we make a choice of how we measure or calculate something. We choose coordinates for a plot, a reference frame for relativity, a zero for potential energy, a gauge for gauge theories and regularization and subtraction schemes for quantum field theory. Sometimes, the choice we make is instead what we measure. To do thermodynamics we must choose what we mean by a state, to call two substances water even if their atoms are in different places. Some argue a perspective like this is the best way to think about quantum mechanics. In a different context, I’d argue it’s why we say coupling constants vary with energy.

So what do we do, when something arbitrary sneaks in? We have a few options. I’ll illustrate each with the mass of the electron:

  • Make an arbitrary choice, and stick with it: There’s nothing wrong with measuring an electron in kilograms, if you’re consistent about it. You could even use ounces. You just have to make sure that everyone else you compare with is using the same units, or be careful to convert.
  • Make a “natural” choice: Why not set the speed of light and Planck’s constant to one? They come up a lot in particle physics, and all they do is convert between length and time, or time and energy. That way you can use the same units for all of them, and use something convenient, like electron-Volts. They even have electron in the name! Of course they also have “Volt” in the name, and Volts are as arbitrary as any other metric unit. A “natural” choice might make your life easier, but you should always remember it’s still arbitrary.
  • Make an efficient choice: This isn’t always the same as the “natural” choice. The units you choose have an effect on how difficult your calculation is. Sometimes, the best choice for the mass of an electron is “one electron-mass”, because it lets you calculate something else more easily. This is easier to illustrate with other choices: for example, if you have to pick a reference frame for a collision, picking one in which one of the objects is at rest, or where they move symmetrically, might make your job easier.
  • Stick to questions that aren’t arbitrary: No matter what units we use, the electron’s mass will be arbitrary. Its ratios to other masses won’t be though. No matter where we measure, dimensionless ratios like the mass of the muon divided by the mass of the electron, or the mass of the electron divided by the value of the Higgs field, will be the same. If we can make sure to ask only this kind of question, we can avoid arbitrariness. Note that we can think of even a mass in “kilograms” as this kind of question: what’s the ratio of the mass of the electron to “this arbitrary thing we’ve chosen”? In practice though, you want to compare things in the same theory, without the historical baggage of metric.
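To make the “natural” choice a bit more concrete, here’s a sketch of converting the electron’s mass from kilograms to electron-Volts, using E = mc² and the defined value of the elementary charge (the code is my own illustration):

```python
m_e = 9.1093837015e-31   # electron mass, kilograms
c = 299792458.0          # speed of light, m/s (exact by definition)
e = 1.602176634e-19      # elementary charge: joules per electron-Volt

# E = m c^2 gives the rest energy in joules; dividing by e converts to eV
m_e_eV = m_e * c**2 / e
print(f"electron mass: {m_e_eV / 1e6:.4f} MeV")  # about 0.5110 MeV
```

Same electron, same physics: all that changed is which arbitrary thing we compare it to.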

This problem may seem silly, and if we just cared about units it might be. But at the cutting-edge of physics there are still areas where the arbitrary shows up. Our choices of how to handle it, or how to avoid it, can be crucial to further progress.

QCD Meets Gravity 2020, Retrospective

I was at a Zoomference last week, called QCD Meets Gravity, about the many ways gravity can be thought of as the “square” of other fundamental forces. I didn’t have time to write much about the actual content of the conference, so I figured I’d say a bit more this week.

A big theme of this conference, as in the past few years, was gravitational waves. Ever since LIGO’s first announcement of a successful detection, amplitudeologists have been developing new methods to make predictions for gravitational waves more efficient. It’s a field I’ve dabbled in a bit myself. Last year’s QCD Meets Gravity left me impressed by how much progress had been made, with amplitudeologists already solidly part of the conversation and able to produce competitive results. This year felt like another milestone, in that the amplitudeologists weren’t just catching up with other gravitational wave researchers on the same kinds of problems. Instead, they found new questions that amplitudes are especially well-suited to answer. These included using an old quantum field theory trick to combine two pieces of these calculations (“potential” and “radiation”) that the older community typically has to calculate separately, finding the gravitational wave directly from amplitudes, and finding a few nice calculations that can be used to “generate” the rest.

A large chunk of the talks focused on different “squaring” tricks (or as we actually call them, double-copies). There were double-copies for cosmology and conformal field theory, for the celestial sphere, and even some version of M theory. There were new perspectives on the double-copy, new building blocks and algebraic structures that lie behind it. There were talks on the so-called classical double-copy for space-times, where there have been some strange discoveries (an extra dimension made an appearance) but also a more rigorous picture of where the whole thing comes from, using twistor space. There were not one, but two talks linking the double-copy to the Navier-Stokes equation describing fluids, from two different groups. (I’m really curious whether these perspectives are actually useful for practical calculations about fluids, or just fun to think about.) Finally, while there wasn’t a talk scheduled on this paper, the authors were roped in by popular demand to talk about their work. They claim to have made progress on a longstanding puzzle, how to show that double-copy works at the level of the Lagrangian, and the community was eager to dig into the details.

From there, a grab-bag of talks covered other advancements. There were talks from string theorists and ambitwistor string theorists, from Effective Field Theorists working on gravity and the Standard Model, from calculations in N=4 super Yang-Mills, QCD, and scalar theories. Simon Caron-Huot delved into how causality constrains the theories we can write down, showing an interesting case where the common assumption that all parameters are close to one is actually justified. Nima Arkani-Hamed began his talk by saying he’d surprise us, which he certainly did (and not by keeping on time). It’s tricky to explain why his talk was exciting. Comparing to his earlier discovery of the Amplituhedron, which worked for a toy model, this is a toy calculation in a toy model. While the Amplituhedron wasn’t based on Feynman diagrams, this can’t even be compared with Feynman diagrams. Instead of expanding in a small coupling constant, this expands in a parameter that by all rights should be equal to one. And instead of positivity conditions, there are negativity conditions. All I can say is that with all of that in mind, it looks like real progress on an important and difficult problem from a totally unanticipated direction. In a speech summing up the conference, Zvi Bern mentioned a few exciting words from Nima’s talk: “nonplanar”, “integrated”, “nonperturbative”. I’d add “differential equations” and “infinite sums of ladder diagrams”. Nima and collaborators are trying to figure out what happens when you sum up all of the Feynman diagrams in a theory. I’ve made progress in the past for diagrams with one “direction”, a ladder that grows as you add more loops, but I didn’t know how to add “another direction” to the ladder. In very rough terms, Nima and collaborators figured out how to add that direction.

I’ve probably left things out here, it was a packed conference! It’s been really fun seeing what the community has cooked up, and I can’t wait to see what happens next.

QCD Meets Gravity 2020

I’m at another Zoom conference this week, QCD Meets Gravity. This year it’s hosted by Northwestern.

The view of the campus from wonder.me

QCD Meets Gravity is a conference series focused on the often-surprising links between quantum chromodynamics on the one hand and gravity on the other. By thinking of gravity as the “square” of forces like the strong nuclear force, researchers have unlocked new calculation techniques and deep insights.

Last year’s conference was very focused on one particular topic, trying to predict the gravitational waves observed by LIGO and VIRGO. That’s still a core topic of the conference, but it feels like there is a bit more diversity in topics this year. We’ve seen a variety of talks on different “squares”: new theories that square to other theories, and new calculations that benefit from “squaring” (even surprising applications to the Navier-Stokes equation!). There are talks on subjects from String Theory to Effective Field Theory, and even a talk on a very different way that “QCD meets gravity”: in collisions of neutron stars.

With still a few more talks to go, expect me to say a bit more next week, probably discussing a few in more detail. (Several people presented exciting work in progress!) Until then, I should get back to watching!

What You Don’t Know, You Can Parametrize

In physics, what you don’t know can absolutely hurt you. If you ignore that planets have their own gravity, or that metals conduct electricity, you’re going to calculate a lot of nonsense. At the same time, as physicists we can’t possibly know everything. Our experiments are never perfect, our math never includes all the details, and even our famous Standard Model is almost certainly not the whole story. Luckily, we have another option: instead of ignoring what we don’t know, we can parametrize it, and estimate its effect.

Estimating the unknown is something we physicists have done since Newton. You might think Newton’s big discovery was the inverse-square law for gravity, but others at the time, like Robert Hooke, had also been thinking along those lines. Newton’s big discovery was that gravity was universal: that you need to know the effect of gravity, not just from the sun, but from all the other planets as well. The trouble was, Newton didn’t know how to calculate the motion of all of the planets at once (in hindsight, we know he couldn’t have). Instead, he estimated, using what he knew to guess how big the effect of what he didn’t would be. It was the accuracy of those guesses, not just the inverse square law by itself, that convinced the world that Newton was right.

If you’ve studied electricity and magnetism, you get to the point where you can do simple calculations with a few charges in your sleep. The world doesn’t have just a few charges, though: it has many charges, protons and electrons in every atom of every object. If you had to keep all of them in your calculations you’d never pass freshman physics, but luckily you can once again parametrize what you don’t know. Often you can hide those charges away, summarizing their effects with just a few numbers. Other times, you can treat materials as boundaries, and summarize everything beyond in terms of what happens on the edge. The equations of the theory let you do this, but this isn’t true for every theory: for the Navier-Stokes equation, which we use to describe fluids, it still isn’t known whether you can do this kind of trick.

Parametrizing what we don’t know isn’t just a trick for college physics, it’s key to the cutting edge as well. Right now we have a picture for how all of particle physics works, called the Standard Model, but we know that picture is incomplete. There are a million different theories you could write to go beyond the Standard Model, with a million different implications. Instead of having to use all those theories, physicists can summarize them all with what we call an effective theory: one that keeps track of the effect of all that new physics on the particles we already know. By summarizing those effects with a few parameters, we can see what they would have to be to be compatible with experimental results, ruling out some possibilities and suggesting others.

In a world where we never know everything, there’s always something that can hurt us. But if we’re careful and estimate what we don’t know, if we write down numbers and parameters and keep our options open, we can keep from getting burned. By focusing on what we do know, we can still manage to understand the world.

At “Antidifferentiation and the Calculation of Feynman Amplitudes”

I was at a conference this week, called Antidifferentiation and the Calculation of Feynman Amplitudes. The conference is a hybrid kind of affair: I attended via Zoom, but there were seven or so people actually there in the room (the room in question being at DESY Zeuthen, near Berlin).

The road to this conference was a bit of a roller-coaster. It was originally scheduled for early March. When the organizers told us they were postponing it, the decision seemed to me at the time a little overcautious…until the world proved me, and all of us, wrong. They rescheduled for October, and as more European countries got their infection rates down, it looked like the conference could actually happen. We booked rooms at the DESY guest house, until it turned out they needed the space to keep the DESY staff socially distanced, and we quickly switched to booking at a nearby hotel.

Then Europe’s second wave hit. Cases in Denmark started to rise, so Germany imposed a quarantine on entry from Copenhagen and I switched to remote participation. Most of the rest of the participants did too, even several in Germany. For the few still there in person they have a variety of measures to stop infection, from fixed seats in the conference room to gloves for the coffee machine.

The content has been interesting. It’s an eclectic mix of review talks and talks on recent research, all focused on different ways to integrate (or, as one of the organizers emphasized, antidifferentiate) functions in quantum field theory. I’ve learned about the history of the field, and gotten a better feeling for the bottlenecks in some LHC-relevant calculations.

This week was also the announcement of the Physics Nobel Prize. I’ll do my traditional post on it next week, but for now, congratulations to Penrose, Genzel, and Ghez!

The Multiverse You Can Visit Is Not the True Multiverse

I don’t want to be the kind of science blogger who constantly complains about science fiction, but sometimes I can’t help myself.

When I blogged about zero-point energy a few weeks back, there was a particular book that set me off. Ian McDonald’s River of Gods depicts the interactions of human and AI agents in a fragmented 2047 India. One subplot deals with a power company pursuing zero-point energy, using an imagined completion of M theory called M* theory. This post contains spoilers for that subplot.

What frustrated me about River of Gods is that the physics in it almost makes sense. It isn’t just an excuse for magic, or a standard set of tropes. Even the name “M* theory” is extremely plausible, the sort of term that could get used for technical reasons in a few papers and get accidentally stuck as the name of our fundamental theory of nature. But because so much of the presentation makes sense, it’s actively frustrating when it doesn’t.

The problem is the role the landscape of M* theory plays in the story. The string theory (or M theory) landscape is the space of all consistent vacua, a list of every consistent “default” state the world could have. In the story, one of the AIs is trying to make a portal to somewhere else in the landscape, a world of pure code where AIs can live in peace without competing with humans.

The problem is that the landscape is not actually a real place in string theory. It’s a metaphorical mathematical space, a list organized by some handy coordinates. The other vacua, the other “default states”, aren’t places you can travel to; they’re just other ways the world could have been.

Ok, but what about the multiverse?

There are physicists out there who like to talk about multiple worlds. Some think they’re hypothetical, others argue they must exist. Sometimes they’ll talk about the string theory landscape. But to get a multiverse out of the string theory landscape, you need something else as well.

Two options for that “something else” exist. One is called eternal inflation, the other is the many-worlds interpretation of quantum mechanics. And neither lets you travel around the multiverse.

In eternal inflation, the universe is expanding faster and faster. It’s expanding so fast that, in most places, there isn’t enough time for anything complicated to form. Occasionally, though, due to quantum randomness, a small part of the universe expands a bit more slowly: slow enough for stars, planets, and maybe life. Each small part like that is its own little “Big Bang”, potentially with a different “default” state, a different vacuum from the string landscape. If eternal inflation is true then you can get multiple worlds, but they’re very far apart, and getting farther every second: not easy to visit.

The many-worlds interpretation is a way to think about quantum mechanics. One way to think about quantum mechanics is to say that quantum states are undetermined until you measure them: a particle could be spinning left or right, Schrödinger’s cat could be alive or dead, and only when measured is their state certain. The many-worlds interpretation offers a different way: by doing away with measurement, it instead keeps the universe in the initial “undetermined” state. The universe only looks determined to us because of our place in it: our states become entangled with those of particles and cats, so that our experiences only correspond to one determined outcome, the “cat alive branch” or the “cat dead branch”. Combine this with the string landscape, and our universe might have split into different “branches” for each possible stable state, each possible vacuum. But you can’t travel to those places, your experiences are still “just on one branch”. If they weren’t, many-worlds wouldn’t be an interpretation, it would just be obviously wrong.

In River of Gods, the AI manipulates a power company into using a particle accelerator to make a bubble of a different vacuum in the landscape. Surprisingly, that isn’t impossible. Making a bubble like that is a bit like what the Large Hadron Collider does, but on a much larger scale. When the Large Hadron Collider detected a Higgs boson, it had created a small ripple in the Higgs field, a small deviation from its default state. One could imagine a bigger ripple doing more: with vastly more energy, maybe you could force the Higgs all the way to a different default, a new vacuum in its landscape of possibilities.

Doing that doesn’t create a portal to another world, though. It destroys our world.

That bubble of a different vacuum isn’t another branch of quantum many-worlds, and it isn’t a far-off big bang from eternal inflation. It’s a part of our own universe, one with a different “default state” where the particles we’re made of can’t exist. And typically, a bubble like that spreads at the speed of light.

In the story, they have a way to stabilize the bubble, stop it from growing or shrinking. That’s at least vaguely believable. But it means that their “portal to another world” is just a little bubble in the middle of a big expensive device. Maybe the AI can live there happily…until the humans pull the plug.

Or maybe they can’t stabilize it, and the bubble spreads and spreads at the speed of light destroying everything. That would certainly be another way for the AI to live without human interference. It’s a bit less peaceful than advertised, though.

Which Things Exist in Quantum Field Theory

If you ever think metaphysics is easy, learn a little quantum field theory.

Someone asked me recently about virtual particles. When talking to the public, physicists sometimes explain the behavior of quantum fields with what they call “virtual particles”. They’ll describe forces coming from virtual particles going back and forth, or a bubbling sea of virtual particles and anti-particles popping out of empty space.

The thing is, this is a metaphor. What’s more, it’s a metaphor for an approximation. As physicists, when we draw diagrams with more and more virtual particles, we’re trying to use something we know how to calculate with (particles) to understand something tougher to handle (interacting quantum fields). Virtual particles, at least as you’re probably picturing them, don’t really exist.

I don’t really blame physicists for talking like that, though. Virtual particles are a metaphor, sure, a way to talk about a particular calculation. But so is basically anything we can say about quantum field theory. In quantum field theory, it’s pretty tough to say which things “really exist”.

I’ll start with an example, neutrino oscillation.

You might have heard that there are three types of neutrinos, corresponding to the three “generations” of the Standard Model: electron-neutrinos, muon-neutrinos, and tau-neutrinos. Each is produced in particular kinds of reactions: electron-neutrinos, for example, get produced by beta-plus decay, when a proton turns into a neutron, an anti-electron, and an electron-neutrino.

Leave these neutrinos alone though, and something strange happens. Detect what you expect to be an electron-neutrino, and it might have changed into a muon-neutrino or a tau-neutrino. The neutrino oscillated.

Why does this happen?

One way to explain it is to say that electron-neutrinos, muon-neutrinos, and tau-neutrinos don’t “really exist”. Instead, what really exists are neutrinos with specific masses. These don’t have catchy names, so let’s just call them neutrino-one, neutrino-two, and neutrino-three. What we think of as electron-neutrinos, muon-neutrinos, and tau-neutrinos are each some mix (a quantum superposition) of these “really existing” neutrinos, specifically the mixes that interact nicely with electrons, muons, and tau leptons respectively. When you let them travel, it’s these neutrinos that do the traveling, and due to quantum effects that I’m not explaining here you end up with a different mix than you started with.
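For those who want a taste of the quantum effects I’m glossing over: in a simplified two-flavor version, the whole story boils down to one standard formula, with the mass states picking up different phases as they travel. The numbers below are illustrative choices of my own, roughly solar-neutrino scale:

```python
import numpy as np

theta = 0.59   # mixing angle between the two "really existing" mass states
dm2 = 7.5e-5   # difference of squared masses, in eV^2
L = 15000.0    # distance traveled, in km
E = 0.001      # neutrino energy, in GeV

# Standard two-flavor oscillation probability; the 1.27 bundles up
# hbar, c, and the unit conversions for eV^2, km, and GeV
P_change = np.sin(2 * theta)**2 * np.sin(1.27 * dm2 * L / E)**2
P_stay = 1.0 - P_change

print(f"chance the electron-neutrino has changed flavor: {P_change:.2f}")
```

Notice that everything depends on the mass states through dm2: it’s the mass-neutrinos that do the traveling.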

This probably seems like a perfectly reasonable explanation. But it shouldn’t. Because if you take one of these mass-neutrinos, and let it interact with an electron, or a muon, or a tau, then suddenly it behaves like a mix of the old electron-neutrinos, muon-neutrinos, and tau-neutrinos.

That’s because both explanations are trying to chop the world up in a way that can’t be done consistently. There aren’t electron-neutrinos, muon-neutrinos, and tau-neutrinos, and there aren’t neutrino-ones, neutrino-twos, and neutrino-threes. There’s a mathematical object (a vector space) that can look like either.

Whether you’re comfortable with that depends on whether you think of mathematical objects as “things that exist”. If you aren’t, you’re going to have trouble thinking about the quantum world. Maybe you want to take a step back, and say that at least “fields” should exist. But that still won’t do: we can redefine fields, add them together or even use more complicated functions, and still get the same physics. The kinds of things that exist can’t be like this. Instead you end up invoking another kind of mathematical object, equivalence classes.

If you want to be totally rigorous, you have to go a step further. You end up thinking of physics in a very bare-bones way, as the set of all observations you could perform. Instead of describing the world in terms of “these things” or “those things”, the world is a black box, and all you’re doing is finding patterns in that black box.

Is there a way around this? Maybe. But it requires thought, and serious philosophy. It’s not intuitive, it’s not easy, and it doesn’t lend itself well to 3D animations in documentaries. So in practice, whenever anyone tells you about something in physics, you can be pretty sure it’s a metaphor. Nice, describable, non-mathematical things typically don’t exist.

To Elliptics and Beyond!

I’ve been busy running a conference this week, Elliptics and Beyond.

After Amplitudes was held online this year, a few of us at the Niels Bohr Institute were inspired. We thought this would be the perfect time to hold a small online conference, focused on the Calabi-Yaus that have been popping up lately in Feynman diagrams. Then we heard from the organizers of Elliptics 2020. They had been planning to hold a conference in Mainz about elliptic integrals in Feynman diagrams, but had to postpone it due to the pandemic. We decided to team up and hold a joint conference on both topics: the elliptic integrals that are just starting to be understood, and the mysterious integrals that lie beyond. Hence, Elliptics and Beyond.

I almost suggested Buzz Lightyear for the logo but I chickened out

The conference has been fun thus far. There’s been a mix of review material, bringing people up to speed on elliptic integrals, and exciting new developments. Some speakers are taking methods that have been successful in other areas and generalizing them to elliptic integrals; others have been honing techniques for elliptics to make them “production-ready”. A few are looking ahead even further, to higher-genus amplitudes in string theory and Calabi-Yaus in Feynman diagrams.

We organized the conference along similar lines to Zoomplitudes, but with a few experiments of our own. Like Zoomplitudes, we made a Slack space for the conference, so people could chat physics outside the talks. Ours was less active, though. I suspect that kind of space needs a critical mass of people, and with a smaller conference we may just not have gotten there. Having fewer people did allow us a more relaxed schedule, which in turn meant we could mostly keep things on time. We had discussion sessions in the morning (European time), with talks in the afternoon, so almost everyone could at least make the talks. We also had a “conference dinner”, which went much better than I would have expected. We put people randomly into Zoom Breakout Rooms of five or six, to emulate the tables of an in-person conference, and folks chatted while eating their (self-brought, of course) dinner. People seemed to really enjoy the chance to just chat casually with the other folks at the conference. If you’re organizing an online conference soon, I’d recommend trying it!

Holding a conference online means that a lot of people can attend who otherwise couldn’t. We had over a hundred people register, and while not all of them showed up there were typically fifty or sixty people on the Zoom session. Some of these were specialists in elliptics or Calabi-Yaus who wouldn’t ordinarily make it to a conference like this. Others were people from the rest of the amplitudes field who joined for the parts of the conference that caught their eye. But surprisingly many weren’t even amplitudeologists: they were students and young researchers in a variety of topics from all over the world. Some seemed curious and eager to learn; others, I suspect, just needed to say they had been to a conference. Both groups are responding to a situation where suddenly conference after conference is available online, free to join. It will be interesting to see if, and how, the world adapts.