# This Is What an Exponential Feels Like

Most people, when they picture exponential growth, think of speed. They think of something going faster and faster, more and more out of control. But in the beginning, exponential growth feels slow. A little bit leads to a little bit more, leads to a little bit more. It sneaks up on you.

When the first cases of COVID-19 were observed in China in December, I didn’t hear about it. If it was in the news, it wasn’t news I read.

I’d definitely heard about it by the end of January. A friend of mine had just gotten back from a trip to Singapore. At the time, Singapore had a few cases from China, but no local transmission. She decided to work from home for two weeks anyway, just to be safe. The rest of us chatted over tea at work, shocked at the measures China was taking to keep the virus under control.

Italy reached our awareness in February. My Italian friends griped and joked about the situation. Denmark’s first case was confirmed on February 27, a traveler returning from Italy. He was promptly quarantined.

I was scheduled to travel on March 8, to a conference in Hamburg. On March 2, six days before, the organizers decided to postpone. I was surprised: Hamburg is on the opposite side of Germany from Italy.

That week, my friend who went to Singapore worked from home again. This time, she wasn’t worried that she had brought the virus back from Singapore: she was worried she might pick it up in Denmark. I was surprised: with so few cases (23 by March 6) in a country with a track record of thorough quarantines, I didn’t think we had anything to worry about. She disagreed. She remembered what happened in Singapore.

That was Saturday, March 7. Monday evening, she messaged me again. The number of cases had risen to 90. Copenhagen University asked everyone who traveled to a “high-risk” region to stay home for fourteen days.

On Wednesday, the university announced new measures. They shut down social events, large meetings, and work-related travel. Classes continued, but students were asked to sit as far as possible from each other. The Niels Bohr Institute was stricter: employees were asked to work from home, and classes were moved online. The canteen would stay open, but would only sell packaged food.

The new measures lasted a day. On Thursday, the government of Denmark announced a lockdown, starting Friday. Schools were closed for two weeks, and public sector employees were sent to work from home. On Saturday, they closed the borders. There were 836 confirmed cases.

Exponential growth is the essence of life…but not of daily life. It’s been eerie, seeing the world around me change little by little and then lots by lots. I’m not worried for my own health. I’m staying home regardless. I know now what an exponential feels like.

P.S.: This blog has a no-politics policy. Please don’t comment on what different countries or politicians should be doing, or who you think should be blamed. Viruses have enough effect on the world right now, let’s keep viral arguments out of the comment section.

# Understanding Is Translation

Kernighan’s Law states, “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” People sometimes make a similar argument about philosophy of mind: “The attempt of the mind to analyze itself [is] an effort analogous to one who would lift himself by his own bootstraps.”

Both points operate on a shared kind of logic. They picture understanding something as modeling it in your mind, with every detail clear. If you’ve already used all your mind’s power to design code, you won’t have anything left over to model what happens when it goes wrong. And modeling your own mind is clearly nonsense: you would need an even larger mind to hold the model.

The trouble is, this isn’t really how understanding works. To understand something, you don’t need to hold a perfect model of it in your head. Instead, you translate it into something you can more easily work with. Like explanations, these translations can be different for different people.

To understand something, I need to know the algorithm behind it. I want to know how to calculate it, the pieces that go in and where they come from. I want to code it up, to test it out on odd cases and see how it behaves, to get a feel for what it can do.

Others need a more physical picture. They need to know where the particles are going, or how energy and momentum are conserved. They want entropy to be increased, action to be minimized, scales to make sense dimensionally.

Others in turn are more mathematical. They want to start with definitions and axioms. To understand something, they want to see it as an example of a broader class of thing, groups or algebras or categories, to fit it into a bigger picture.

Each of these is a kind of translation, turning something into code-speak or physics-speak or math-speak. They don’t require modeling every detail, but when done well they can still explain every detail.

So while yes, it is good practice not to write code that is too “smart”, and too hard to debug…it’s not impossible to debug your smartest code. And while you can’t hold an entire mind inside of yours, you don’t actually need to do that to understand the brain. In both cases, all you need is a translation.

# You Can’t Anticipate a Breakthrough

As a scientist, you’re surrounded by puzzles. For every test and every answer, ten new questions pop up. You can spend a lifetime on question after question, never getting bored.

But which questions matter? If you want to change the world, if you want to discover something deep, which questions should you focus on? And which should you ignore?

Last year, my collaborators and I completed a long, complicated project. We were calculating the chance fundamental particles bounce off each other in a toy model of nuclear forces, pushing to very high levels of precision. We managed to figure out a lot, but as always, there were many questions left unanswered in the end.

The deepest of these questions came from number theory. We had noticed surprising patterns in the numbers that showed up in our calculation, reminiscent of the fancifully-named Cosmic Galois Theory. Certain kinds of numbers never showed up, while others appeared again and again. In order to see these patterns, though, we needed an unusual fudge factor: an unexplained number multiplying our result. It was clear that there was some principle at work, a part of the physics intimately tied to particular types of numbers.

There were also questions that seemed less deep. In order to compute our result, we compared to predictions from other methods: specific situations where the question becomes simpler and there are other ways of calculating the answer. As we finished writing the paper, we realized we could do more with some of these predictions. There were situations we didn’t use that nonetheless simplified things, and more predictions that it looked like we could make. By the time we saw these, we were quite close to publishing, so most of us didn’t have the patience to follow these new leads. We just wanted to get the paper out.

At the time, I expected the new predictions would lead, at best, to more efficiency. Maybe we could have gotten our result faster, or cleaned it up a bit. They didn’t seem essential, and they didn’t seem deep.

Fast forward to this year, and some of my collaborators (specifically, Lance Dixon and Georgios Papathanasiou, along with Benjamin Basso) have a new paper up: “The Origin of the Six-Gluon Amplitude in Planar N=4 SYM”. The “origin” in their title refers to one of those situations: when the variables in the problem are small, and you’re close to the “origin” of a plot in those variables. But the paper also sheds light on the origin of our calculation’s mysterious “Cosmic Galois” behavior.

It turns out that the origin (of the plot) can be related to another situation, when the paths of two particles in our calculation almost line up. There, the calculation can be done with another method, called the Pentagon Operator Product Expansion, or POPE. By relating the two, Basso, Dixon, and Papathanasiou were able to predict not only how our result should have behaved near the origin, but how more complicated as-yet un-calculated results should behave.

The biggest surprise, though, lurked in the details. Building their predictions from the POPE method, they found their calculation separated into two pieces: one which described the physics of the particles involved, and a “normalization”. This normalization, predicted by the POPE method, involved some rather special numbers…the same as the “fudge factor” we had introduced earlier! Somehow, the POPE’s physics-based setup “knows” about Cosmic Galois Theory!

It seems that, by studying predictions in this specific situation, Basso, Dixon, and Papathanasiou have accomplished something much deeper: a strong hint of where our mysterious number patterns come from. It’s rather humbling to realize that, were I in their place, I never would have found this: I had assumed “the origin” was just a leftover detail, perhaps useful but not deep.

I’m still digesting their result. For now, it’s a reminder that I shouldn’t try to pre-judge questions. If you want to learn something deep, it isn’t enough to sit thinking about it, just focusing on that one problem. You have to follow every lead you have, work on every problem you can, do solid calculation after solid calculation. Sometimes, you’ll just make incremental progress, just fill in the details. But occasionally, you’ll have a breakthrough, something that justifies the whole adventure and opens your door to something strange and new. And I’m starting to think that when it comes to breakthroughs, that’s always been the only way there.

# Math Is the Art of Stating Things Clearly

Why do we use math?

In physics we describe everything, from the smallest of particles to the largest of galaxies, with the language of mathematics. Why should that one field be able to describe so much? And why don’t we use something else?

The truth is, this is a trick question. Mathematics isn’t a language like English or French, where we can choose whichever translation we want. We use mathematics because it is, almost by definition, the best choice. That is because mathematics is the art of stating things clearly.

An infinite number of mathematicians walk into a bar. The first orders a beer. The second orders half a beer. The third orders a quarter. The bartender stops them, pours two beers, and says “You guys should know your limits.”

That was an (old) joke about infinite series of numbers. You probably learned in high school that if you add up one plus a half plus a quarter…you eventually get two. To be a bit more precise:

$\sum_{i=0}^\infty \frac{1}{2^i} = 1+\frac{1}{2}+\frac{1}{4}+\ldots=2$

We say that this infinite sum limits to two.

But what does it actually mean for an infinite sum to limit to a number? What does it mean to sum infinitely many numbers, let alone infinitely many beers ordered by infinitely many mathematicians?

You’re asking these questions because I haven’t yet stated the problem clearly. Those of you who’ve learned a bit more mathematics (maybe in high school, maybe in college) will know another way of stating it.

You know how to sum a finite set of beers. You start with one beer, then one and a half, then one and three-quarters. Sum $N$ beers, and you get

$\sum_{i=0}^N \frac{1}{2^i}$
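This finite sum has a tidy closed form, the standard geometric-series identity:

$\sum_{i=0}^N \frac{1}{2^i} = 2-\frac{1}{2^N}$

so the running total always falls exactly $\frac{1}{2^N}$ short of two; no finite number of beers ever reaches it.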

What does it mean for the sum to limit to two?

Let’s say you just wanted to get close to two. You want to get $\epsilon$-close, where $\epsilon$ (epsilon) is the Greek letter we use for really small numbers.

For every $\epsilon>0$ you choose, no matter how small, I can pick a (finite!) $N$ and get at least that close. That means that, with higher and higher $N$, I can get as close to two as I want.

As it turns out, that’s what it means for a sum to limit to two. It’s saying the same thing, but more clearly, without sneaking in confusing claims about infinity.
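To make that concrete, here is a minimal sketch in Python (the function names are mine, just for illustration): given any $\epsilon$, it hunts for the first $N$ whose partial sum is $\epsilon$-close to two.

```python
def partial_sum(n):
    """Sum 1/2^i for i = 0 to n: the first n+1 mathematicians' beers."""
    return sum(1 / 2**i for i in range(n + 1))

def n_for_epsilon(epsilon):
    """Smallest N whose partial sum is within epsilon of two."""
    n = 0
    while abs(2 - partial_sum(n)) >= epsilon:
        n += 1
    return n

print(n_for_epsilon(0.1))  # 4: the sum 1 + 1/2 + ... + 1/16 is within 0.1 of two
```

Shrink $\epsilon$ and the loop just runs a little longer: every $\epsilon$ gets its $N$, which is all the limit claims.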

These sorts of proofs, with $\epsilon$ (and usually another variable, $\delta$), form what mathematicians view as the foundations of calculus. They’re immortalized in story and song.

And they’re not even the clearest way of stating things! Go down that road, and you find more mathematics: definitions of numbers, foundations of logic, rabbit holes upon rabbit holes, all from the effort to state things clearly.

That’s why I’m not surprised that physicists use mathematics. We have to. We need clarity, if we want to understand the world. And mathematicians, they’re the people who spend their lives trying to state things clearly.

I have a new paper out today, with Jacob Bourjaily, Andrew McLeod, Matthias Wilhelm, Cristian Vergu, and Matthias Volk.

There’s a story I’ve told before on this blog, about a kind of “alphabet” for particle physics predictions. When we try to make a prediction in particle physics, we need to do complicated integrals. Sometimes, these integrals simplify dramatically, in unexpected ways. It turns out we can understand these simplifications by writing the integrals in a sort of “alphabet”, breaking complicated mathematical “periods” into familiar logarithms. If we want to simplify an integral, we can use relations between logarithms like these:

$\log(a b)=\log(a)+\log(b),\quad \log(a^n)=n\log(a)$

to factor our “alphabet” into pieces as simple as possible.
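As a toy illustration of those two identities in action, here is a sketch using SymPy (nothing to do with the actual amplitude machinery, just the logarithm rules themselves):

```python
import sympy as sp

# Symbols declared positive so the log identities hold unconditionally
a, b = sp.symbols('a b', positive=True)

# expand_log applies log(ab) = log(a) + log(b) and log(a^n) = n log(a)
expanded = sp.expand_log(sp.log(a**2 * b))
# expanded is now 2*log(a) + log(b): the "letters" a and b, fully separated
```

Factoring an alphabet is this move run in bulk: break every letter down until each logarithm holds a piece as simple as possible.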

The simpler the alphabet, the more progress you can make. And in the nice toy model theory we’re working with, the alphabets so far have been simple in one key way. Expressed in the right variables, they’re rational: in particular, they contain no square roots.

Would that keep going? Would we keep finding rational alphabets? Or might the alphabets, instead, have square roots?

After some searching, we found a clean test case. There was a calculation we could do with just two Feynman diagrams. All we had to do was subtract one from the other. If they still had square roots in their alphabet, we’d have proven that the nice, rational alphabets eventually had to stop.

So we calculated these diagrams, doing the complicated integrals. And we found they did indeed have square roots in their alphabet, in fact many more than expected. They even had square roots of square roots!

You’d think that would be the end of the story. But square roots are trickier than you’d expect.

Remember that to simplify these integrals, we break them up into an alphabet, and factor the alphabet. What happens when we try to do that with an alphabet that has square roots?

Suppose some letters in our alphabet contain $\sqrt{-5}$. Suppose another letter is the number 9. You might want to factor it like this:

$9=3\times 3$

Simple, right? But what if instead you did this:

$9=(2+\sqrt{-5})\times(2-\sqrt{-5})$

Once you allow $\sqrt{-5}$ in the game, you can factor 9 in two different ways. The central assumption, that you can always just factor your alphabet, breaks down. In mathematical terms, you no longer have a unique factorization domain.
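A quick sanity check of that double factorization, representing numbers $x + y\sqrt{-5}$ as pairs (a toy encoding of my own, not anything from our paper):

```python
# Represent x + y*sqrt(-5) as the pair (x, y)
def mul(p, q):
    (a, b), (c, d) = p, q
    # (a + b*sqrt(-5))(c + d*sqrt(-5)) = (ac - 5bd) + (ad + bc)*sqrt(-5),
    # since sqrt(-5) squared is -5
    return (a * c - 5 * b * d, a * d + b * c)

print(mul((3, 0), (3, 0)))   # (9, 0): 9 = 3 x 3
print(mul((2, 1), (2, -1)))  # (9, 0): 9 = (2 + sqrt(-5))(2 - sqrt(-5))
```

None of the four factors breaks down any further in this number system, which is exactly why the two factorizations are genuinely different.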

Instead, we had to get a lot more mathematically sophisticated, factoring into something called prime ideals. We got that working and started crunching through the square roots in our alphabet. Things simplified beautifully: we started with a result that was ten million terms long, and reduced it to just five thousand. And at the end of the day, after subtracting one integral from the other…

We found no square roots!

After all of our simplifications, all the letters we found were rational. Our nice test case turned out much, much simpler than we expected.

It’s been a long road on this calculation, with a lot of false starts. We were kind of hoping to be the first to find square root letters in these alphabets; instead it looks like another group will beat us to the punch. But we developed a lot of interesting tricks along the way, and we thought it would be good to publish our “null result”. As always in our field, sometimes surprising simplifications are just around the corner.

# Calabi-Yaus in Feynman Diagrams: Harder and Easier Than Expected

I’ve got a new paper up, about the weird geometrical spaces we keep finding in Feynman diagrams.

With Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, and most recently Cristian Vergu and Matthias Volk, I’ve been digging up odd mathematics in particle physics calculations. In several calculations, we’ve found that we need a type of space called a Calabi-Yau manifold. These spaces are often studied by string theorists, who hope they represent how “extra” dimensions of space are curled up. String theorists have found an absurdly large number of Calabi-Yau manifolds, so many that some are trying to sift through them with machine learning. We wanted to know if our situation was quite that ridiculous: how many Calabi-Yaus do we really need?

So we started asking around, trying to figure out how to classify our catch of Calabi-Yaus. And mostly, we just got confused.

It turns out there are a lot of different tools out there for understanding Calabi-Yaus, and most of them aren’t all that useful for what we’re doing. We went in circles for a while trying to understand how to desingularize toric varieties, and other things that will sound like gibberish to most of you. In the end, though, we noticed one small thing that made our lives a whole lot simpler.

It turns out that all of the Calabi-Yaus we’ve found are, in some sense, the same. While the details of the physics vary, the overall “space” is the same in each case. It’s a space we kept finding for our “Calabi-Yau bestiary”, but it turns out one of the “traintrack” diagrams we found earlier can be written in the same way. We found another example too, a “wheel” that seems to be the same type of Calabi-Yau.

We also found many examples that we don’t understand. Add another rung to our “traintrack” and we suddenly can’t write it in the same space. (Personally, I’m quite confused about this one.) Add another spoke to our wheel and we confuse ourselves in a different way.

So while our calculation turned out simpler than expected, we don’t think this is the full story. Our Calabi-Yaus might live in “the same space”, but there are also physics-related differences between them, and these we still don’t understand.

At some point, our abstract included the phrase “this paper raises more questions than it answers”. It doesn’t say that now, but it’s still true. We wrote this paper because, after getting very confused, we ended up able to say a few new things that hadn’t been said before. But the questions we raise are, if anything, more important. We want to inspire new interest in this field, toss out new examples, and get people thinking harder about the geometry of Feynman integrals.

# Communicating the Continuum Hypothesis

I have a friend who is, shall we say, pessimistic about science communication. He thinks it’s too much risk for too little gain: too many misunderstandings, while the most important stuff is so abstract the public will never understand it anyway. When I asked him for an example, he started telling me about a professor who works on the continuum hypothesis.

The continuum hypothesis is about different types of infinity. You might have thought there was only one type of infinity, but in the nineteenth century the mathematician Georg Cantor showed there were more, the most familiar of which are countable and uncountable. If you have a countably infinite number of things, then you can “count” them, “one, two, three…”, assigning a number to each one (even if, since they’re still infinite, you never actually finish). To imagine something uncountably infinite, think of a continuum, like distance on a meter stick, where you can always look at smaller and smaller distances. Cantor proved, using various ingenious arguments, that these two types of infinity are different: the continuum is “bigger” than a mere countable infinity.

Cantor wondered if there could be something in between, a type of infinity bigger than countable and smaller than uncountable. His hypothesis (now called the continuum hypothesis) was that there wasn’t: he thought there was no type of infinity between countable and uncountable.

(If you think you have an easy counterexample, you’re wrong. In particular, fractions are countable.)
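In case “fractions are countable” sounds surprising, here is a sketch of Cantor’s counting in Python (the function name is mine): walk the grid of numerators and denominators diagonal by diagonal, skipping repeats.

```python
from fractions import Fraction
from itertools import islice

def positive_rationals():
    """Yield every positive rational exactly once, diagonal by diagonal."""
    seen = set()
    total = 2  # numerator + denominator is constant along each diagonal
    while True:
        for p in range(1, total):
            f = Fraction(p, total - p)
            if f not in seen:  # Fraction reduces 2/4 to 1/2, so repeats are caught
                seen.add(f)
                yield f
        total += 1

# The start of the count: 1, 1/2, 2, 1/3, 3, 1/4, 2/3, ...
print(list(islice(positive_rationals(), 7)))
```

Every fraction shows up at some finite step of this count, and that is all “countable” means.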

Kurt Gödel didn’t prove the continuum hypothesis, but in 1940 he showed that at least it couldn’t be disproved, which you’d think would be good enough. In 1963, though, another mathematician named Paul Cohen showed that the continuum hypothesis also can’t be proved, at least with mathematicians’ usual axioms.

In science, if something can’t be proved or disproved, then we shrug our shoulders and say we don’t know. Math is different. In math, we choose the axioms. All we have to do is make sure they’re consistent.

What Cohen and Gödel really showed is that mathematics is consistent either way: whether the continuum hypothesis is true or false, the rest of mathematics still works just as well. You can add it as an extra axiom, an add-on that gives you different types of infinity but doesn’t change everyday arithmetic.

You might think that this, finally, would be the end of the story. Instead, it was the beginning of a lively debate that continues to this day. It’s a debate that touches on what mathematics is for, whether infinity is merely a concept or something out there in the world, whether some axioms are right or wrong and what happens when you change them. It involves attempts to codify intuition, arguments about which rules “make sense” that blur the boundary between philosophy and mathematics. It also involves the professor my friend mentioned, W. H. Woodin.

Now, can I explain Woodin’s research to you?

No. I don’t understand it myself; it’s far more abstract and weird than any mathematics I’ve ever touched.

Despite that, I can tell you something about it. I can tell you about the quest he’s on, its history and its relevance, what is and is not at stake. I can get you excited for the same reasons that I’m excited; I can show you it’s important for the same reasons I think it’s important. I can give you the “flavor” of the topic, and broaden your view of the world you live in, one containing a hundred-year conversation about the nature of infinity.

My friend is right that the public will never understand everything. I’ll never understand everything either. But what we can do, what I strive to do, is to appreciate this wide weird world in all its glory. That, more than anything, is why I communicate science.