
Rooting out the Answer

I have a new paper out today, with Jacob Bourjaily, Andrew McLeod, Matthias Wilhelm, Cristian Vergu, and Matthias Volk.

There’s a story I’ve told before on this blog, about a kind of “alphabet” for particle physics predictions. When we try to make a prediction in particle physics, we need to do complicated integrals. Sometimes, these integrals simplify dramatically, in unexpected ways. It turns out we can understand these simplifications by writing the integrals in a sort of “alphabet”, breaking complicated mathematical “periods” into familiar logarithms. If we want to simplify an integral, we can use relations between logarithms like these:

\log(a b)=\log(a)+\log(b),\quad \log(a^n)=n\log(a)

to factor our “alphabet” into pieces as simple as possible.
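To see the game in action, here’s a toy example (made up for this post, not a letter from any real calculation). If one of our letters were \frac{a^2 b}{c}, the rules above would break its logarithm down completely:

\log\left(\frac{a^2 b}{c}\right)=2\log(a)+\log(b)-\log(c)

so the alphabet really only needs the simpler letters a, b, and c.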

The simpler the alphabet, the more progress you can make. And in the nice toy model theory we’re working with, the alphabets so far have been simple in one key way: expressed in the right variables, they’re rational. In particular, they contain no square roots.

Would that keep going? Would we keep finding rational alphabets? Or might the alphabets, instead, have square roots?

After some searching, we found a clean test case: a calculation we could do with just two Feynman diagrams. All we had to do was subtract one from the other. If the difference still had square roots in its alphabet, we’d have proven that the nice, rational alphabets eventually had to stop.

Easy-peasy

So we calculated these diagrams, doing the complicated integrals. And we found they did indeed have square roots in their alphabet, in fact many more than expected. They even had square roots of square roots!

You’d think that would be the end of the story. But square roots are trickier than you’d expect.

Remember that to simplify these integrals, we break them up into an alphabet, and factor the alphabet. What happens when we try to do that with an alphabet that has square roots?

Suppose some of the letters in our alphabet contain \sqrt{-5}, and suppose another letter is the number 9. You might want to factor it like this:

9=3\times 3

Simple, right? But what if instead you did this:

9=(2+\sqrt{-5})\times(2-\sqrt{-5})

Once you allow \sqrt{-5} in the game, you can factor 9 in two different ways. The central assumption, that you can always just factor your alphabet, breaks down. In mathematical terms, you no longer have a unique factorization domain.
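Mathematicians have a standard fix: instead of factoring numbers, you factor ideals, which do break down uniquely into “prime ideals”. Here’s how the fix plays out in a textbook example, numbers of the form a+b\sqrt{-5} (our actual alphabet is more complicated, but the idea is the same). Define two prime ideals:

\mathfrak{p}_1=(3,\,1+\sqrt{-5}),\qquad \mathfrak{p}_2=(3,\,1-\sqrt{-5})

Then

(3)=\mathfrak{p}_1\mathfrak{p}_2,\qquad (2-\sqrt{-5})=\mathfrak{p}_1^2,\qquad (2+\sqrt{-5})=\mathfrak{p}_2^2

and the two competing factorizations of 9 collapse into one and the same factorization of ideals:

(9)=(3)\times(3)=(2+\sqrt{-5})\times(2-\sqrt{-5})=\mathfrak{p}_1^2\,\mathfrak{p}_2^2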

This is the machinery we had to adopt: getting a lot more mathematically sophisticated, and factoring our alphabet into prime ideals rather than numbers. We got that working and started crunching through the square roots in our alphabet. Things simplified beautifully: we started with a result that was ten million terms long, and reduced it to just five thousand. And at the end of the day, after subtracting one integral from the other…

We found no square roots!

After all of our simplifications, all the letters we found were rational. Our nice test case turned out much, much simpler than we expected.

It’s been a long road on this calculation, with a lot of false starts. We were kind of hoping to be the first to find square root letters in these alphabets; instead it looks like another group will beat us to the punch. But we developed a lot of interesting tricks along the way, and we thought it would be good to publish our “null result”. In our field, surprising simplifications are often just around the corner.

Knowing When to Hold/Fold ‘Em in Science

The things one learns from Wikipedia. For example, today I learned that the country song “The Gambler” was selected for preservation by the US Library of Congress as being “culturally, historically, or artistically significant.”

You’ve got to know when to hold ’em, know when to fold ’em,

Know when to walk away, know when to run.

Knowing when to “hold ’em” or “fold ’em” is important in life in general, but it’s particularly important in science.

And not just on poker night

As scientists, we’re often trying to do something no-one else has done before. That’s exciting, but it’s risky too: sometimes whatever we’re trying simply doesn’t work. In those situations, it’s important to recognize when we aren’t making progress, and change tactics. The trick is, we can’t give up too early either: science is genuinely hard, and sometimes when we feel stuck we’re actually close to the finish line. Knowing which is which, when to “hold” and when to “fold”, is an essential skill, and a hard one to learn.

Sometimes, we can figure this out mathematically. Computational complexity theory classifies calculations by how difficult they are, including how long they take. If you can estimate how long a calculation should take, you can decide whether you’ll finish it in a reasonable amount of time. If you just want a rough guess, you can run a smaller version of the calculation, see how long that takes, then extrapolate to how long the full one will take. If you figure out you’re doomed, then it’s time to switch to a more efficient algorithm, or to a different question entirely.
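Here’s a minimal sketch of that kind of back-of-the-envelope estimate in Python. Everything in it is hypothetical: the toy calculation and the assumed cubic scaling are stand-ins for whatever you’re actually running.

```python
import time

def estimate_runtime(calculation, small_n, full_n, power):
    """Time a small instance, then extrapolate assuming ~n**power scaling."""
    start = time.perf_counter()
    calculation(small_n)
    elapsed = time.perf_counter() - start
    # Scale the measured time up by the assumed complexity ratio.
    return elapsed * (full_n / small_n) ** power

def toy_calculation(n):
    # Placeholder for the real computation; this one happens to be O(n^3).
    total = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                total += i * j * k
    return total

# Extrapolate from a quick n=50 run to the full n=1000 problem.
seconds = estimate_runtime(toy_calculation, 50, 1000, power=3)
print(f"Estimated full run: {seconds / 3600:.1f} hours")
```

If the estimate comes out at a century, you fold.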

Sometimes, we don’t just have to consider time, but money as well. If you’re doing an experiment, you have to estimate how much the equipment will cost, and how much it will cost to run it. Experimenters get pretty good at estimating these things, but they still screw up sometimes and run over budget. Occasionally this is fine: LIGO didn’t detect anything in its first eight-year run, but they upgraded the machines and tried again, and won a Nobel prize. Other times it’s a disaster, and money keeps being funneled into a project that never works. Telling the difference is crucial, and it’s something we as a community are still not so good at.

Sometimes we just have to follow our instincts. This is dangerous, because we have a bias (the “sunk cost fallacy”) to stick with something if we’ve already spent a lot of time or money on it. To counteract that, it’s good to cultivate a bias in the opposite direction, which you might call “scientific impatience”. Getting frustrated with slow progress may not seem productive, but it keeps you motivated to search for a better way. Experienced scientists get used to how long certain types of project take. Too little progress, and they look for another option. This can fail, killing a project that was going to succeed, but it can also prevent over-investment in a failing idea. Only a mix of instincts keeps the field moving.

In the end, science is a gamble. Like the song, we have to know when to hold ’em and fold ’em, when to walk away, and when to run an idea as far as it will go. Sometimes it works, and sometimes it doesn’t. That’s science.

Calabi-Yaus in Feynman Diagrams: Harder and Easier Than Expected

I’ve got a new paper up, about the weird geometrical spaces we keep finding in Feynman diagrams.

With Jacob Bourjaily, Andrew McLeod, and Matthias Wilhelm, and most recently Cristian Vergu and Matthias Volk, I’ve been digging up odd mathematics in particle physics calculations. In several calculations, we’ve found that we need a type of space called a Calabi-Yau manifold. These spaces are often studied by string theorists, who hope they represent how “extra” dimensions of space are curled up. String theorists have found an absurdly large number of Calabi-Yau manifolds, so many that some are trying to sift through them with machine learning. We wanted to know if our situation was quite that ridiculous: how many Calabi-Yaus do we really need?

So we started asking around, trying to figure out how to classify our catch of Calabi-Yaus. And mostly, we just got confused.

It turns out there are a lot of different tools out there for understanding Calabi-Yaus, and most of them aren’t all that useful for what we’re doing. We went in circles for a while trying to understand how to desingularize toric varieties, and other things that will sound like gibberish to most of you. In the end, though, we noticed one small thing that made our lives a whole lot simpler.

It turns out that all of the Calabi-Yaus we’ve found are, in some sense, the same. While the details of the physics vary, the overall “space” is the same in each case. It’s a space we kept encountering in our “Calabi-Yau bestiary”, and it turns out one of the “traintrack” diagrams we found earlier can be written in the same way. We found another example too, a “wheel” that seems to be the same type of Calabi-Yau.

And that actually has a sensible name

We also found many examples that we don’t understand. Add another rung to our “traintrack” and we suddenly can’t write it in the same space. (Personally, I’m quite confused about this one.) Add another spoke to our wheel and we confuse ourselves in a different way.

So while our calculation turned out simpler than expected, we don’t think this is the full story. Our Calabi-Yaus might live in “the same space”, but there are also physics-related differences between them, and these we still don’t understand.

At some point, our abstract included the phrase “this paper raises more questions than it answers”. It doesn’t say that now, but it’s still true. We wrote this paper because, after getting very confused, we ended up able to say a few new things that hadn’t been said before. But the questions we raise are if anything more important. We want to inspire new interest in this field, toss out new examples, and get people thinking harder about the geometry of Feynman integrals.

The Changing Meaning of “Explain”

This is another “explanations are weird” post.

I’ve been reading a biography of James Clerk Maxwell, who formulated the theory of electromagnetism. Nowadays, we think about the theory in terms of fields: we think there is an “electromagnetic field”, filling space and time. At the time, though, this was a very unusual way to think, and not even Maxwell was comfortable with it. He felt that he had to present a “physical model” to justify the theory: a picture of tiny gears and ball bearings, somehow occupying the same space as ordinary matter.

Bang! Bang! Maxwell’s silver bearings…

Maxwell didn’t think space was literally filled with ball bearings. He did, however, believe he needed a picture that was sufficiently “physical”, that wasn’t just “mathematics”. Later, when he wrote down a theory that looked more like modern field theory, he still thought of it as provisional: a way to use Lagrange’s mathematics to ignore the unknown “real physical mechanism” and just describe what was observed. To Maxwell, field theory was a description, but not an explanation.

This attitude surprised me. I would have thought physicists in Maxwell’s day could have accepted fields. After all, they had accepted Newton.

In his time, there was quite a bit of controversy about whether Newton’s theory of gravity was “physical”. While rival models described planets driven around by whirlpools, Newton simply described the mathematics of the force, an “action at a distance”. Hypotheses non fingo, “I feign no hypotheses”, he famously insisted: he wasn’t saying anything about why gravity worked, merely how it worked. Over time, as the whirlpool models continued to fail, people gradually accepted that gravity could be explained as action at a distance.

You’d think that this would make them able to accept fields as well. Instead, by Maxwell’s day the options for a “physical explanation” had simply been enlarged by one. Now instead of just explaining something with mechanical parts, you could explain it with action at a distance as well. Indeed, many physicists tried to explain electricity and magnetism with some sort of gravity-like action at a distance. They failed, though. You really do need fields.

The author of the biography is an engineer, not a physicist, so I find his perspective unusual at times. After discussing Maxwell’s discomfort with fields, the author says that today physicists are different: instead of insisting on a physical explanation, they accept that there are some things they just cannot know.

At first, I wanted to object: we do have physical explanations, we explain things with fields! We have electromagnetic fields and electron fields, gluon fields and Higgs fields, even a gravitational field for the shape of space-time. These fields aren’t papering over some hidden mechanism, they are the mechanism!

Are they, though?

Fields aren’t quite like the whirlpools and ball bearings of historical physicists. Sometimes fields that look different are secretly the same: the two “different explanations” will give the same result for any measurement you could ever perform. In my area of physics, we try to avoid this by focusing on the measurements instead, building as much as we can out of observable quantities instead of fields. In effect we’re going back yet another layer, another dose of hypotheses non fingo.

Physicists still ask for “physical explanations”, and still worry that some picture might be “just mathematics”. But what that means has changed, and continues to change. I don’t think we have a common standard right now, at least nothing as specific as “mechanical parts or action at a distance, and nothing else”. Somehow, we still care about whether we’ve given an explanation, or just a description, even though we can’t define what an explanation is.

Congratulations to Simon Caron-Huot and Pedro Vieira for the New Horizons Prize!

The 2020 Breakthrough Prizes were announced last week, awards in physics, mathematics, and life sciences. The physics prize was awarded to the Event Horizon Telescope, with the $3 million award to be split among the 347 members of the collaboration. The Breakthrough Prize Foundation also announced this year’s New Horizons prizes, six smaller awards of $100,000 each to younger researchers in physics and math. One of those awards went to two people I know, Simon Caron-Huot and Pedro Vieira. Extremely specialized as I am, I hope no-one minds if I ignore all the other awards and talk about them.

The award for Caron-Huot and Vieira is “For profound contributions to the understanding of quantum field theory.” Indeed, both Simon and Pedro have built their reputations as explorers of quantum field theories, the kind of theories we use in particle physics. Both have found surprising behavior in these theories, where a theory people thought they understood did something quite unexpected. Both also developed new calculation methods, using these theories to compute things that were thought to be out of reach. But this is all rather vague, so let me be a bit more specific about each of them:

Simon Caron-Huot is known for his penetrating and mysterious insight. He has the ability to take a problem and think about it in a totally original way, coming up with a solution that no-one else could have thought of. When I first worked with him, he took a calculation that the rest of us would have taken a month to do and did it by himself in a week. His insight seems to come in part from familiarity with the physics literature, forgotten papers from the ’60s and ’70s that turn out surprisingly useful today. Largely, though, his insight is his own, an inimitable style that few can anticipate. His interests are broad, from exotic toy models to well-tested theories that describe the real world, covering a wide range of methods and approaches. Physicists tend to describe each other in terms of standard “virtues”: depth and breadth, knowledge and originality. Simon somehow seems to embody all of them.

Pedro Vieira is mostly known for his work with integrable theories. These are theories where if one knows the right trick one can “solve” the theory exactly, rather than using the approximations that physicists often rely on. Pedro was a mentor to me when I was a postdoc at the Perimeter Institute, and one thing he taught me was to always expect more. When calculating with computer code I would wait hours for a result, while Pedro would ask “why should it take hours?”, and if we couldn’t propose a reason would insist we find a quicker way. This attitude paid off in his research, where he has used integrable theories to calculate things others would have thought out of reach. His Pentagon Operator Product Expansion, or “POPE”, uses these tricks to calculate probabilities that particles collide, and more recently he pushed further to other calculations with a hexagon-based approach (which one might call the “HOPE”). Now he’s working on “bootstrapping” up complicated theories from simple physical principles, once again asking “why should this be hard?”

In Defense of the Streetlight

If you read physics blogs, you’ve probably heard this joke before:

A policeman sees a drunk man searching for something under a streetlight and asks what the drunk has lost. He says he lost his keys and they both look under the streetlight together. After a few minutes the policeman asks if he is sure he lost them here, and the drunk replies, no, and that he lost them in the park. The policeman asks why he is searching here, and the drunk replies, “this is where the light is”.

The drunk’s line of thinking has a name, the streetlight effect, and while it may seem ridiculous it’s a common error, even among experts. When it gets too tough to research something, scientists will often be tempted by an easier problem even if it has little to do with the original question. After all, it’s “where the light is”.

Physicists get accused of this all the time. Dark matter could be completely undetectable on Earth, but physicists still build experiments to search for it. Our universe appears to be curved one way, but string theory makes it much easier to study universes curved the other way, so physicists write a lot of nice proofs about a universe we don’t actually inhabit. In my own field, we spend most of our time studying a very nice theory that we know can’t describe the real world.

I’m not going to defend this behavior in general. There are real cases where scientists trick themselves into thinking they can solve an easy problem when they need to solve a hard one. But there is a crucial difference between scientists and drunkards looking for their keys, one that makes this behavior a lot more reasonable: scientists build technology.

As scientists, we can’t just grope around in the dark for our keys. The spaces we’re searching, from the space of all theories of gravity to actual outer space, are much too vast to search randomly. We need new ideas, new mathematics or new equipment, to do the search properly. If we were the drunkard of the story, we’d need to invent night-vision goggles.

Is the light better here, or is it just me?

Suppose you wanted to design new night-vision goggles, to search for your keys in the park. You could try to build them in the dark, but you wouldn’t be able to see what you were doing: you’d lose pieces, miss screws, and break lenses. Much better to build the goggles under that convenient streetlight.

Of course, if you build and test your prototype goggles under the streetlight, you risk that they aren’t good enough for the dark. You’ll have calibrated them in an unrealistic case. In all likelihood, you’ll have to go back and fix your goggles, tweaking them as you go, and you’ll run into the same problem: you can’t see what you’re doing in the dark.

At that point, though, you have an advantage: you now know how to build night-vision goggles. You’ve practiced building goggles in the light, and now even if the goggles aren’t good enough, you remember how you put them together. You can tweak the process, modify your goggles, and make something good enough to find your keys. You’re good enough at making goggles that you can modify them now, even in the dark.

Sometimes scientists really are like the drunk, searching under the most convenient streetlight. Sometimes, though, scientists are working where the light is for a reason. Instead of wasting their time lost in the dark, they’re building new technology and practicing new methods, getting better and better at searching until, when they’re ready, they can go back and find their keys. Sometimes, the streetlight is worth it.

“X Meets Y” Conferences

Most conferences focus on a specific sub-field. If you call a conference “Strings” or “Amplitudes”, people know what to expect. Likewise if you focus on something more specific, say Elliptic Integrals. But what if your conference is named after two sub-fields?

These conferences, with names like “QCD Meets Gravity” and “Scattering Amplitudes and the Conformal Bootstrap”, try to connect two different sub-fields together. I’ll call them “X Meets Y” conferences.

The most successful “X Meets Y” conferences involve two sub-fields that have been working together for quite some time. At that point, you don’t just have “X” researchers and “Y” researchers, but “X and Y” researchers, people who work on the connection between both topics. These people can glue a conference together, showing the separate “X” and “Y” researchers what “X and Y” research looks like. At a conference like that, speakers have a clear idea of what to talk about: the “X” researchers know how to talk to the “Y” researchers, and vice versa, and the organizers can invite speakers who they know can talk to both groups.

If the sub-fields have less history of collaboration, “X Meets Y” conferences become trickier. You need at least a few “X and Y” researchers (or at least aspiring “X and Y” researchers) to guide the way. Even if most of the “X” researchers don’t know how to talk to “Y” researchers, the “X and Y” researchers can give suggestions, telling “X” which topics would be most interesting to “Y” and vice versa. With that kind of guidance, “X Meets Y” conferences can inspire new directions of research, opening one field up to the tools of another.

The biggest risk in an “X Meets Y” conference, that becomes more likely the fewer “X and Y” researchers there are, is that everyone just gives their usual talks. The “X” researchers talk about their “X”, and the “Y” researchers talk about their “Y”, and both groups nod politely and go home with no new ideas whatsoever. Scientists are fundamentally lazy creatures. If we already have a talk written, we’re tempted to use it, even if it doesn’t quite fit the occasion. Counteracting that is a tough job, and one that isn’t always feasible.

“X Meets Y” conferences can be very productive, the beginning of new interdisciplinary ideas. But they’re certainly hard to get right. Overall, they’re one of the trickier parts of the social side of science.